doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1606.06737 | 48 | separation. Without requiring any knowledge about the true entropy of the input text (which is famously NP-hard to compute), this figure immediately shows that the LSTM-RNN we trained is performing sub-optimally; it is not able to capture all the long-term dependencies found in the training data.
As a comparison, we also calculated the bigram transition matrix P(X3X4 | X1X2) from the data and used it to hallucinate 1 MB of text. Despite the fact that this higher-order Markov model needs ∼ 10³ more parameters than our LSTM-RNN, it captures less than a fifth of the mutual information captured by the LSTM-RNN even at modest separations ≳ 5. This phenomenon is related to a classic result in the theory of formal languages: a context-free grammar
In summary, Figure 3 shows both the successes and shortcomings of machine learning. On the one hand, LSTM-RNNs can capture long-range correlations much more efficiently than Markovian models; on the other hand, they cannot match the two-point functions of training data, never mind higher-order statistics!
One might wonder how the lack of mutual information at large scales for the bigram Markov model is manifested in the hallucinated text. Below we give a line from the Markov hallucinations:
9 | 1606.06737#48 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
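The diagnostic described in the chunk above (mutual information between two characters as a function of their separation) can be estimated directly from pair counts. The sketch below is ours, not code from the paper; the function name and the use of enwik8 as input are illustrative assumptions, and the naive plug-in estimator is biased upward once counts get sparse at large separations.

```python
import numpy as np
from collections import Counter

def mutual_info_at_distance(text, d):
    """Plug-in estimate (in bits) of I(X, Y) between characters d positions apart."""
    pairs = Counter(zip(text, text[d:]))            # joint counts of (x_i, x_{i+d})
    total = sum(pairs.values())
    joint = {xy: c / total for xy, c in pairs.items()}
    px, py = Counter(), Counter()                   # marginals of the two pair members
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * np.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

text = open("enwik8", encoding="latin-1").read()    # any large corpus works here
for d in (1, 2, 5, 10, 100, 1000):
    print(d, mutual_info_at_distance(text, d))
```

Comparing such curves for the training corpus and for text hallucinated by a model is exactly the kind of two-point diagnostic the chunk describes.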
1606.06565 | 49 | 4When implementing hierarchical RL, we may find that subagents take actions that don't serve the top-level agent's real goals, in the same way that a human may be concerned that the top-level agent's actions don't serve the human's real goals. This is an intriguing analogy that suggests that there may be fruitful parallels between hierarchical RL and several aspects of the safety problem.
13
# 6 Safe Exploration | 1606.06565#49 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 49 | One might wonder how the lack of mutual information at large scales for the bigram Markov model is manifested in the hallucinated text. Below we give a line from the Markov hallucinations:
9
[Figure 4 plot: mutual information (log scale, roughly 10^-6 to 0.1) versus distance between symbols d(X,Y), from 1 to 1000; caption follows.]
FIG. 4: Diagnosing different models by hallucinating text and then measuring the mutual information as a function of separation. The red line is the mutual information of enwik8, a 100 MB sample of English Wikipedia. In shaded blue is the mutual information of hallucinated Wikipedia from a trained LSTM with 3 layers of size 256. We plot in solid black the mutual information of a Markov process on single characters, which we compute exactly. (This would correspond to the mutual information of hallucinations in the limit where the length of the hallucinations goes to infinity.) This curve shows a sharp exponential decay after a distance of ∼ 10, in agreement with our theoretical predictions. We also measured the mutual information for hallucinated text on a Markov process for bigrams, which still underperforms the LSTMs in long-ranged correlations, despite having ∼ 10³ more parameters than | 1606.06737#49 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
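For the single-character Markov process in FIG. 4, the mutual information at separation d can be computed exactly rather than estimated from samples. A minimal sketch under our own conventions (row-stochastic transition matrix, chain started from its stationary distribution); it is not the authors' code:

```python
import numpy as np

def markov_mutual_info(M, d):
    """Exact I(X_0; X_d) in nats for a row-stochastic Markov matrix M,
    assuming the chain starts in its stationary distribution."""
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])       # stationary distribution
    pi /= pi.sum()
    joint = pi[:, None] * np.linalg.matrix_power(M, d) # P(X_0 = a, X_d = b)
    indep = np.outer(pi, pi)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / indep[mask])))

M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])                        # toy 3-state chain
for d in (1, 2, 5, 10, 20, 40):
    print(d, markov_mutual_info(M, d))                 # decays exponentially in d
```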
1606.06565 | 50 | All autonomous learning agents need to sometimes engage in exploration: taking actions that don't seem ideal given current information, but which help the agent learn about its environment. However, exploration can be dangerous, since it involves taking actions whose consequences the agent doesn't understand well. In toy environments, like an Atari video game, there's a limit to how bad these consequences can be: maybe the agent loses some score, or runs into an enemy and suffers some damage. But the real world can be much less forgiving. Badly chosen actions may destroy the agent or trap it in states it can't get out of. Robot helicopters may run into the ground or damage property; industrial control systems could cause serious issues. Common exploration policies such as epsilon-greedy [150] or R-max [31] explore by choosing an action at random or viewing unexplored actions optimistically, and thus make no attempt to avoid these dangerous situations. More sophisticated exploration strategies that adopt a coherent exploration policy over extended temporal scales [114] could actually have even greater potential for harm, since a coherently chosen bad policy may be more insidious than mere random actions. Yet intuitively it seems | 1606.06565#50 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
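To make the point about undirected exploration concrete, the standard epsilon-greedy rule fits in a few lines, and nothing in it distinguishes a mildly suboptimal action from a catastrophic one. This is a generic textbook sketch, not code from the paper:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon pick a uniformly random action, otherwise the greedy one.
    The random branch is just as happy to pick a dangerous action as a harmless one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```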
1606.06737 | 50 | [[computhourgist, Flagesernmenserved whirequotes or thand dy excommentaligmaktophy as its:Fran at ||<If ISBN 088;&ategor and on of to [[Prefung]]â and at them rector>
This can be compared with an example from the LSTM RNN:
Proudknow pop groups at Oxford - [http://ccw.com/faqsisdaler/cardiffstwander --helgar.jpg] and Cape Normans's first attacks Cup rigid (AM).
Despite using many fewer parameters, the LSTM manages to produce a realistic looking URL and is able to close brackets correctly [53], something that the Markov model struggles with.
Although great challenges remain to accurately model natural languages, our results at least allow us to improve on some earlier answers to key questions we sought to address:
1. Why is natural language so hard? The old answer was that language is uniquely human. Our new answer is that at least part of the difficulty is that natural language is a critical system, with long-ranged correlations that are difficult for machines to learn. | 1606.06737#50 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 51 | [114] could actually have even greater potential for harm, since a coherently chosen bad policy may be more insidious than mere random actions. Yet intuitively it seems like it should often be possible to predict which actions are dangerous and explore in a way that avoids them, even when we don't have that much information about the environment. For example, if I want to learn about tigers, should I buy a tiger, or buy a book about tigers? It takes only a tiny bit of prior knowledge about tigers to determine which option is safer. | 1606.06565#51 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 51 | 2. Why are machines bad at natural languages, and why are they good? The old answer is that Markov models are simply not brain/human-like, whereas neural nets are more brain-like and hence better. Our new answer is that Markov models or other 1-dimensional models cannot exhibit critical behavior, whereas neural nets and other deep models (where an extra hidden dimension is formed by the layers of the network) are able to exhibit critical behavior.
3. How can we know when machines are bad or good? The old answer is to compute the loss function. Our new answer is to also compute the mutual information as a function of separation, which can immediately show how well the model is doing at capturing correlations on different scales.
Future studies could include generalizing our theorems to more complex formal languages such as Merge Grammars. | 1606.06737#51 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 52 | In practice, real-world RL projects can often avoid these issues by simply hard-coding an avoidance of catastrophic behaviors. For instance, an RL-based robot helicopter might be programmed to override its policy with a hard-coded collision avoidance sequence (such as spinning its propellers to gain altitude) whenever it's too close to the ground. This approach works well when there are only a few things that could go wrong, and the designers know all of them ahead of time. But as agents become more autonomous and act in more complex domains, it may become harder and harder to anticipate every possible catastrophic failure. The space of failure modes for an agent running a power grid or a search-and-rescue operation could be quite large. Hard-coding against every possible failure is unlikely to be feasible in these cases, so a more principled approach to preventing harmful exploration seems essential. Even in simple cases like the robot helicopter, a principled approach would simplify system design and reduce the need for domain-specific engineering.
There is a sizable literature on such safe exploration; it is arguably the most studied of the problems we discuss in this document. [55, 118] provide thorough reviews of this literature, so we don't review it extensively here, but simply describe some general routes that this research has taken, as well as suggesting some directions that might have increasing relevance as RL systems expand in scope and capability. | 1606.06565#52 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
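The hard-coded-override pattern from the chunk above can be written as a thin wrapper around the learned policy. The predicate and recovery names below are placeholders we introduce for illustration, not an API from the paper:

```python
def act(state, learned_policy, unsafe, recovery_action):
    """Run the learned policy, but let a hand-written safety predicate pre-empt it
    with a fixed recovery action (e.g. spin the propellers to gain altitude)."""
    if unsafe(state):
        return recovery_action(state)
    return learned_policy(state)
```

As the chunk notes, this works only as long as the designers can enumerate the failure modes that `unsafe` must cover.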
1606.06737 | 52 | Future studies could include generalizing our theorems to more complex formal languages such as Merge Grammars.
Acknowledgments: This work was supported by the Foundational Questions Institute http://fqxi.org. The authors wish to thank Noam Chomsky and Greg Lessard for valuable comments on the linguistic aspects of this work, Taiga Abe, Meia Chita-Tegmark, Hanna Field, Esther Goldberg, Emily Mu, John Peurifoi, Tomaso Poggio, Luis Seoane, Leon Shen, David Theurel, Cindy Zhao, and two anonymous referees for helpful discussions and encouragement, Michelle Xu for help acquiring genome data and the Center for Brains Minds and Machines (CMBB) for hospitality.
# Appendix A: Properties of rational mutual information
In this appendix, we prove the following elementary properties of rational mutual information:
1. Symmetry: for any two random variables X and Y, I_R(X, Y) = I_R(Y, X). The proof is straightforward:
$I_R(X,Y) = \sum_{a,b} \frac{P(X=a,\,Y=b)^2}{P(X=a)\,P(Y=b)} - 1 = \sum_{b,a} \frac{P(Y=b,\,X=a)^2}{P(Y=b)\,P(X=a)} - 1 = I_R(Y,X). \qquad (A1)$ | 1606.06737#52 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
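Properties 1-3 of the rational mutual information (symmetry, the bound relative to Shannon mutual information in nats, and non-negativity) are easy to sanity-check numerically. The snippet below is our own check on a random joint distribution, not code from the paper:

```python
import numpy as np

def rational_mi(P):
    """I_R(X,Y) = <P(x,y) / (P(x)P(y))> - 1 for a joint probability table P."""
    px, py = P.sum(axis=1), P.sum(axis=0)
    return float(np.sum(P * P / np.outer(px, py)) - 1.0)

def shannon_mi(P):
    """I(X,Y) in nats, i.e. logarithm base B = e."""
    ratio = P / np.outer(P.sum(axis=1), P.sum(axis=0))
    return float(np.sum(P[P > 0] * np.log(ratio[P > 0])))

rng = np.random.default_rng(0)
P = rng.random((4, 4)); P /= P.sum()                     # a random joint distribution
assert abs(rational_mi(P) - rational_mi(P.T)) < 1e-12    # property 1: symmetry
assert rational_mi(P) >= shannon_mi(P) >= 0.0            # properties 2-3 (in nats)
```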
1606.06565 | 53 | • Risk-Sensitive Performance Criteria: A body of existing literature considers changing the optimization criteria from expected total reward to other objectives that are better at preventing rare, catastrophic events; see [55] for a thorough and up-to-date review of this literature. These approaches involve optimizing worst-case performance, or ensuring that the probability of very bad performance is small, or penalizing the variance in performance. These methods have not yet been tested with expressive function approximators such as deep neural networks, but this should be possible in principle for some of the methods, such as [153], which proposes a modification to policy gradient algorithms to optimize a risk-sensitive criterion. There is also recent work studying how to estimate uncertainty in value functions that are represented by deep neural networks [114, 53]; these ideas could be incorporated into risk-sensitive RL algorithms. Another line of work relevant to risk sensitivity uses off-policy estimation to perform a policy update that is good with high probability [156].
• Use Demonstrations: Exploration is necessary to ensure that the agent finds the states that are necessary for near-optimal performance. We may be able to avoid the need for exploration
14 | 1606.06565#53 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
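Two of the risk-sensitive criteria mentioned in the bullet above (penalizing the variance of returns, and guarding against rare very bad outcomes) can be expressed in a few lines once sampled episode returns are available. This is a generic illustration under names we chose, not an implementation of any cited method:

```python
import numpy as np

def risk_sensitive_scores(returns, risk_weight=1.0, alpha=0.05):
    """Variance-penalized score and CVaR_alpha (the mean of the worst alpha-fraction
    of returns) for a batch of sampled episode returns."""
    r = np.asarray(returns, dtype=float)
    variance_penalized = r.mean() - risk_weight * r.std()
    k = max(1, int(alpha * len(r)))
    cvar = float(np.sort(r)[:k].mean())
    return variance_penalized, cvar

# A policy whose average return looks fine but which occasionally fails badly
# scores poorly under both surrogates:
print(risk_sensitive_scores([10, 12, 11, 9, -100, 10, 13, 12, 11, 10]))
```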
1606.06737 | 53 | 2. Upper bound to mutual information: The logarithm function satisfies ln(1 + x) ≤ x with equality if and only if (iff) x = 0. Therefore setting
10
x = P(a,b)/(P(a)P(b)) − 1 gives
$I(X,Y) = \left\langle \log_B \frac{P(a,b)}{P(a)P(b)} \right\rangle = \frac{1}{\ln B}\left\langle \ln\!\left[1 + \left(\frac{P(a,b)}{P(a)P(b)} - 1\right)\right] \right\rangle \le \frac{1}{\ln B}\left\langle \frac{P(a,b)}{P(a)P(b)} - 1 \right\rangle = \frac{I_R(X,Y)}{\ln B}. \qquad (A2)$
Hence the rational mutual information I_R ≥ I ln B, with equality iff I = 0 (or simply I_R ≥ I if we use the natural logarithm, base B = e).
It follows from the above inequality that I_R(X, Y) ≥ 0 with equality iff P(a, b) = P(a)P(b), since I_R = I = 0 iff P(a, b) = P(a)P(b). Note that this short proof is only possible because of the information inequality I ≥ 0. From the definition of I_R, it is only obvious that I_R ≥ −1; information theory gives a much tighter bound. Our findings 1-3 can be summarized as follows: | 1606.06737#53 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 54 | 14
altogether if we instead use inverse RL or apprenticeship learning, where the learning algorithm is provided with expert trajectories of near-optimal behavior [128, 2]. Recent progress in inverse reinforcement learning using deep neural networks to learn the cost function or policy [51] suggests that it might also be possible to reduce the need for exploration in advanced RL systems by training on a small set of demonstrations. Such demonstrations could be used to create a baseline policy, such that even if further learning is necessary, exploration away from the baseline policy can be limited in magnitude. | 1606.06565#54 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 54 | $I_R(X, Y) = I_R(Y, X) \ge I(X, Y) \ge 0, \qquad (A3)$
where both equalities occur iff p(X, Y) = p(X)p(Y). It is impossible for one of the last two relations to be an equality while the other is an inequality.
4. Generalization. Note that if we view the mutual information as the divergence between two joint probability distributions, we can generalize the notion of rational mutual information to that of rational divergence:
$D_R(p\|q) = \left\langle \frac{p}{q} \right\rangle - 1, \qquad (A4)$
where the expectation value is taken with respect to the "true" probability distribution p. This is a special case of what is known in the literature as α-divergence [54].
The α-divergence is itself a special case of so-called f-divergences [55-57]:
$D_f(p\|q) = \sum_i p_i\, f(q_i/p_i), \qquad (A5)$
where $D_R(p\|q)$ corresponds to $f(x) = \frac{1}{x} - 1$. | 1606.06737#54 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
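The rational divergence defined in (A4) can be checked numerically against the KL divergence, which it upper-bounds (as the next chunk of this paper notes). A small check of our own:

```python
import numpy as np

def rational_divergence(p, q):
    """D_R(p||q) = E_p[p/q] - 1, the f-divergence with f(x) = 1/x - 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * p / q) - 1.0)

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
assert rational_divergence(p, q) >= kl_divergence(p, q) >= 0.0
assert abs(rational_divergence(p, p)) < 1e-12            # zero iff the distributions match
```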
1606.06565 | 55 | • Simulated Exploration: The more we can do our exploration in simulated environments instead of the real world, the less opportunity there is for catastrophe. It will probably always be necessary to do some real-world exploration, since many complex situations cannot be perfectly captured by a simulator, but it might be possible to learn about danger in simulation and then adopt a more conservative "safe exploration" policy when acting in the real world. Training RL agents (particularly robots) in simulated environments is already quite common, so advances in "exploration-focused simulation" could be easily incorporated into current workflows. In systems that involve a continual cycle of learning and deployment, there may be interesting research problems associated with how to safely incrementally update policies given simulation-based trajectories that imperfectly represent the consequences of those policies as well as reliably accurate off-policy trajectories (e.g. "semi-on-policy" evaluation). | 1606.06565#55 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 55 | $D_f(p\|q) = \sum_i p_i\, f(q_i/p_i), \qquad (A5)$
where $D_R(p\|q)$ corresponds to $f(x) = \frac{1}{x} - 1$.
Note that as it is written, p could be any probability measure on either a discrete or continuous space. The above results can be trivially modified to show that D_R(p||q) ≥ D_KL(p||q) and hence D_R(p||q) ≥ 0, with equality iff p = q.
# Appendix B: General proof for Markov processes
In this appendix, we drop the assumptions of non-degeneracy, irreducibility and non-periodicity made in the main body of the paper where we proved that Markov processes lead to exponential decay.
# 1. The degenerate case | 1606.06737#55 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 56 | • Bounded Exploration: If we know that a certain portion of state space is safe, and that even the worst action within it can be recovered from or bounded in harm, we can allow the agent to run freely within those bounds. For example, a quadcopter sufficiently far from the ground might be able to explore safely, since even if something goes wrong there will be ample time for a human or another policy to rescue it. Better yet, if we have a model, we can extrapolate forward and ask whether an action will take us outside the safe state space. Safety can be defined as remaining within an ergodic region of the state space such that actions are reversible [104, 159], or as limiting the probability of huge negative reward to some small value [156]. Yet another approach uses separate safety and performance functions and attempts to obey constraints on the safety function with high probability [22]. As with several of the other directions, applying or adapting these methods to recently developed advanced RL systems could be a promising area of research. This idea seems related to H-infinity control [20] and regional verification [148]. | 1606.06565#56 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
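The "extrapolate forward with a model" idea in the Bounded Exploration bullet above can be sketched as an action filter. Every callable below (the model, the recovery policy, the safe-set predicate) is a placeholder we introduce for illustration; none of them is an API from the paper or from the cited works:

```python
def is_recoverable(state, action, model, recovery_policy, in_safe_set, horizon=10):
    """Accept `action` only if, after taking it once in the learned model and then
    following a trusted recovery policy, the predicted trajectory stays inside the
    designer-specified safe region for `horizon` steps."""
    s = model(state, action)
    for _ in range(horizon):
        if not in_safe_set(s):
            return False
        s = model(s, recovery_policy(s))
    return in_safe_set(s)

def filter_safe(state, actions, **kwargs):
    """Restrict an exploration policy's choices to the recoverable subset."""
    return [a for a in actions if is_recoverable(state, a, **kwargs)]
```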
1606.06737 | 56 | # 1. The degenerate case
First, we consider the case where the Markov matrix M has degenerate eigenvalues. In this case, we cannot guarantee that M can be diagonalized. However, any complex matrix can be put into Jordan normal form. In Jordan normal form, a matrix is block diagonal, with each d × d block corresponding to an eigenvalue with degeneracy d. These blocks have a particularly simple form, with block i having λi on the diagonal and ones right above the diagonal. For example, if there are only three distinct eigenvalues and λ2 is threefold degenerate, the Jordan form of M would be
$B^{-1} M B = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & \lambda_2 & 1 & 0 & 0 \\ 0 & 0 & \lambda_2 & 1 & 0 \\ 0 & 0 & 0 & \lambda_2 & 0 \\ 0 & 0 & 0 & 0 & \lambda_3 \end{pmatrix}. \qquad (B1)$
Note that the largest eigenvalue is unique and equal to 1 for all irreducible and aperiodic M. In this example, the matrix power M^τ is
$B^{-1} M^\tau B = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & \lambda_2^\tau & \binom{\tau}{1}\lambda_2^{\tau-1} & \binom{\tau}{2}\lambda_2^{\tau-2} & 0 \\ 0 & 0 & \lambda_2^\tau & \binom{\tau}{1}\lambda_2^{\tau-1} & 0 \\ 0 & 0 & 0 & \lambda_2^\tau & 0 \\ 0 & 0 & 0 & 0 & \lambda_3^\tau \end{pmatrix} \qquad (B2)$ | 1606.06737#56 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
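The behavior that (B1)-(B2) encode, namely that M^τ approaches a rank-one matrix of stationary probabilities with a residual controlled by the second eigenvalue (times a polynomial factor when that eigenvalue is degenerate), is easy to observe numerically. A small illustration of ours, using a row-stochastic convention that may differ from the paper's:

```python
import numpy as np

M = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.10, 0.10, 0.80]])                     # row-stochastic toy chain
lam2 = sorted(np.abs(np.linalg.eigvals(M)), reverse=True)[1]

w, v = np.linalg.eig(M.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu /= mu.sum()                                         # stationary distribution

for tau in (1, 5, 10, 20, 40):
    residual = np.abs(np.linalg.matrix_power(M, tau) - mu[None, :]).max()
    # The residual and |lambda_2|^tau shrink at the same exponential rate.
    print(tau, residual, lam2 ** tau)
```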
1606.06565 | 57 | • Trusted Policy Oversight: If we have a trusted policy and a model of the environment, we can limit exploration to actions the trusted policy believes we can recover from. It's fine to dive towards the ground, as long as we know we can pull out of the dive in time.
• Human Oversight: Another possibility is to check potentially unsafe actions with a human. Unfortunately, this problem runs into the scalable oversight problem: the agent may need to make too many exploratory actions for human oversight to be practical, or may need to make them too fast for humans to judge them. A key challenge to making this work is having the agent be a good judge of which exploratory actions are genuinely risky, versus which are safe actions it can unilaterally take; another challenge is finding appropriately safe actions to take while waiting for the oversight.
Potential Experiments: It might be helpful to have a suite of toy environments where unwary agents can fall prey to harmful exploration, but there is enough pattern to the possible catastrophes that clever agents can predict and avoid them. To some extent this feature already exists in autonomous helicopter competitions and Mars rover simulations [104], but there is always the risk of catastrophes being idiosyncratic, such that trained agents can overfit to them. A truly broad set of environments, containing conceptually distinct pitfalls that can cause unwary agents to receive
15 | 1606.06565#57 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 57 | In the general case, raising a matrix to an arbitrary power will yield a matrix which is still block diagonal, with each block being an upper triangular matrix. The important point is that in block i, every entry scales ∝ λ_i^τ, up to a combinatorial factor. Each combinatorial factor grows only polynomially with τ, with the degree of the polynomials in the ith block bounded by the multiplicity of λi, minus one.
Using this Jordan decomposition, we can replicate equation (7) and write
$M^\tau_{ij} = \mu_i + \lambda_2^\tau A_{ij}. \qquad (B3)$
There are two cases, depending on whether the second eigenvalue λ2 is degenerate or not. If not, then the equation
$\lim_{\tau \to \infty} A_{ij} = B_{i2} B^{-1}_{2j} \qquad (B4)$
11
still holds, since for i ≥ 3, (λi/λ2)^τ decays faster than any polynomial of finite degree. On the other hand, if the second eigenvalue is degenerate with multiplicity m2, we instead define A with the combinatorial factor removed:
$M^\tau_{ij} = \mu_i + \binom{\tau}{m_2}\,\lambda_2^\tau\, A_{ij}. \qquad (B5)$ | 1606.06737#57 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 58 | 15
extremely negative reward, and covering both physical and abstract catastrophes, might help in the development of safe exploration techniques for advanced RL systems. Such a suite of environments might serve a benchmarking role similar to that of the bAbI tasks [163], with the eventual goal being to develop a single architecture that can learn to avoid catastrophes in all environments in the suite.
# 7 Robustness to Distributional Change | 1606.06565#58 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 58 | $M^\tau_{ij} = \mu_i + \binom{\tau}{m_2}\,\lambda_2^\tau\, A_{ij}. \qquad (B5)$
If m2 = 1, this definition simply reduces to the previous definition of A. With this definition,
$\lim_{\tau \to \infty} A_{ij} = \lambda_2^{-m_2}\, B_{i2} B^{-1}_{(2+m_2)j}, \qquad (B6)$
Hence in the most general case, the mutual information decays like a polynomial $P(\tau)e^{-\gamma\tau}$, where $\gamma = 2 \ln \frac{1}{\lambda_2}$. The polynomial is non-constant if and only if the second largest eigenvalue is degenerate. Note that even in this case, the mutual information decays exponentially in the sense that it is possible to bound the mutual information by an exponential.
# 2. The reducible case
Now let us generalize to the case where the Markov process is reducible. A general Markov state space can be partitioned into m subsets,
$S = \bigcup_{i=1}^{m} S_i, \qquad (B7)$
where elements in the same partition communicate with each other: it is possible to transition from i → j and from j → i for i, j ∈ S_i. | 1606.06737#58 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
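The partition of a reducible chain into communicating classes S_i described above (and the final-versus-transient distinction drawn in the paper's next chunk) corresponds to the strongly connected components of the transition graph, which standard graph routines can compute. A sketch of ours on a small illustrative chain, row-stochastic convention:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

M = np.array([[0.5, 0.5, 0.0, 0.0],    # states 0,1 form a closed ("final") class
              [0.4, 0.6, 0.0, 0.0],
              [0.1, 0.0, 0.3, 0.6],    # states 2,3 communicate but leak into {0,1}
              [0.0, 0.0, 0.5, 0.5]])
n, labels = connected_components(M > 0, directed=True, connection="strong")

for c in range(n):
    inside = labels == c
    leaks = M[np.ix_(inside, ~inside)].sum() > 0   # does any probability leave the class?
    print("class", c, "states", np.where(inside)[0], "transient" if leaks else "final")
```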
1606.06565 | 59 | All of us occasionally find ourselves in situations that our previous experience has not adequately prepared us to deal with: for instance, flying an airplane, traveling to a country whose culture is very different from ours, or taking care of children for the first time. Such situations are inherently difficult to handle and inevitably lead to some missteps. However, a key (and often rare) skill in dealing with such situations is to recognize our own ignorance, rather than simply assuming that the heuristics and intuitions we've developed for other situations will carry over perfectly. Machine learning systems also have this problem: a speech system trained on clean speech will perform very poorly on noisy speech, yet often be highly confident in its erroneous classifications (some of the authors have personally observed this in training automatic speech recognition systems). In the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office. Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results. In general, when the testing distribution differs from the training distribution, machine learning systems may not only exhibit poor performance, but also wrongly assume that their performance is good. | 1606.06565#59 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 59 | where elements in the same partition communicate with each other: it is possible to transition from i → j and from j → i for i, j ∈ S_i.
In general, the set of partitions will be a finite directed acyclic graph (DAG), where the arrows of the DAG are inherited from the Markov chain. Since the DAG is finite, after some finite amount of time, almost all the probability will be concentrated in the "final" partitions that have no outgoing arrows and almost no probability will be in the "transient" partitions. Since the statistics of the chain that we are interested in are determined by running the chain for infinite time, they are insensitive to transient behavior, and hence we can ignore all but the final partitions. (The mutual information at fixed separation is still determined by averaging over all (infinite) time steps.)
Consider the case where the initial probability distribution only has support on one of the S_i. Since states in S \ S_i will never be accessed, the Markov process (with this initial condition) is identical to an irreducible Markov process on S_i. Our previous results imply that the mutual information will exponentially decay to zero. | 1606.06737#59 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 60 | Such errors can be harmful or offensive - a classifier could give the wrong medical diagnosis with such high confidence that the data isn't flagged for human inspection, or a language model could output offensive text that it confidently believes is non-problematic. For autonomous agents acting in the world, there may be even greater potential for something bad to happen - for instance, an autonomous agent might overload a power grid because it incorrectly but confidently perceives that a particular region doesn't have enough power, and concludes that more power is urgently needed and overload is unlikely. More broadly, any agent whose perception or heuristic reasoning processes are not trained on the correct distribution may badly misunderstand its situation, and thus runs the risk of committing harmful actions that it does not realize are harmful. Additionally, safety checks that depend on trained machine learning systems (e.g. "does my visual system believe this route is clear?") may fail silently and unpredictably if those systems encounter real-world data that differs sufficiently from their training data. Having a better way to detect such failures, and ultimately having statistical assurances about how often they'll happen, seems critical to building safe and predictable systems. | 1606.06565#60 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
Let us define the random variable Z = f (X), where f (x in Si) = Si. For a general initial condition, the total probability within each set Si is independent of time. This means that the entropy H(Z) is independent of time. Using the fact that H(Z|X) = H(Z|Y ) = 0, one can show that
I(X, Y ) = I(X, Y |Z) + H(Z), (B8)
where I(X, Y |Z) = H(Y |Z) − H(Y |X, Z) is the conditional mutual information. Our previous results then imply that the conditional mutual information decays exponentially, whereas the second term H(Z) ≤ log m is constant. In the language of statistical physics, this is an example of topological order which leads to constant terms in the correlation functions; here, the Markov graph of M is disconnected, so there are m degenerate equilibrium states.
# 3. The periodic case | 1606.06737#60 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 61 | For concreteness, we imagine that a machine learning model is trained on one distribution (call it p0) but deployed on a potentially different test distribution (call it p*). There are many other ways to formalize this problem (for instance, in an online learning setting with concept drift [70, 54]) but we will focus on the above for simplicity. An important point is that we likely have access to a large amount of labeled data at training time, but little or no labeled data at test time. Our goal is to ensure that the model "performs reasonably" on p*, in the sense that (1) it often performs well on p*, and (2) it knows when it is performing badly (and ideally can avoid/mitigate the bad performance by taking conservative actions or soliciting human input).
There are a variety of areas that are potentially relevant to this problem, including change detection and anomaly detection [21, 80, 91], hypothesis testing [145], transfer learning [138, 124, 125, 25], and several others [136, 87, 18, 122, 121, 74, 147]. Rather than fully reviewing all of this work in detail (which would necessitate a paper in itself), we will describe a few illustrative approaches and lay out some of their relative strengths and challenges.
16 | 1606.06565#61 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 61 | # 3. The periodic case
If a Markov process is periodic, one can further decompose each final partition. It is easy to check that the period of each element in a partition must be constant throughout the partition. It follows that each final partition Si can be decomposed into cyclic classes Si1, Si2, · · · , Sid, where d is the period of the elements in the partition in Si. The arguments in the previous section with f (x in Sik) = Sik then show that the mutual information again has two terms, one of which exponentially decays, the other of which is constant.
# 4. The n > 1 case | 1606.06737#61 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 62 | Well-specified models: covariate shift and marginal likelihood. If we specialize to prediction tasks and let x denote the input and y denote the output (prediction target), then one possibility is to make the covariate shift assumption that p0(y|x) = p*(y|x). In this case, assuming that we can model p0(x) and p*(x) well, we can perform importance weighting by re-weighting each training example (x, y) by p*(x)/p0(x) [138, 124]. Then the importance-weighted samples allow us to estimate the performance on p*, and even re-train a model to perform well on p* (a minimal sketch of this re-weighting appears after this record). This approach is limited by the variance of the importance estimate, which is very large or even infinite unless p0 and p* are close together. | 1606.06565#62 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
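The importance-weighting recipe in the record above is easy to demonstrate end to end. The following is a minimal, self-contained sketch (not from the paper): the Gaussian input densities, the sinusoidal target, and the polynomial model are all invented for illustration, and only numpy is assumed. It estimates the test-distribution risk from training data alone and re-trains with the weights; as the text notes, the variance of the weights explodes once p0 and p* move far apart.

```python
# Minimal sketch of importance weighting under covariate shift (illustrative only).
# Assumes p0(y|x) = p*(y|x) and that both input densities p0(x), p*(x) are known.
import numpy as np

rng = np.random.default_rng(0)

def true_conditional(x):                 # shared p(y|x): y = sin(x) + noise
    return np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# Training inputs from p0, test inputs from p* (a shifted Gaussian).
x_train = rng.normal(0.0, 1.0, 2000)
x_test  = rng.normal(1.5, 1.0, 2000)
y_train, y_test = true_conditional(x_train), true_conditional(x_test)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p*(x) / p0(x), evaluated on the *training* inputs.
w = gauss_pdf(x_train, 1.5, 1.0) / gauss_pdf(x_train, 0.0, 1.0)

def fit_poly(x, y, weights=None, deg=3):
    return np.polyfit(x, y, deg, w=None if weights is None else np.sqrt(weights))

unweighted = np.poly1d(fit_poly(x_train, y_train))
reweighted = np.poly1d(fit_poly(x_train, y_train, weights=w))

def mse(model, x, y):
    return float(np.mean((model(x) - y) ** 2))

# Estimate the test risk from training data alone via the weighted average,
# then compare against the actual test error of both fits.
est_risk = float(np.mean(w * (unweighted(x_train) - y_train) ** 2))
print("importance-weighted risk estimate:", est_risk)
print("actual test MSE (unweighted fit): ", mse(unweighted, x_test, y_test))
print("actual test MSE (re-weighted fit):", mse(reweighted, x_test, y_test))
```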
1606.06737 | 62 | # 4. The n > 1 case
The following proof holds only for order n = 1 Markov processes, but we can easily extend the results for arbitrary n. Any n = 2 Markov process can be converted into an n = 1 Markov process on pairs of letters X1X2. Hence our proof shows that I(X1X2, Y1Y2) decays exponentially. But for any random variables X, Y , the data processing inequality [40] states that I(X, g(Y )) ≤ I(X, Y ), where g is an arbitrary function of Y . Letting g(Y1Y2) = Y1, and then permuting and applying g(X1, X2) = X1 gives
I(X1X2, Y1Y2) ≥ I(X1X2, Y1) ≥ I(X1, Y1). (B9)
Hence, we see that I(X1, Y1) must exponentially decay. The preceding remarks can be easily formalized into a proof for an arbitrary Markov process by induction on n.
# 5. The detailed balance case
This asymptotic relation can be strengthened for a subclass of Markov processes which obey a condition known
12 | 1606.06737#62 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 63 | An alternative to sample re-weighting involves assuming a well-specified model family, in which case there is a single optimal model for predicting under both p0 and p*. In this case, one need only heed finite-sample variance in the estimated model [25, 87]. A limitation to this approach, at least currently, is that models are often mis-specified in practice. However, this could potentially be overcome by employing highly expressive model families such as reproducing kernel Hilbert spaces [72], Turing machines [143, 144], or sufficiently expressive neural nets [64, 79]. In the latter case, there has been interesting recent work on using bootstrapping to estimate finite-sample variation in the learned parameters of a neural network [114]; it seems worthwhile to better understand whether this approach can be used to effectively estimate out-of-sample performance in practice, as well as how local minima, lack of curvature, and other peculiarities relative to the typical setting of the bootstrap [47] affect the validity of this approach. (A rough bootstrap sketch appears after this record.) | 1606.06565#63 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
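As a rough illustration of the bootstrap idea mentioned above (not the procedure of [114]), the sketch below fits an ensemble of models on bootstrap resamples and uses their disagreement as a heuristic warning signal far from the training data. The tiny polynomial model, the 0.3 disagreement threshold, and the sine-curve data generator are invented assumptions; only numpy is assumed.

```python
# Sketch: bootstrap resampling to estimate finite-sample variation of a model,
# and using ensemble disagreement as a (heuristic) out-of-sample warning signal.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, 200)
y = np.sin(x) + 0.2 * rng.standard_normal(200)

def fit(xs, ys, deg=5):
    return np.poly1d(np.polyfit(xs, ys, deg))

models = []
for _ in range(50):                              # 50 bootstrap replicates
    idx = rng.integers(0, len(x), len(x))        # sample with replacement
    models.append(fit(x[idx], y[idx]))

def predict_with_spread(x_new):
    preds = np.array([m(x_new) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

for x_new in (0.0, 1.5, 4.0):                    # 4.0 is far outside the data
    mean, spread = predict_with_spread(np.array([x_new]))
    flag = "HIGH disagreement -> distrust" if spread[0] > 0.3 else "ok"
    print(f"x={x_new:4.1f}  pred={mean[0]:7.2f}  spread={spread[0]:7.2f}  {flag}")
```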
1606.06737 | 63 | # 5. The detailed balance case
This asymptotic relation can be strengthened for a subclass of Markov processes which obey a condition known
as detailed balance. This subclass arises naturally in the study of statistical physics [58]. For our purposes, this simply means that there exist some real numbers Km and a symmetric matrix Sab = Sba such that
Mab = e^{Ka/2} Sab e^{-Kb/2}. (B10)
Let us note the following facts. (1) The matrix power is simply (M^τ)ab = e^{Ka/2} (S^τ)ab e^{-Kb/2}. (2) By the spectral theorem, we can diagonalize S into an orthonormal basis of eigenvectors, which we label as v (or sometimes w), e.g., Sv = λi v and v · w = δvw. Notice that
∑_n Mmn (e^{Kn/2} vn) = e^{Km/2} ∑_n Smn vn = λi (e^{Km/2} vm). | 1606.06737#63 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 64 | All of the approaches so far rely on the covariate shift assumption, which is very strong and is also untestable; the latter property is particularly problematic from a safety perspective, since it could lead to silent failures in a machine learning system. Another approach, which does not rely on covariate shift, builds a generative model of the distribution. Rather than assuming that p(x) changes while p(y|x) stays the same, we are free to assume other invariants (for instance, that p(y) changes but p(x|y) stays the same, or that certain conditional independencies are preserved). An advantage is that such assumptions are typically more testable than the covariate shift assumption (since they do not only involve the unobserved variable y). A disadvantage is that generative approaches are even more fragile than discriminative approaches in the presence of model mis-specification - for instance, there is a large empirical literature showing that generative approaches to semi-supervised learning based on maximizing marginal likelihood can perform very poorly when the model is mis-specified [98, 110, 35, 90, 88]. | 1606.06565#64 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 64 | ∑_n Mmn (e^{Kn/2} vn) = e^{Km/2} ∑_n Smn vn = λi (e^{Km/2} vm).
Hence we have found an eigenvector of M for every eigenvector of S. Conversely, the set of eigenvectors of S forms a basis, so there cannot be any more eigenvectors of M. This implies that the eigenvectors of M are given by v^M_m = e^{Km/2} vm, and the corresponding eigenvalues are the λi. In other words, M and S share the same eigenvalues. (3) µa = e^{Ka}/Z, with Z = ∑_c e^{Kc}, is the stationary state:
∑_b Mab µb = (1/Z) ∑_b Sab e^{(Ka+Kb)/2} = (1/Z) e^{Ka/2} ∑_b Sab e^{Kb/2} = e^{Ka}/Z = µa, (B11)
where the last step uses ∑_a Mab = 1 (equivalently ∑_a Sab e^{Ka/2} = e^{Kb/2}) together with the symmetry of S.
The previous facts then let us finish the calculation:
∑_ab P(a,b)^2/(P(a)P(b)) = ∑_ab [(M^τ)ab µb]^2/(µa µb) = ∑_ab [(S^τ)ab]^2 = ||S^τ||^2. (B12)
Now using the fact that ||A||^2 = tr(A^T A) and is therefore invariant under an orthogonal change of basis, we find that
∑_ab P(a,b)^2/(P(a)P(b)) = ∑_i |λi|^{2τ}. (B13) | 1606.06737#64 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 65 | The approaches discussed above all rely relatively strongly on having a well-specified model family - one that contains the true distribution or true concept. This can be problematic in many cases, since nature is often more complicated than our model family is capable of capturing. As noted above, it may be possible to mitigate this with very expressive models, such as kernels, Turing machines, or very large neural networks, but even here there is at least some remaining problem: for example, even if our model family consists of all Turing machines, given any finite amount of data we can only actually learn among Turing machines up to a given description length, and if the Turing machine describing nature exceeds this length, we are back to the mis-specified regime (alternatively, nature might not even be describable by a Turing machine). | 1606.06565#65 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 65 | ∑_ab P(a,b)^2/(P(a)P(b)) = ∑_i |λi|^{2τ}. (B13)
Since the λi's are both the eigenvalues of M and S, and since M is irreducible and aperiodic, there is exactly one eigenvalue λ1 = 1, and all other eigenvalues are less than one. Altogether,
I_R(t1, t2) = ∑_ab P(a,b)^2/(P(a)P(b)) − 1 = ∑_{i≥2} |λi|^{2τ}. (B14)
Hence one can easily estimate the asymptotic behavior of the mutual information if one has knowledge of the
spectrum of M. We see that the mutual information decays exponentially, with a decay time-scale set by the second largest eigenvalue λ2:
τ_decay^{−1} = 2 log(1/λ2). (B15)
(A small numerical check of (B14) and (B15) appears after this record.)
# 6. Hidden Markov Model
In this subsection, we generalize our findings to hidden Markov models and present a proof of Theorem 2. If we have a Bayesian network of the form W ← X → Y → Z, one can show that I(W, Z) ≤ I(X, Y ) using arguments similar to the proof of the data processing inequality. Hence if I(X, Y ) decays exponentially, I(W, Z) should also decay exponentially. In what follows, we will show this in greater detail. | 1606.06737#65 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
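The flavor of (B14)-(B15) in the record above can be checked numerically. The sketch below is illustrative only: the 5-state column-stochastic matrix M is invented, and the comparison against λ2^(2τ) is up to an overall constant (the mutual information and the rational mutual information differ by such a factor); what should match is the exponential decay rate. Only numpy is assumed.

```python
# Numerical check (illustrative): I(X_t, X_{t+tau}) for a random Markov chain
# decays exponentially with rate set by the second-largest eigenvalue of M.
# Convention as in the text: M is column-stochastic, P(a,b) = (M^tau)_ab mu_b.
import numpy as np

rng = np.random.default_rng(3)
k = 5
M = rng.random((k, k)) + 0.1
M /= M.sum(axis=0, keepdims=True)            # columns sum to 1

evals, evecs = np.linalg.eig(M)
order = np.argsort(-np.abs(evals))
mu = np.real(evecs[:, order[0]])
mu /= mu.sum()                               # stationary state, M mu = mu
lam2 = np.abs(evals[order[1]])

def mutual_info(tau):
    P = np.linalg.matrix_power(M, tau) * mu[None, :]   # joint P(a, b)
    Pa, Pb = P.sum(axis=1), P.sum(axis=0)
    nz = P > 0
    return float(np.sum(P[nz] * np.log(P[nz] / (Pa[:, None] * Pb[None, :])[nz])))

print("lambda_2 =", round(lam2, 4), " predicted decay: I ~ const * lambda_2^(2 tau)")
for tau in (2, 4, 6, 8, 10):
    print(f"tau={tau:2d}  I={mutual_info(tau):.3e}  lambda2^(2 tau)={lam2**(2*tau):.3e}")
```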
1606.06565 | 66 | Partially specified models: method of moments, unsupervised risk estimation, causal identification, and limited-information maximum likelihood. Another approach is to take for granted that constructing a fully well-specified model family is probably infeasible, and to design methods that perform well despite this fact. This leads to the idea of partially specified models - models for which assumptions are made about some aspects of a distribution, but for which we are agnostic or make limited assumptions about other aspects. For a simple example, consider a variant of linear regression where we might assume that y = ⟨w*, x⟩ + v, where E[v|x] = 0, but we don't make any further assumptions about the distributional form of the noise v. It turns out that this is already enough to identify the parameters w*, and that these parameters will minimize the squared
prediction error even if the distribution over x changes. What is interesting about this example is that w* can be identified even with an incomplete (partial) specification of the noise distribution; a small numerical sketch of this appears after this record. | 1606.06565#66 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
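The linear-regression example above (moment condition E[v|x] = 0 and nothing else assumed about the noise) can be checked numerically. This is an invented toy, not an estimator from the literature cited here: the noise below is deliberately non-Gaussian and heteroscedastic, yet ordinary least squares still recovers w* and keeps its squared-error performance when the x-distribution shifts. Only numpy is assumed.

```python
# Sketch: identify w* from y = <w*, x> + v assuming only E[v|x] = 0.
import numpy as np

rng = np.random.default_rng(1)
w_star = np.array([2.0, -1.0, 0.5])

def sample(n, shift):
    x = rng.normal(shift, 1.0, size=(n, 3))
    # Zero-mean, non-Gaussian, x-dependent noise: E[v|x] = 0 still holds.
    v = (rng.exponential(1.0, n) - 1.0) * (1.0 + np.abs(x[:, 0]))
    return x, x @ w_star + v

x_tr, y_tr = sample(20000, shift=0.0)     # training distribution p0
x_te, y_te = sample(20000, shift=2.0)     # shifted test distribution p*

w_hat, *_ = np.linalg.lstsq(x_tr, y_tr, rcond=None)   # ordinary least squares
print("estimated w:", np.round(w_hat, 3), " true w*:", w_star)

# The squared prediction error stays well-behaved under the shift, even though
# we never modeled the noise distribution.
for name, (x, y) in [("train", (x_tr, y_tr)), ("shifted test", (x_te, y_te))]:
    print(name, "MSE:", round(float(np.mean((x @ w_hat - y) ** 2)), 3))
```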
1606.06737 | 66 | Based on the considerations in the main body of the text, the joint probability distribution between two visible states Xt1 , Xt2 is given by
P(a,b) = ∑_{cd} Gac [(M^τ)cd µd] Gbd, (B16)
where the term in brackets would have been there in an ordinary Markov model and the two new factors of G are the result of the generalization. Note that as before, µ is the stationary state corresponding to M. We will only consider the typical case where M is aperiodic, irreducible, and non-degenerate; once we have this case, the other cases can be easily treated by mimicking our above proof for ordinary Markov processes. Using equation (7) and defining g = Gµ gives
P(a,b) = ∑_{cd} Gac [(µc + λ2^τ Acd) µd] Gbd = ga gb + λ2^τ ∑_{cd} Gac Acd µd Gbd. (B17)
Plugging this in to our definition of rational mutual information gives
I_R + 1 = ∑_ab P(a,b)^2/(ga gb) = 1 + 2 λ2^τ ∑_{cd} Acd µd + λ2^{2τ} C = 1 + λ2^{2τ} C, (B18) | 1606.06737#66 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 67 | This insight can be substantially generalized, and is one of the primary motivations for the generalized method of moments in econometrics [68, 123, 69]. The econometrics literature has in fact developed a large family of tools for handling partial specification, which also includes limited-information maximum likelihood and instrumental variables [10, 11, 133, 132].
Returning to machine learning, the method of moments has recently seen a great deal of success for use in the estimation of latent variable models [9]. While the current focus is on using the method of moments to overcome non-convexity issues, it can also offer a way to perform unsupervised learning while relying only on conditional independence assumptions, rather than the strong distributional assumptions underlying maximum likelihood learning [147]. | 1606.06565#67 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 67 | where we have used the facts that ∑_i Gij = 1, ∑_i Aij = 0, and as before C is asymptotically constant. This shows that I_R decays exponentially.
# Appendix C: Power laws for generative grammars
In this appendix, we prove that the rational mutual information decays like a power law for a sub-class of generative grammars. We proceed by mimicking the strategy employed in the above appendix. Let G be the linear operator associated with the matrix P(b|a), the probability that a node takes the value b given that the parent node has value a. We will assume that G is irreducible and aperiodic, with no degeneracies. From the above discussion, we see that removing the degeneracy assumption does not qualitatively change things; one simply replaces the procedure of diagonalizing G with putting G in Jordan normal form.
Let us now generalize to the strongly correlated case. As discussed in the text, the joint probability is modified to
P(a,b) = ∑_{rs} Qrs (G^{Δ/2−1})ar (G^{Δ/2−1})bs, (C7)
where Q is some symmetric matrix which satisfies ∑_r Qrs = µs. We now employ our favorite trick of diagonalizing G and then writing | 1606.06737#67 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 68 | Finally, some recent work in machine learning focuses only on modeling the distribution of errors of a model, which is sufficient for determining whether a model is performing well or poorly. Formally, the goal is to perform unsupervised risk estimation - given a model and unlabeled data from a test distribution, estimate the labeled risk of the model. This formalism, introduced by [44], has the advantage of potentially handling very large changes between train and test - even if the test distribution looks completely different from the training distribution and we have no hope of outputting accurate predictions, unsupervised risk estimation may still be possible, as in this case we would only need to output a large estimate for the risk. As in [147], one can approach unsupervised risk estimation by positing certain conditional independencies in the distribution of errors, and using this to estimate the error distribution from unlabeled data [39, 170, 121, 74]. Instead of assuming independence, another assumption is that the errors are Gaussian conditioned on the true output y, in which case estimating the risk reduces to estimating a Gaussian mixture model [18]. Because these methods focus only on the model errors and ignore other aspects of the data distribution, they can also be seen as an instance of partial model specification. (A toy sketch of the conditional-independence idea appears after this record.) | 1606.06565#68 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
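To make the conditional-independence idea concrete, here is a toy, method-of-moments-flavored sketch; it is not the estimator of [44] or [147]. It assumes three binary classifiers whose errors are symmetric and conditionally independent given the true label, and it recovers their error rates from unlabeled pairwise agreement statistics alone. The numbers and names are invented; only numpy is assumed.

```python
# Toy unsupervised risk estimation: three binary classifiers whose errors are
# conditionally independent given the true label y in {-1, +1}.
# If m_i = E[c_i * y] = 1 - 2*err_i, independence gives E[c_i c_j] = m_i m_j,
# so the m_i (hence the error rates) follow from unlabeled pairwise agreements.
import numpy as np

rng = np.random.default_rng(2)
n, true_err = 200_000, np.array([0.10, 0.25, 0.35])

y = rng.choice([-1, 1], size=n)
flips = rng.random((n, 3)) < true_err          # independent error events
c = y[:, None] * np.where(flips, -1, 1)        # classifier outputs

e12, e13, e23 = (np.mean(c[:, i] * c[:, j]) for i, j in [(0, 1), (0, 2), (1, 2)])
m = np.sqrt(np.array([e12 * e13 / e23, e12 * e23 / e13, e13 * e23 / e12]))
est_err = (1 - m) / 2

print("true error rates:             ", true_err)
print("estimated from unlabeled data:", np.round(est_err, 3))
```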
1606.06737 | 68 | where Q is some symmetric matrix which satisfies ∑_r Qrs = µs. We now employ our favorite trick of diagonalizing G and then writing
(G^{Δ/2−1})ij = µi + ε Aij, (C8)
where ε ≡ λ2^{Δ/2−1}. This gives
Let us start with the weakly correlated case. In this case,
P(a,b) = ∑_r µr (G^{Δ/2})ar (G^{Δ/2})br, (C1)
since as we have discussed in the main text, the parent node has the stationary distribution µ and the factors of G^{Δ/2} give the conditional probabilities for transitioning from the parent node to the nodes at the bottom of the tree that we are interested in. We now employ our favorite trick of diagonalizing G and then writing
(G^{Δ/2})ij = µi + λ2^{Δ/2} Aij, (C2)
P(a,b) = ∑_{rs} Qrs (µa + ε Aar)(µb + ε Abs) = µa µb + ε µa ∑_s Abs µs + ε µb ∑_r Aar µr + ε^2 ∑_{rs} Qrs Aar Abs = µa µb + ε^2 ∑_{rs} Qrs Aar Abs, (C9)
which gives | 1606.06737#68 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 69 | Training on multiple distributions. One could also train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution. One of the authors has found this to be the case, for instance, in the context of automated speech recognition systems [7]. One could potentially combine this with any of the ideas above, and/or take an engineering approach of simply trying to develop design methodologies that consistently allow one to collect a representative set of training sets and from this build a model that consistently generalizes to novel distributions. Even for this engineering approach, it seems important to be able to detect when one is in a situation that was not covered by the training data and to respond appropriately, and to have methodologies for adequately stress-testing the model with distributions that are sufficiently different from the set of training distributions. | 1606.06565#69 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 69 | which gives
P(a,b) = ∑_r µr (µa + κ Aar)(µb + κ Abr) = ∑_r µr (µa µb + κ µa Abr + κ µb Aar + κ^2 Aar Abr), (C3)
where we have defined κ ≡ λ2^{Δ/2}. Now note that ∑_r Aar µr = 0, since µ is an eigenvector with eigenvalue 1 of G^{Δ/2}. Hence this simplifies the above to just
P(a,b) = µa µb + κ^2 ∑_r µr Aar Abr. (C4)
In the strongly correlated case we define ∑_{rs} Qrs Aar Abs ≡ (µa µb)^{1/2} Nab, and noting that the cross terms vanish because ∑_a Aar = 0, we have
I_R + 1 = ∑_ab P(a,b)^2/(µa µb) = ∑_ab [(µa µb)^{1/2} + ε^2 Nab]^2 = 1 + ε^4 ||N||^2, (C10)
which gives
I_R = λ2^{2Δ−4} ||N||^2. (C11)
From the definition of rational mutual information, and employing the fact that ∑_i Aij = 0 gives
In either the strongly or the weakly correlated case, note that N is asymptotically constant. We can write the second largest eigenvalue |λ2|^2 = q^{−k2/2}, where q is the branching factor, | 1606.06737#69 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 70 | How to respond when out-of-distribution. The approaches described above focus on detecting when a model is unlikely to make good predictions on a new distribution. An important related question is what to do once the detection occurs. One natural approach would be to ask humans for information, though in the context of complex structured output tasks it may be unclear a priori what question to ask, and in time-critical situations asking for information may not be an option. For the former challenge, there has been some recent promising work on pinpointing aspects of a structure that a model is uncertain about [162, 81], as well as obtaining calibration in structured output settings [83], but we believe there is much work yet to be done. For the latter challenge, there is also relevant work based on reachability analysis [93, 100] and robust policy improvement [164], which provide potential methods for deploying conservative policies in situations of uncertainty; to our knowledge, this work has not yet been combined with methods for detecting out-of-distribution failures of a model. (A minimal sketch of such a conservative fallback appears after this record.)
Beyond the structured output setting, for agents that can act in an environment (such as RL agents),
18 | 1606.06565#70 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
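One simple way to "respond appropriately" once low confidence is detected is a conservative fallback wrapper, sketched below. This is only an illustration of the control flow discussed above: the class and function names (SafeWrapper, ask_human, SAFE_ACTION) are invented, softmax-style confidence is a weak out-of-distribution signal in practice, and a real system would pair this with a calibrated detector.

```python
# Sketch of a conservative fallback policy: act on the model only when its
# confidence is high; otherwise take a predefined safe action or defer to a human.
from dataclasses import dataclass
from typing import Callable, Sequence

SAFE_ACTION = "stop_and_wait"

def ask_human(observation) -> str:
    # Placeholder for a human query channel (could also log for later review).
    return SAFE_ACTION

@dataclass
class SafeWrapper:
    predict_proba: Callable[[object], Sequence[float]]  # model: obs -> class probs
    actions: Sequence[str]
    threshold: float = 0.9
    time_critical: bool = False

    def act(self, observation) -> str:
        probs = self.predict_proba(observation)
        best = max(range(len(probs)), key=lambda i: probs[i])
        if probs[best] >= self.threshold:
            return self.actions[best]           # confident: act normally
        if self.time_critical:
            return SAFE_ACTION                  # no time to ask: be conservative
        return ask_human(observation)           # otherwise solicit human input

# Example with a dummy model that is unsure on an odd-looking input.
dummy = SafeWrapper(lambda obs: [0.6, 0.4] if obs == "odd_input" else [0.97, 0.03],
                    actions=["proceed", "reroute"])
print(dummy.act("normal_input"))   # -> proceed
print(dummy.act("odd_input"))      # -> stop_and_wait / human decision
```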
1606.06737 | 70 | I_R + 1 = ∑_ab P(a,b)^2/(µa µb) = ∑_ab [(µa µb)^{1/2} + κ^2 Nab]^2 = 1 + κ^4 ||N||^2, (C5)
where Nab ≡ (µa µb)^{−1/2} ∑_r µr Aar Abr is a symmetric matrix and || · || denotes the Frobenius norm. Hence
I_R = λ2^{2Δ} ||N||^2. (C6)
In either case, writing |λ2|^2 = q^{−k2/2} and Δ ≈ 2 log_q |i − j| gives
I_R ∼ q^{−Δ k2/2} ∼ q^{−k2 log_q |i−j|} = C |i − j|^{−k2}. (C12)
Behold the glorious power law! We note that the normalization C must be a function of the form C = m2 f(λ2, q), where m2 is the multiplicity of the eigenvalue λ2. We evaluate this normalization in the next section. (A small numerical check of this power law appears after this record.)
As before, this result can be sharpened if we assume that G satisfies detailed balance Gmn = e^{Km/2} Smn e^{−Kn/2},
where S is a symmetric matrix and the Kn are just numbers. Let us only consider the weakly correlated case. By the spectral theorem, we diagonalize S into an orthonormal basis of eigenvectors v. As before, G and S share the same eigenvalues. Proceeding, | 1606.06737#70 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 71 | Beyond the structured output setting, for agents that can act in an environment (such as RL agents),
18
information about the reliability of percepts in uncertain situations seems to have great potential value. In sufficiently rich environments, these agents may have the option to gather information that clarifies the percept (e.g. if in a noisy environment, move closer to the speaker), engage in low-stakes experimentation when uncertainty is high (e.g. try a potentially dangerous chemical reaction in a controlled environment), or seek experiences that are likely to help expose the perception system to the relevant distribution (e.g. practice listening to accented speech). Humans utilize such information routinely, but to our knowledge current RL techniques make little effort to do so, perhaps because popular RL environments are typically not rich enough to require such subtle management of uncertainty. Properly responding to out-of-distribution information thus seems to the authors like an exciting and (as far as we are aware) mostly unexplored challenge for next generation RL systems. | 1606.06565#71 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 71 | P(a, b) = (1/Z) Σ_v λ_v^Δ v_a v_b e^{(K_a + K_b)/2}, (C13)
where Z is a constant that ensures that P is properly normalized. Let us move full steam ahead to compute the rational mutual information:
Σ_ab P(a, b)² / (P(a) P(b)) = Σ_ab e^{−(K_a + K_b)} (Σ_v λ_v^Δ v_a v_b e^{(K_a + K_b)/2})² = Σ_ab (Σ_v λ_v^Δ v_a v_b)². (C14)
This is just the squared Frobenius norm of the symmetric matrix H = Σ_v λ_v^Δ v v^T. The eigenvalues of this matrix can be read off, so we have
I_R(a, b) = Σ_{i≥2} |λ_i|^{2Δ}. (C15)
Hence we have computed the rational mutual information exactly as a function of Δ. In the next section, we use this result to compute the mutual information as a function of separation |i − j|, which will lead to a precise evaluation of the normalization constant C in the equation
I(a, b) ≈ C |i − j|^{−k_2}. (C16)
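To make the eigenvalue identity above concrete, here is a minimal numerical sketch (ours, not from the paper) that checks 1 + I_R = Σ_ab P(a,b)²/(P(a)P(b)) = Σ_i λ_i^{2Δ} for a randomly generated reversible (detailed-balance) Markov chain standing in for the general argument; the alphabet size q = 4, the random weights, and the separations Δ are arbitrary illustrative choices.

```python
import numpy as np

# Check the identity behind (C14)-(C15) on a reversible Markov chain:
# 1 + I_R(Delta) = sum_ab P(a,b)^2 / (P(a)P(b)) = sum_i lambda_i^(2*Delta).

rng = np.random.default_rng(0)
q = 4                                    # alphabet size (illustrative choice)

# Symmetric weights give a row-stochastic G that satisfies detailed balance.
W = rng.random((q, q))
W = W + W.T
G = W / W.sum(axis=1, keepdims=True)     # transition matrix
pi = W.sum(axis=1) / W.sum()             # stationary distribution of G

# S = D^{1/2} G D^{-1/2} is symmetric and shares G's eigenvalues.
D_half = np.diag(np.sqrt(pi))
D_half_inv = np.diag(1.0 / np.sqrt(pi))
S = D_half @ G @ D_half_inv
lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # lam[0] = 1

for Delta in [1, 2, 5, 10]:
    P_joint = np.diag(pi) @ np.linalg.matrix_power(G, Delta)   # P(a,b) at separation Delta
    lhs = np.sum(P_joint**2 / np.outer(pi, pi)) - 1.0          # I_R from the definition
    rhs = np.sum(lam[1:] ** (2 * Delta))                       # sum_{i>=2} lambda_i^{2 Delta}
    print(f"Delta={Delta:2d}  I_R(def)={lhs:.6e}  sum lambda^(2Delta)={rhs:.6e}")
```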
# 1. Detailed evaluation of the normalization | 1606.06737#71 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 72 | A unifying view: counterfactual reasoning and machine learning with contracts. Some of the authors have found two viewpoints to be particularly helpful when thinking about problems related to out-of-distribution prediction. The first is counterfactual reasoning [106, 129, 117, 30], where one asks "what would have happened if the world were different in a certain way"? In some sense, distributional shift can be thought of as a particular type of counterfactual, and so understanding counterfactual reasoning is likely to help in making systems robust to distributional shift. We are excited by recent work applying counterfactual reasoning techniques to machine learning problems [30, 120, 151, 160, 77, 137], though there appears to be much work remaining to be done to scale these to high-dimensional and highly complex settings. | 1606.06565#72 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 72 | # 1. Detailed evaluation of the normalization
For simplicity, we specialize to the case q = 2 although our results can surely be extended to q > 2. Define δ = Δ/2 and d = |i − j|. We wish to compute the expected value of I_R conditioned on knowledge of d. By Bayes' rule, p(δ|d) ∝ p(d|δ) p(δ). Now p(d|δ) is given by a triangle distribution with mean 2^{δ−1} and compact support (0, 2^δ). On the other hand, p(δ) ∝ 2^δ for δ ≤ δ_max and p(δ) = 0 for δ ≤ 0 or δ > δ_max. This new constant δ_max serves two purposes. First, it can be thought of as a way to regulate the probability distribution p(δ) so that it is normalizable; at the end of the calculation we formally take δ_max → ∞ without obstruction. Second, if we are interested in empirically sampling the mutual information, we cannot generate an infinite string, so setting δ_max to a finite value accounts for the fact that our generated string may be finite. | 1606.06737#72 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 73 | The second perspective is machine learning with contracts: in this perspective, one would like to construct machine learning systems that satisfy a well-defined contract on their behavior, in analogy with the design of software systems [135, 28, 89]. [135] enumerates a list of ways in which existing machine learning systems fail to do this, and the problems this can cause for deployment and maintenance of machine learning systems at scale. The simplest and to our mind most important failure is the extremely brittle implicit contract in most machine learning systems, namely that they only necessarily perform well if the training and test distributions are identical. This condition is difficult to check and rare in practice, and it would be valuable to build systems that perform well under weaker contracts that are easier to reason about. Partially specified models offer one approach to this: rather than requiring the distributions to be identical, we only need them to match on the pieces of the distribution that are specified in the model. Reachability analysis [93, 100] and model repair [58] provide other avenues for obtaining better contracts: in reachability analysis, we optimize performance subject to the condition that a safe region can always be reached by a known conservative policy, and in model repair we alter a trained model to ensure that certain desired safety properties hold. | 1606.06565#73 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 73 | [Figure 5: decay of rational mutual information and residual from power law versus distance between symbols d(X,Y); see caption below.]
FIG. 5: Decay of rational mutual information with separation for a binary sequence from a numerical simulation with probabilities p(0|0) = p(1|1) = 0.9 and a branching factor q = 2. The blue curve is not a fit to the simulated data but rather an analytic calculation. The smooth power law displayed on the left is what is predicted by our "continuum" approximation. The very small discrepancies (right) are not random but are fully accounted for by more involved exact calculations with discrete sums.
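For readers who want to reproduce the kind of simulation described in this caption, here is a rough sketch (ours, not the authors' code): it grows a depth-16 binary branching process with p(0|0) = p(1|1) = 0.9, reads the leaves off as a sequence, and estimates the covariance together with the small-correlation approximation I_R ≈ (ρ/(P(0)P(1)))² at a few separations; the depth, seed, and chosen separations are arbitrary.

```python
import numpy as np

# Branching-process simulation: each node copies its bit to two children,
# flipping it with probability 0.1, so p(0|0) = p(1|1) = 0.9 and q = 2.
rng = np.random.default_rng(1)
p_same, depth = 0.9, 16                  # 2**16 = 65536 leaf symbols

x = np.array([rng.integers(0, 2)])       # root symbol
for _ in range(depth):
    children = np.repeat(x, 2)           # each node spawns two children
    flip = rng.random(children.size) > p_same
    x = np.where(flip, 1 - children, children)

# Estimate rho(d) = <x_i x_{i+d}> - <x>^2 and the small-rho rational
# mutual information I_R ~ (rho / (P(0) P(1)))^2 at a few separations d.
p1 = x.mean()
for d in [1, 2, 4, 8, 16, 32, 64]:
    rho = np.mean(x[:-d] * x[d:]) - p1 ** 2
    i_r = (rho / (p1 * (1 - p1))) ** 2
    print(f"d={d:3d}   rho={rho:+.4f}   I_R~{i_r:.5f}")
```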
We now assume d ≫ 1 so that we can swap discrete sums with integrals. We can then compute the conditional expectation value of 2^{−k_2 δ}. This yields
⟨2^{−k_2 δ}⟩ = ∫ 2^{−k_2 δ} p(δ|d) dδ = (1 − 2^{−2k_2}) d^{−k_2} / (k_2 (k_2 + 1) log 2), (C17)
or equivalently,
C_{q=2} = (1 − |λ_2|⁴) / (k_2 (k_2 + 1)) · 1 / log 2. (C18)
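As a sanity check on this normalization argument, the following Monte-Carlo sketch (ours, using the assumed reading that p(d|δ) is triangular on (0, 2^δ) and p(δ) ∝ 2^δ up to δ_max) estimates ⟨2^{−k_2 δ}⟩ conditioned on d and compares it with the predicted d^{−k_2} scaling; k_2 = 1, δ_max = 18, and the sample size are arbitrary choices.

```python
import numpy as np

# Monte-Carlo check that <2^(-k2*delta) | d> falls off like d^(-k2),
# under the assumed priors p(delta) ∝ 2^delta and triangular p(d|delta).
rng = np.random.default_rng(3)
k2, delta_max, n = 1.0, 18, 2_000_000

deltas = np.arange(1, delta_max + 1)
p_delta = 2.0 ** deltas
p_delta /= p_delta.sum()
delta = rng.choice(deltas, size=n, p=p_delta)        # P(delta) ∝ 2^delta

# Triangular separation distribution on (0, 2^delta) with mode 2^(delta-1).
d = rng.triangular(0.0, 2.0 ** (delta - 1), 2.0 ** delta)

weights = 2.0 ** (-k2 * delta)
for d_bin in [8, 16, 32, 64, 128]:
    mask = (d > 0.9 * d_bin) & (d < 1.1 * d_bin)     # narrow bin around d_bin
    if mask.any():
        avg = weights[mask].mean()
        print(f"d~{d_bin:4d}   <2^(-k2 delta)|d> = {avg:.3e}   d^(-k2) = {1.0 / d_bin:.3e}")
```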
It turns out it is also possible to compute the answer without making any approximations with integrals: | 1606.06737#73 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 74 | Summary. There are a variety of approaches to building machine learning systems that robustly perform well when deployed on novel test distributions. One family of approaches is based on assuming a well-specified model; in this case, the primary obstacles are the difficulty of building well-specified models in practice, an incomplete picture of how to maintain uncertainty on novel distributions in the presence of finite training data, and the difficulty of detecting when a model is mis-specified. Another family of approaches only assumes a partially specified model; this approach is potentially promising, but it currently suffers from a lack of development in the context of machine learning, since most of the historical development has been by the field of econometrics; there is also a question of whether partially specified models are fundamentally constrained to simple situations and/or conservative predictions, or whether they can meaningfully scale to the complex situations demanded by modern machine learning applications. Finally, one could try to train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution; for this approach it seems particularly important to stress-test the learned model with distributions that are substantially different from
19 | 1606.06565#74 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 74 | It turns out it is also possible to compute the answer without making any approximations with integrals:
In Q-(ke+1) [logs (d)] ((2h2+1 _ 1) gllese(4)] _ 2d (2* _ )) Qhetl (C19)
The resulting predictions are compared in Figure 5.
# Appendix D: Estimating (rational) mutual information from empirical data
Estimating mutual information or rational mutual information from empirical data is fraught with subtleties.
15
.
It is well known that the naive estimate of the Shannon entropy, Ŝ = −Σ_i (N_i/N) log(N_i/N), is biased, generally underestimating the true entropy from finite samples. We use the estimator advocated by Grassberger [59]:
Ŝ = ψ(N) − (1/N) Σ_{i=1}^{K} N_i ψ(N_i), (D1)
where ψ(x) is the digamma function, N = Σ_i N_i, and K is the number of characters in the alphabet. The mutual information can then be estimated as Î(X, Y) = Ŝ(X) + Ŝ(Y) − Ŝ(X, Y). The variance of this estimator is then the sum of the variances
var(Î) = varEnt(X) + varEnt(Y) + varEnt(X, Y), (D2)
where the varEntropy is defined as | 1606.06737#74 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06737 | 75 | (D2)
where the varEntropy is defined as
varEnt(X) = var(− log p(X)), (D3)
where we can again replace logarithms with the digamma function ψ. The uncertainty after N measurements is then ≈ √(var(Î)/N).
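A small Python sketch of this estimation pipeline may be useful. The exact bias correction of (D1) is garbled in this chunk, so the code below assumes the common Grassberger-style form Ŝ = ψ(N) − (1/N) Σ_i N_i ψ(N_i) rather than the paper's exact formula, and the toy sequences at the end are ours, purely for illustration.

```python
import numpy as np
from scipy.special import digamma

def entropy_hat(samples):
    # Grassberger-style bias-corrected entropy estimate, in nats (assumed form).
    _, counts = np.unique(samples, return_counts=True)
    n = counts.sum()
    return digamma(n) - np.sum(counts * digamma(counts)) / n

def mutual_info_hat(x, y):
    # I_hat(X;Y) = S_hat(X) + S_hat(Y) - S_hat(X,Y), as in the text above.
    joint = x.astype(np.int64) * (int(y.max()) + 1) + y   # encode (x, y) pairs
    return entropy_hat(x) + entropy_hat(y) - entropy_hat(joint)

# Toy usage: two weakly coupled binary sequences.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=5000)
y = np.where(rng.random(5000) < 0.8, x, rng.integers(0, 2, size=5000))
print("Estimated I(X;Y) =", mutual_info_hat(x, y), "nats")
```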
[1] P. Bak, Physical Review Letters 59, 381 (1987). [2] P. Bak, C. Tang, and K. Wiesenfeld, Physical review A
38, 364 (1988).
[3] K. Linkenkaer-Hansen, V. V. Nikouline, J. M. Palva, and R. J. Ilmoniemi, The Journal of Neuroscience 21, 1370 (2001), http://www.jneurosci.org/content/21/4/1370.full.pdf+html, URL http://www.jneurosci.org/content/21/4/1370.abstract.
[4] D. J. Levitin, P. Chordia, and V. Menon, Proceedings of the National Academy of Sciences 109, 3716 (2012). | 1606.06737#75 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 76 | Potential Experiments: Speech systems frequently exhibit poor calibration when they go out-of- distribution, so a speech system that âknows when it is uncertainâ could be one possible demon- stration project. To be speciï¬c, the challenge could be: train a state-of-the-art speech system on a standard dataset [116] that gives well-calibrated results (if not necessarily good results) on a range of other test sets, like noisy and accented speech. Current systems not only perform poorly on these test sets when trained only on small datasets, but are usually overconï¬dent in their incorrect transcriptions. Fixing this problem without harming performance on the original training set would be a valuable achievement, and would obviously have practical value. More generally, it would be valuable to design models that could consistently estimate (bounds on) their performance on novel test distributions. If a single methodology could consistently accomplish this for a wide variety of tasks (including not just speech but e.g. sentiment analysis [24], as well as benchmarks in computer vision [158]), that would inspire conï¬dence in the reliability of that methodology for handling novel | 1606.06565#76 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 76 | [4] D. J. Levitin, P. Chordia, and V. Menon, Proceedings of the National Academy of Sciences 109, 3716 (2012).
[5] M. Tegmark, ArXiv e-prints (2014), 1401.1219. [6] B. Manaris, J. Romero, P. Machado, D. Krehbiel, T. Hirzel, W. Pharr, and R. B. Davis, Computer Mu- sic Journal 29, 55 (2005).
[7] C. Peng, S. Buldyrev, A. Goldberger, S. Havlin, F. Sciortino, M. Simons, H. Stanley, et al., Nature 356, 168 (1992).
[8] R. N. Mantegna, S. V. Buldyrev, A. L. Goldberger, S. Havlin, C.-K. Peng, M. Simons, and H. E. Stanley, Physical review letters 73, 3169 (1994).
[9] W. Ebeling and T. P¨oschel, EPL (Europhysics Letters) 26, 241 (1994), cond-mat/0204108.
[10] W. Ebeling and A. Neiman, Physica A: Statistical Me- chanics and its Applications 215, 233 (1995). | 1606.06737#76 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 77 | sentiment analysis [24], as well as benchmarks in computer vision [158]), that would inspire conï¬dence in the reliability of that methodology for handling novel inputs. Note that estimating performance on novel distributions has additional practical value in allowing us to then potentially adapt the model to that new situation. Finally, it might also be valuable to create an environment where an RL agent must learn to interpret speech as part of some larger task, and to explore how to respond appropriately to its own estimates of its transcription error. | 1606.06565#77 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 77 | [10] W. Ebeling and A. Neiman, Physica A: Statistical Me- chanics and its Applications 215, 233 (1995).
[11] E. G. Altmann, G. Cristadoro, and M. Degli Esposti, Proceedings of the National Academy of Sciences 109, 11582 (2012).
[12] M. A. Montemurro and P. A. Pury, Fractals 10, 451 (2002).
[13] G. Deco and B. Sch¨urmann, Information dynamics: foundations and applications (Springer Science & BusiTo compare our theoretical results with experiment in Fig. 4, we must measure the rational mutual information for a binary sequence from (simulated) data. For a binary sequence with covariance coeï¬cient Ï(X, Y ) = P (1, 1) â P (1)2, the rational mutual information is
I_R(X, Y) ≈ (ρ(X, Y) / (P(1)(1 − P(1))))². (D4)
This was essentially calculated in by considering the limit where the covariance coefficient is small p < 1. In their paper, there is an erroneous factor of 2. To estimate covariance p(d) as a function of d (sometimes confusingly referred to as the correlation function), we use the unbi- ased estimator for a data sequence {x1,72,--- pn}: | 1606.06737#77 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 78 | # 8 Related Eï¬orts
As mentioned in the introduction, several other communities have thought broadly about the safety of AI systems, both within and outside of the machine learning community. Work within the machine learning community on accidents in particular was discussed in detail above, but here we very brieï¬y highlight a few other communities doing work that is broadly related to the topic of AI safety.
⢠Cyber-Physical Systems Community: An existing community of researchers studies the security and safety of systems that interact with the physical world. Illustrative of this work is an impressive and successful eï¬ort to formally verify the entire federal aircraft collision avoidance system [75, 92]. Similar work includes traï¬c control algorithms [101] and many other topics. However, to date this work has not focused much on modern machine learning systems, where formal veriï¬cation is often not feasible. | 1606.06565#78 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
ρ̂(d) = (1/(n − d − 1)) Σ_{i=1}^{n−d} (x_i − x̄)(x_{i+d} − x̄). (D5)
However, it is important to note that estimating the covariance function ρ by averaging and then squaring will generically yield a biased estimate; we circumvent this by simply estimating I_R(X, Y)^{1/2} ∝ ρ(X, Y).
ness Media, 2012).
[14] G. K. Zipf, Human behavior and the principle of least eï¬ort (Addison-Wesley Press, 1949).
[15] H. W. Lin and A. Loeb, Physical Review E 93, 032306 (2016).
[16] L. Pietronero, E. Tosatti, V. Tosatti, and A. Vespig- nani, Physica A: Statistical Mechanics and its Ap- plications 293, 297 ISSN 0378-4371, URL (2001), http://www.sciencedirect.com/science/article/ pii/S0378437100006336.
[17] M. Kardar, Statistical physics of ï¬elds (Cambridge Uni- versity Press, 2007).
[18] URL ftp://ftp.ncbi.nih.gov/genomes/Homo_ sapiens/. | 1606.06737#78 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 79 | ⢠Futurist Community: A cross-disciplinary group of academics and non-proï¬ts has raised concern about the long term implications of AI [27, 167], particularly superintelligent AI. The Future of Humanity Institute has studied this issue particularly as it relates to future AI sys- tems learning or executing humanityâs preferences [48, 43, 14, 12]. The Machine Intelligence Research Institute has studied safety issues that may arise in very advanced AI [57, 56, 36, 154, 142], including a few mentioned above (e.g., wireheading, environmental embedding, counter- factual reasoning), albeit at a more philosophical level. To date, they have not focused much on applications to modern machine learning. By contrast, our focus is on the empirical study of practical safety problems in modern machine learning systems, which we believe is likely to be robustly useful across a broad variety of potential risks, both short- and long-term.
⢠Other Calls for Work on Safety: There have been other public documents within the research community pointing out the importance of work on AI safety. A 2015 Open Letter [8] signed by many members of the research community states the importance of âhow to reap [AIâs] beneï¬ts while avoiding the potential pitfalls.â [130] propose research priorities for
20 | 1606.06565#79 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 79 | [18] URL ftp://ftp.ncbi.nih.gov/genomes/Homo_ sapiens/.
[19] URL http://www.jsbach.net/midi/midi_solo_ violin.html.
# [20] URL http://prize.hutter1.net/. [21] URL
http://www.lexique.org/public/lisezmoi. corpatext.htm.
[22] A. M. Turing, Mind 59, 433 (1950). [23] D. Ferrucci, E. Brown,
J. Fan, D. Gondek, A. A. Kalyanpur, A. Lally, J. W. Murdock, E. Nyberg, J. Prager, et al., AI magazine 31, 59 (2010). [24] M. Campbell, A. J. Hoane, and F.-h. Hsu, Artiï¬cial intelligence 134, 57 (2002). | 1606.06737#79 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 80 | 20
robust and beneï¬cial artiï¬cial intelligence, and includes several other topics in addition to a (briefer) discussion of AI-related accidents. [161], writing over 20 years ago, proposes that the community look for ways to formalize Asimovâs ï¬rst law of robotics (robots must not harm humans), and focuses mainly on classical planning. Finally, two of the authors of this paper have written informally about safety in AI systems [146, 34]; these postings provided inspiration for parts of the present document.
⢠Related Problems in Safety: A number of researchers in machine learning and other ï¬elds have begun to think about the social impacts of AI technologies. Aside from work directly on accidents (which we reviewed in the main document), there is also substantial work on other topics, many of which are closely related to or overlap with the issue of accidents. A thorough overview of all of this work is beyond the scope of this document, but we brieï¬y list a few emerging themes:
⢠Privacy: How can we ensure privacy when applying machine learning to sensitive data sources such as medical data? [76, 1]
⢠Fairness: How can we make sure ML systems donât discriminate? [3, 168, 6, 46, 119, 169] | 1606.06565#80 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 80 | [25] V. Mnih, Nature 518, 529 (2015). [26] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., Nature 529, 484 (2016), URL http://dx.doi.org/10. 1038/nature16961.
[27] N. Chomsky, Information and control 2, 137 (1959).
16
[28] Y. Kim, Y. Jernite, D. Sontag, and A. M. Rush (2015), 1508.06615, URL https://arxiv.org/abs/1508.06615.
[29] A. Graves, ArXiv e-prints (2013), 1308.0850. [30] A. Graves, A.-r. Mohamed, and G. Hinton, in 2013 IEEE international conference on acoustics, speech and signal processing (IEEE, 2013), pp. 6645â6649.
[31] R. Collobert and J. Weston, in Proceedings of the 25th international conference on Machine learning (ACM, 2008), pp. 160â167. | 1606.06737#80 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 81 | ⢠Fairness: How can we make sure ML systems donât discriminate? [3, 168, 6, 46, 119, 169]
Security: What can a malicious adversary do to a ML system? [149, 96, 97, 115, 108, 19] ⢠Abuse:5 How do we prevent the misuse of ML systems to attack or harm people? [16] ⢠Transparency: How can we understand what complicated ML systems are doing? [112,
166, 105, 109]
⢠Policy: How do we predict and respond to the economic and social consequences of ML? [32, 52, 15, 33]
We believe that research on these topics has both urgency and great promise, and that fruitful intersection is likely to exist between these topics and the topics we discuss in this paper.
# 9 Conclusion
This paper analyzed the problem of accidents in machine learning systems and particularly rein- forcement learning agents, where an accident is deï¬ned as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We presented ï¬ve possible research problems related to accident risk and for each we discussed possible approaches that are highly amenable to concrete experimental work. | 1606.06565#81 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 81 | [31] R. Collobert and J. Weston, in Proceedings of the 25th international conference on Machine learning (ACM, 2008), pp. 160â167.
[32] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu (????).
[33] J. Schmidhuber, Neural Networks 61, 85 (2015). [34] Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436
(2015).
[35] S. Hochreiter and J. Schmidhuber, Neural computation 9, 1735 (1997).
[36] S. M. Shieber, in The Formal complexity of natural lan- guage (Springer, 1985), pp. 320â334.
[37] A. V. Anisimov, Cybernetics and Systems Analysis 7, 594 (1971).
[38] C. E. Shannon, ACM SIGMOBILE Mobile Computing and Communications Review 5, 3 (1948). | 1606.06737#81 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 82 | With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justiï¬ed loss of trust in automated systems. The risk of larger accidents is more diï¬cult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful. While many current-day safety problems can and have been handled with ad hoc ï¬xes or case-by-case rules, we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a uniï¬ed approach to prevent these systems from causing unintended harm.
5Note that âsecurityâ diï¬ers from âabuseâ in that the former involves attacks against a legitimate ML system by an adversary (e.g. a criminal tries to fool a face recognition system), while the latter involves attacks by an ML system controlled by an adversary (e.g. a criminal trains a âsmart hackerâ system to break into a website).
21
# Acknowledgements | 1606.06565#82 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 82 | [38] C. E. Shannon, ACM SIGMOBILE Mobile Computing and Communications Review 5, 3 (1948).
[39] S. Kullback and R. A. Leibler, Ann. Math. Statist. 22, 79 (1951), URL http://dx.doi.org/10.1214/aoms/1177729694.
[40] T. M. Cover and J. A. Thomas, Elements of information theory (John Wiley & Sons, 2012).
[41] L. R. Rabiner, Proceedings of the IEEE 77, 257 (1989).
[42] R. C. Carrasco and J. Oncina, in International Colloquium on Grammatical Inference (Springer, 1994), pp. 139–152.
[43] S. Ginsburg, The Mathematical Theory of Context Free Languages.[Mit Fig.] (McGraw-Hill Book Company, 1966).
[44] T. L. Booth, in Switching and Automata Theory, 1969., IEEE Conference Record of 10th Annual Symposium on (IEEE, 1969), pp. 74–81.
[45] T. Huang and K. Fu, Information Sciences 3, 201 (1971), ISSN 0020-0255, URL http://www.sciencedirect.com/science/article/pii/S0020025571800075. | 1606.06737#82 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
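[Editor's illustration] The summary field of the record above states that, for a probabilistic regular (Markov) process, the mutual information between two symbols decays exponentially with their separation. As a quick, hedged illustration only (not code from either paper), the sketch below computes I(X_0; X_d) exactly for a hypothetical two-state Markov chain; the transition matrix T, the chosen probabilities, and all variable names are assumptions made for this example.

```python
# Minimal sketch: mutual information between symbols d steps apart in an
# assumed two-state Markov chain, to illustrate the exponential-decay claim.
import numpy as np

# Hypothetical transition matrix; T[i, j] = P(next state = j | current = i).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution pi: left eigenvector of T with eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

def mutual_information(T, pi, d):
    """I(X_0; X_d) in bits for separation d."""
    Td = np.linalg.matrix_power(T, d)      # d-step transition probabilities
    joint = pi[:, None] * Td               # P(X_0 = i, X_d = j)
    marg = joint.sum(axis=0)               # P(X_d = j)
    indep = pi[:, None] * marg[None, :]    # product of marginals
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / indep[mask]))

for d in [1, 2, 4, 8, 16, 32]:
    print(d, mutual_information(T, pi, d))
```

Running this shows I(X_0; X_d) shrinking roughly geometrically in d (governed by the second eigenvalue of T), consistent with the exponential decay the summary attributes to Markov processes; a power-law tail would instead require the kind of recursive, context-free generative process the summary contrasts it with.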
1606.06565 | 83 | # Acknowledgements
We thank Shane Legg, Peter Norvig, Ilya Sutskever, Greg Corrado, Laurent Orseau, David Krueger, Rif Saurous, David Andersen, and Victoria Krakovna for detailed feedback and suggestions. We would also like to thank Geoffrey Irving, Toby Ord, Quoc Le, Greg Wayne, Daniel Dewey, Nick Beckstead, Holden Karnofsky, Chelsea Finn, Marcello Herreshoff, Alex Donaldson, Jared Kaplan, Greg Brockman, Wojciech Zaremba, Ian Goodfellow, Dylan Hadfield-Menell, Jessica Taylor, Blaise Aguera y Arcas, David Berlekamp, Aaron Courville, and Jeff Dean for helpful discussions and comments. Paul Christiano was supported as part of the Future of Life Institute FLI-RFP-AI1 program, grant #2015–143898. In addition a minority of the work done by Paul Christiano was performed as a contractor for Theiss Research and at OpenAI. Finally, we thank the Google Brain team for providing a supportive environment and encouraging us to publish this work.
# References | 1606.06565#83 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 83 | [46] K. Lari and S. J. Young, Computer speech & language 4, 35 (1990).
[47] D. Harlow, S. H. Shenker, D. Stanford, and L. Susskind, Physical Review D 85, 063516 (2012).
[48] L. Van Hove, Physica 16, 137 (1950).
[49] J. A. Cuesta and A. Sánchez, Journal of Statistical Physics 115, 869 (2004), cond-mat/0306354.
[50] G. Evenbly and G. Vidal, Journal of Statistical Physics 145, 891 (2011).
[51] A. M. Saxe, J. L. McClelland, and S. Ganguli, arXiv preprint arXiv:1312.6120 (2013).
[52] M. Mahoney, Large text compression benchmark.
[53] A. Karpathy, J. Johnson, and L. Fei-Fei, ArXiv e-prints (2015), 1506.02078.
[54] S.-i. Amari, in Differential-Geometrical Methods in Statistics (Springer, 1985), pp. 66–103.
[55] T. Morimoto, Journal of the Physical Society of Japan 18, 328 (1963). | 1606.06737#83 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 84 | # References
[1] Martin Abadi et al. "Deep Learning with Differential Privacy". In: (in press (2016)).
[2] Pieter Abbeel and Andrew Y Ng. "Exploration and apprenticeship learning in reinforcement learning". In: Proceedings of the 22nd international conference on Machine learning. ACM. 2005, pp. 1–8.
[3] Julius Adebayo, Lalana Kagal, and Alex Pentland. The Hidden Cost of Efficiency: Fairness and Discrimination in Predictive Modeling. 2015.
[4] Alekh Agarwal et al. "Taming the monster: A fast and simple algorithm for contextual bandits". In: (2014).
[5] Hana Ajakan et al. "Domain-adversarial neural networks". In: arXiv preprint arXiv:1412.4446 (2014).
[6] Ifeoma Ajunwa et al. "Hiring by algorithm: predicting and preventing disparate impact". In: Available at SSRN 2746078 (2016).
[7] Dario Amodei et al. "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin". In: arXiv preprint arXiv:1512.02595 (2015). | 1606.06565#84 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06737 | 84 | [55] T. Morimoto, Journal of the Physical Society of Japan 18, 328 (1963).
[56] I. Csiszár et al., Studia Sci. Math. Hungar. 2, 299 (1967).
[57] S. M. Ali and S. D. Silvey, Journal of the Royal Statistical Society. Series B (Methodological) pp. 131–142 (1966).
[58] C. W. Gardiner et al., Handbook of stochastic methods, vol. 3 (Springer Berlin, 1985).
[59] P. Grassberger, ArXiv Physics e-prints (2003), physics/0307138.
[60] W. Li, Journal of Statistical Physics 60, 823 (1990), ISSN 1572-9613, URL http://dx.doi.org/10.1007/BF01025996.
17 | 1606.06737#84 | Criticality in Formal Languages and Statistical Physics | We show that the mutual information between two symbols, as a function of the
number of symbols between the two, decays exponentially in any probabilistic
regular grammar, but can decay like a power law for a context-free grammar.
This result about formal languages is closely related to a well-known result in
classical statistical mechanics that there are no phase transitions in
dimensions fewer than two. It is also related to the emergence of power-law
correlations in turbulence and cosmological inflation through recursive
generative processes. We elucidate these physics connections and comment on
potential applications of our results to machine learning tasks like training
artificial recurrent neural networks. Along the way, we introduce a useful
quantity which we dub the rational mutual information and discuss
generalizations of our claims involving more complicated Bayesian networks. | http://arxiv.org/pdf/1606.06737 | Henry W. Lin, Max Tegmark | cond-mat.dis-nn, cs.CL | Replaced to match final published version. Discussion improved,
references added | Entropy, 19, 299 (2017) | cond-mat.dis-nn | 20160621 | 20170823 | [] |
1606.06565 | 85 | [8] An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence. Open Letter. Signed by 8,600 people; see attached research agenda. 2015.
[9] Animashree Anandkumar, Daniel Hsu, and Sham M Kakade. "A method of moments for mixture models and hidden Markov models". In: arXiv preprint arXiv:1203.0683 (2012).
[10] Theodore W Anderson and Herman Rubin. "Estimation of the parameters of a single equation in a complete system of stochastic equations". In: The Annals of Mathematical Statistics (1949), pp. 46–63.
[11] Theodore W Anderson and Herman Rubin. "The asymptotic properties of estimates of the parameters of a single equation in a complete system of stochastic equations". In: The Annals of Mathematical Statistics (1950), pp. 570–582.
[12] Stuart Armstrong. "Motivated value selection for artificial agents". In: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015. | 1606.06565#85 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 86 | [13] Stuart Armstrong. The mathematics of reduced impact: help needed. 2012.
[14] Stuart Armstrong. Utility indifference. Tech. rep. Technical Report 2010-1. Oxford: Future of Humanity Institute, University of Oxford, 2010.
[15] Melanie Arntz, Terry Gregory, and Ulrich Zierahn. "The Risk of Automation for Jobs in OECD Countries". In: OECD Social, Employment and Migration Working Papers (2016). url: http://dx.doi.org/10.1787/5jlz9h56dvq7-en.
[16] Autonomous Weapons: An Open Letter from AI & Robotics Researchers. Open Letter. Signed by 20,000+ people. 2015.
[17] James Babcock, Janos Kramar, and Roman Yampolskiy. "The AGI Containment Problem". In: The Ninth Conference on Artificial General Intelligence (2016).
[18] Krishnakumar Balasubramanian, Pinar Donmez, and Guy Lebanon. "Unsupervised supervised learning ii: Margin-based classification without labels". In: The Journal of Machine Learning Research 12 (2011), pp. 3119–3145. | 1606.06565#86 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 87 | [19] Marco Barreno et al. "The security of machine learning". In: Machine Learning 81.2 (2010), pp. 121–148.
[20] Tamer Başar and Pierre Bernhard. H-infinity optimal control and related minimax design problems: a dynamic game approach. Springer Science & Business Media, 2008.
[21] Michèle Basseville. "Detecting changes in signals and systems–a survey". In: Automatica 24.3 (1988), pp. 309–326.
[22] F Berkenkamp, A Krause, and Angela P Schoellig. "Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics." arXiv, 2016. In: arXiv preprint arXiv:1602.04450 ().
[23] Jon Bird and Paul Layzell. "The evolved radio and its implications for modelling the evolution of novel sensors". In: Evolutionary Computation, 2002. CEC'02. Proceedings of the 2002 Congress on. Vol. 2. IEEE. 2002, pp. 1836–1841. | 1606.06565#87 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 88 | [24] John Blitzer, Mark Dredze, Fernando Pereira, et al. "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification". In: ACL. Vol. 7. 2007, pp. 440–447.
[25] John Blitzer, Sham Kakade, and Dean P Foster. "Domain adaptation with coupled subspaces". In: International Conference on Artificial Intelligence and Statistics. 2011, pp. 173–181.
[26] Charles Blundell et al. "Weight uncertainty in neural networks". In: arXiv preprint arXiv:1505.05424 (2015).
[27] Nick Bostrom. Superintelligence: Paths, dangers, strategies. OUP Oxford, 2014.
[28] Léon Bottou. "Two high stakes challenges in machine learning". Invited talk at the 32nd International Conference on Machine Learning. 2015.
[29] Léon Bottou et al. "Counterfactual Reasoning and Learning Systems". In: arXiv preprint arXiv:1209.2355 (2012). | 1606.06565#88 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 89 | [30] Léon Bottou et al. "Counterfactual reasoning and learning systems: The example of computational advertising". In: The Journal of Machine Learning Research 14.1 (2013), pp. 3207–3260.
[31] Ronen I Brafman and Moshe Tennenholtz. "R-max-a general polynomial time algorithm for near-optimal reinforcement learning". In: The Journal of Machine Learning Research 3 (2003), pp. 213–231.
[32] Erik Brynjolfsson and Andrew McAfee. The second machine age: work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company, 2014.
[33] Ryan Calo. "Open robotics". In: Maryland Law Review 70.3 (2011).
[34] Paul Christiano. AI Control. [Online; accessed 13-June-2016]. 2015. url: https://medium.com/ai-control.
[35] Fabio Cozman and Ira Cohen. "Risks of semi-supervised learning". In: Semi-Supervised Learning (2006), pp. 56–72.
[36] Andrew Critch. "Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents". In: (2016). | 1606.06565#89 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 90 | [36] Andrew Critch. "Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents". In: (2016).
[37] Christian Daniel et al. "Active reward learning". In: Proceedings of Robotics Science & Systems. 2014.
[38] Ernest Davis. "Ethical guidelines for a superintelligence." In: Artif. Intell. 220 (2015), pp. 121–124.
[39] Alexander Philip Dawid and Allan M Skene. "Maximum likelihood estimation of observer error-rates using the EM algorithm". In: Applied statistics (1979), pp. 20–28.
[40] Peter Dayan and Geoffrey E Hinton. "Feudal reinforcement learning". In: Advances in neural information processing systems. Morgan Kaufmann Publishers. 1993, pp. 271–271.
[41] Kalyanmoy Deb. "Multi-objective optimization". In: Search methodologies. Springer, 2014, pp. 403–449.
[42] Daniel Dewey. "Learning what to value". In: Artificial General Intelligence. Springer, 2011, pp. 309–314. | 1606.06565#90 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 91 | [42] Daniel Dewey. "Learning what to value". In: Artificial General Intelligence. Springer, 2011, pp. 309–314.
[43] Daniel Dewey. "Reinforcement learning and the reward engineering principle". In: 2014 AAAI Spring Symposium Series. 2014.
[44] Pinar Donmez, Guy Lebanon, and Krishnakumar Balasubramanian. "Unsupervised supervised learning i: Estimating classification and regression errors without labels". In: The Journal of Machine Learning Research 11 (2010), pp. 1323–1351.
[45] Gregory Druck, Gideon Mann, and Andrew McCallum. "Learning from labeled features using generalized expectation criteria". In: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2008, pp. 595–602.
[46] Cynthia Dwork et al. "Fairness through awareness". In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ACM. 2012, pp. 214–226.
[47] Bradley Efron. "Computers and the theory of statistics: thinking the unthinkable". In: SIAM review 21.4 (1979), pp. 460–480. | 1606.06565#91 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 92 | [48] Owain Evans, Andreas Stuhlmüller, and Noah D Goodman. "Learning the preferences of ignorant, inconsistent agents". In: arXiv preprint arXiv:1512.05832 (2015).
[49] Tom Everitt and Marcus Hutter. "Avoiding wireheading with value reinforcement learning". In: arXiv preprint arXiv:1605.03143 (2016).
[50] Tom Everitt et al. "Self-Modification of Policy and Utility Function in Rational Agents". In: arXiv preprint arXiv:1605.03142 (2016).
[51] Chelsea Finn, Sergey Levine, and Pieter Abbeel. "Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization". In: arXiv preprint arXiv:1603.00448 (2016).
[52] Carl Benedikt Frey and Michael A Osborne. "The future of employment: how susceptible are jobs to computerisation". In: Retrieved September 7 (2013), p. 2013.
[53] Yarin Gal and Zoubin Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning". In: arXiv preprint arXiv:1506.02142 (2015). | 1606.06565#92 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 93 | [54] Joao Gama et al. "Learning with drift detection". In: Advances in artificial intelligence–SBIA 2004. Springer, 2004, pp. 286–295.
[55] Javier García and Fernando Fernández. "A Comprehensive Survey on Safe Reinforcement Learning". In: Journal of Machine Learning Research 16 (2015), pp. 1437–1480.
[56] Scott Garrabrant, Nate Soares, and Jessica Taylor. "Asymptotic Convergence in Online Learning with Unbounded Delays". In: arXiv preprint arXiv:1604.05280 (2016).
[57] Scott Garrabrant et al. "Uniform Coherence". In: arXiv preprint arXiv:1604.05288 (2016).
[58] Shalini Ghosh et al. "Trusted Machine Learning for Probabilistic Models". In: Reliable Machine Learning in the Wild at ICML 2016 (2016).
[59] Yolanda Gil et al. "Amplify scientific discovery with artificial intelligence". In: Science 346.6206 (2014), pp. 171–172. | 1606.06565#93 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 94 | [60] Alec Go, Richa Bhayani, and Lei Huang. "Twitter sentiment classification using distant supervision". In: CS224N Project Report, Stanford 1 (2009), p. 12.
[61] Ian Goodfellow et al. "Generative adversarial nets". In: Advances in Neural Information Processing Systems. 2014, pp. 2672–2680.
[62] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". In: arXiv preprint arXiv:1412.6572 (2014).
[63] Charles AE Goodhart. Problems of monetary management: the UK experience. Springer, 1984.
[64] Alex Graves, Greg Wayne, and Ivo Danihelka. "Neural turing machines". In: arXiv preprint arXiv:1410.5401 (2014).
[65] Sonal Gupta. "Distantly Supervised Information Extraction Using Bootstrapped Patterns". PhD thesis. Stanford University, 2015. | 1606.06565#94 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 95 | [65] Sonal Gupta. "Distantly Supervised Information Extraction Using Bootstrapped Patterns". PhD thesis. Stanford University, 2015.
[66] Dylan Hadfield-Menell et al. Cooperative Inverse Reinforcement Learning. 2016.
[67] Dylan Hadfield-Menell et al. "The Off-Switch". In: (2016).
[68] Lars Peter Hansen. "Large sample properties of generalized method of moments estimators". In: Econometrica: Journal of the Econometric Society (1982), pp. 1029–1054.
[69] Lars Peter Hansen. "Nobel Lecture: Uncertainty Outside and Inside Economic Models". In: Journal of Political Economy 122.5 (2014), pp. 945–987.
[70] Mark Herbster and Manfred K Warmuth. "Tracking the best linear predictor". In: The Journal of Machine Learning Research 1 (2001), pp. 281–309.
[71] Bill Hibbard. "Model-based utility functions". In: Journal of Artificial General Intelligence 3.1 (2012), pp. 1–24.
[72] Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. "Kernel methods in machine learning". In: The annals of statistics (2008), pp. 1171–1220. | 1606.06565#95 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 96 | [72] Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. "Kernel methods in machine learning". In: The annals of statistics (2008), pp. 1171–1220.
[73] Garud N Iyengar. "Robust dynamic programming". In: Mathematics of Operations Research 30.2 (2005), pp. 257–280.
[74] Ariel Jaffe, Boaz Nadler, and Yuval Kluger. "Estimating the accuracies of multiple classifiers without labeled data". In: arXiv preprint arXiv:1407.7644 (2014).
[75] Jean-Baptiste Jeannin et al. "A formally verified hybrid system for the next-generation airborne collision avoidance system". In: Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2015, pp. 21–36.
[76] Zhanglong Ji, Zachary C Lipton, and Charles Elkan. "Differential privacy and machine learning: A survey and review". In: arXiv preprint arXiv:1412.7584 (2014). | 1606.06565#96 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 97 | [77] Fredrik D Johansson, Uri Shalit, and David Sontag. "Learning Representations for Counterfactual Inference". In: arXiv preprint arXiv:1605.03661 (2016).
[78] Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. "Planning and acting in partially observable stochastic domains". In: Artificial intelligence 101.1 (1998), pp. 99–134.
[79] Lukasz Kaiser and Ilya Sutskever. "Neural GPUs learn algorithms". In: arXiv preprint arXiv:1511.08228 (2015).
[80] Yoshinobu Kawahara and Masashi Sugiyama. "Change-Point Detection in Time-Series Data by Direct Density-Ratio Estimation." In: SDM. Vol. 9. SIAM. 2009, pp. 389–400.
[81] F. Khani, M. Rinard, and P. Liang. "Unanimous Prediction for 100% Precision with Application to Learning Semantic Parsers". In: Association for Computational Linguistics (ACL). 2016. | 1606.06565#97 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 98 | [82] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. "Imagenet classification with deep convolutional neural networks". In: Advances in neural information processing systems. 2012, pp. 1097–1105.
[83] Volodymyr Kuleshov and Percy S Liang. "Calibrated Structured Prediction". In: Advances in Neural Information Processing Systems. 2015, pp. 3456–3464.
[84] Tejas D Kulkarni et al. "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation". In: arXiv preprint arXiv:1604.06057 (2016).
[85] Neil Lawrence. Discussion of "Superintelligence: Paths, Dangers, Strategies". 2016.
[86] Jesse Levinson et al. "Towards fully autonomous driving: Systems and algorithms". In: Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE. 2011, pp. 163–168.
[87] Lihong Li et al. "Knows what it knows: a framework for self-aware learning". In: Machine learning 82.3 (2011), pp. 399–443. | 1606.06565#98 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 99 | [88] Yu-Feng Li and Zhi-Hua Zhou. "Towards making unlabeled data never hurt". In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 37.1 (2015), pp. 175–188.
[89] Percy Liang. "On the Elusiveness of a Specification for AI". NIPS 2015, Symposium: Algorithms Among Us. 2015. url: http://research.microsoft.com/apps/video/default.aspx?id=260009&r=1.
[90] Percy Liang and Dan Klein. "Analyzing the Errors of Unsupervised Learning." In: ACL. 2008, pp. 879–887.
[91] Song Liu et al. "Change-point detection in time-series data by relative density-ratio estimation". In: Neural Networks 43 (2013), pp. 72–83.
[92] Sarah M Loos, David Renshaw, and André Platzer. "Formal verification of distributed aircraft controllers". In: Proceedings of the 16th international conference on Hybrid systems: computation and control. ACM. 2013, pp. 125–130. | 1606.06565#99 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI. | http://arxiv.org/pdf/1606.06565 | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané | cs.AI, cs.LG | 29 pages | null | cs.AI | 20160621 | 20160725 | [
{
"id": "1507.01986"
},
{
"id": "1506.02142"
},
{
"id": "1602.04621"
},
{
"id": "1602.04450"
},
{
"id": "1605.09304"
},
{
"id": "1606.05374"
},
{
"id": "1604.05288"
},
{
"id": "1603.00448"
},
{
"id": "1605.03661"
},
{
"id": "1512.02595"
},
{
"id": "1511.08228"
},
{
"id": "1510.04342"
},
{
"id": "1604.05280"
},
{
"id": "1606.04753"
},
{
"id": "1606.03976"
},
{
"id": "1602.02697"
},
{
"id": "1505.05424"
},
{
"id": "1604.06057"
},
{
"id": "1605.03143"
},
{
"id": "1506.06579"
},
{
"id": "1605.03142"
},
{
"id": "1502.05698"
},
{
"id": "1512.05832"
},
{
"id": "1502.02072"
},
{
"id": "1506.02438"
},
{
"id": "1502.02362"
}
] |
1606.06565 | 100 | [93] John Lygeros, Claire Tomlin, and Shankar Sastry. "Controllers for reachability specifications for hybrid systems". In: Automatica 35.3 (1999), pp. 349–370.
[94] Gideon S Mann and Andrew McCallum. "Generalized expectation criteria for semi-supervised learning with weakly labeled data". In: The Journal of Machine Learning Research 11 (2010), pp. 955–984.
[95] John McCarthy and Patrick J Hayes. "Some philosophical problems from the standpoint of artificial intelligence". In: Readings in artificial intelligence (1969), pp. 431–450.
[96] Shike Mei and Xiaojin Zhu. "The Security of Latent Dirichlet Allocation." In: AISTATS. 2015.
[97] Shike Mei and Xiaojin Zhu. "Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners." In: AAAI. 2015, pp. 2871–2877.
[98] Bernard Merialdo. "Tagging English text with a probabilistic model". In: Computational linguistics 20.2 (1994), pp. 155–171. | 1606.06565#100 | Concrete Problems in AI Safety | Rapid progress in machine learning and artificial intelligence (AI) has
[99] Mike Mintz et al. "Distant supervision for relation extraction without labeled data". In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Association for Computational Linguistics. 2009, pp. 1003–1011.
[100] Ian M Mitchell, Alexandre M Bayen, and Claire J Tomlin. "A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games". In: Automatic Control, IEEE Transactions on 50.7 (2005), pp. 947–957.
[101] Stefan Mitsch, Sarah M Loos, and André Platzer. "Towards formal verification of freeway traffic control". In: Cyber-Physical Systems (ICCPS), 2012 IEEE/ACM Third International Conference on. IEEE. 2012, pp. 171–180.
[102] Volodymyr Mnih et al. "Human-level control through deep reinforcement learning". In: Nature 518.7540 (2015), pp. 529–533.
[103] Shakir Mohamed and Danilo Jimenez Rezende. "Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning". In: Advances in Neural Information Processing Systems. 2015, pp. 2116–2124.
[104] Teodor Mihai Moldovan and Pieter Abbeel. "Safe exploration in markov decision processes". In: arXiv preprint arXiv:1205.4810 (2012).
[105] Alexander Mordvintsev, Christopher Olah, and Mike Tyka. "Inceptionism: Going deeper into neural networks". In: Google Research Blog. Retrieved June 20 (2015).
[106] Jersey Neyman. "Sur les applications de la théorie des probabilités aux experiences agricoles: Essai des principes". In: Roczniki Nauk Rolniczych 10 (1923), pp. 1–51.
[107] Andrew Y Ng, Stuart J Russell, et al. "Algorithms for inverse reinforcement learning." In: Icml. 2000, pp. 663–670.
[108] Anh Nguyen, Jason Yosinski, and Jeff Clune. "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images". In: Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on. IEEE. 2015, pp. 427–436.
[109] Anh Nguyen et al. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks". In: arXiv preprint arXiv:1605.09304 (2016).
[110] Kamal Nigam et al. "Learning to classify text from labeled and unlabeled documents". In: AAAI/IAAI 792 (1998).
[111] Arnab Nilim and Laurent El Ghaoui. "Robust control of Markov decision processes with uncertain transition matrices". In: Operations Research 53.5 (2005), pp. 780–798.
[112] Christopher Olah. Visualizing Representations: Deep Learning and Human Beings. 2015. url: http://colah.github.io/posts/2015-01-Visualizing-Representations/.
[113] Laurent Orseau and Stuart Armstrong. "Safely Interruptible Agents". In: (2016).
[114] Ian Osband et al. "Deep Exploration via Bootstrapped DQN". In: arXiv preprint arXiv:1602.04621 (2016).
[115] Nicolas Papernot et al. "Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples". In: arXiv preprint arXiv:1602.02697 (2016).
[116] Douglas B Paul and Janet M Baker. "The design for the Wall Street Journal-based CSR corpus". In: Proceedings of the workshop on Speech and Natural Language. Association for Computational Linguistics. 1992, pp. 357–362.
[117] Judea Pearl et al. "Causal inference in statistics: An overview". In: Statistics Surveys 3 (2009), pp. 96–146.
[118] Martin Pecka and Tomas Svoboda. "Safe exploration techniques for reinforcement learning—an overview". In: Modelling and Simulation for Autonomous Systems. Springer, 2014, pp. 357–375.
[119] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. "Discrimination-aware data mining". In: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM. 2008, pp. 560–568.
[120] Jonas Peters et al. "Causal discovery with continuous additive noise models". In: The Journal of Machine Learning Research 15.1 (2014), pp. 2009–2053.
[121] Emmanouil Antonios Platanios. "Estimating accuracy from unlabeled data". MA thesis. Carnegie Mellon University, 2015.
[122] Emmanouil Antonios Platanios, Avrim Blum, and Tom Mitchell. "Estimating accuracy from unlabeled data". In: (2014).
[123] Walter W Powell and Laurel Smith-Doerr. "Networks and economic life". In: The handbook of economic sociology 368 (1994), p. 380.
[124] Joaquin Quinonero-Candela et al. Dataset shift in machine learning, ser. Neural information processing series. 2009.
[125] Rajat Raina et al. "Self-taught learning: transfer learning from unlabeled data". In: Proceedings of the 24th international conference on Machine learning. ACM. 2007, pp. 759–766.
[126] Bharath Ramsundar et al. "Massively multitask networks for drug discovery". In: arXiv preprint arXiv:1502.02072 (2015).
[127] Mark Ring and Laurent Orseau. "Delusion, survival, and intelligent agents". In: Artificial General Intelligence. Springer, 2011, pp. 11–20.
[128] Stéphane Ross, Geoffrey J Gordon, and J Andrew Bagnell. "A reduction of imitation learning and structured prediction to no-regret online learning". In: arXiv preprint arXiv:1011.0686 (2010).
[129] Donald B Rubin. "Estimating causal effects of treatments in randomized and nonrandomized studies." In: Journal of educational Psychology 66.5 (1974), p. 688.
[130] Stuart Russell et al. "Research priorities for robust and beneficial artificial intelligence". In: Future of Life Institute (2015).
[131] Christoph Salge, Cornelius Glackin, and Daniel Polani. "Empowerment—an introduction". In: Guided Self-Organization: Inception. Springer, 2014, pp. 67–114.
[132] J Denis Sargan. "The estimation of relationships with autocorrelated residuals by the use of instrumental variables". In: Journal of the Royal Statistical Society. Series B (Methodological) (1959), pp. 91–105.
[133] John D Sargan. "The estimation of economic relationships using instrumental variables". In: Econometrica: Journal of the Econometric Society (1958), pp. 393–415.
[134] John Schulman et al. "High-dimensional continuous control using generalized advantage estimation". In: arXiv preprint arXiv:1506.02438 (2015).
[135] D Sculley et al. "Machine Learning: The High-Interest Credit Card of Technical Debt". In: (2014).
[136] Glenn Shafer and Vladimir Vovk. "A tutorial on conformal prediction". In: The Journal of Machine Learning Research 9 (2008), pp. 371–421.
[137] Uri Shalit, Fredrik Johansson, and David Sontag. "Bounding and Minimizing Counterfactual Error". In: arXiv preprint arXiv:1606.03976 (2016).
[138] Hidetoshi Shimodaira. "Improving predictive inference under covariate shift by weighting the log-likelihood function". In: Journal of statistical planning and inference 90.2 (2000), pp. 227–244.
[139] Jaeho Shin et al. "Incremental knowledge base construction using deepdive". In: Proceedings of the VLDB Endowment 8.11 (2015), pp. 1310–1321.
[140] David Silver et al. "Mastering the game of Go with deep neural networks and tree search". In: Nature 529.7587 (2016), pp. 484–489.
[141] SNES Super Mario World (USA) "arbitrary code execution". Tool-assisted movies. 2014. url: http://tasvideos.org/2513M.html.
[142] Nate Soares and Benja Fallenstein. "Toward idealized decision theory". In: arXiv preprint arXiv:1507.01986 (2015).
[143] Ray J Solomonoff. "A formal theory of inductive inference. Part I". In: Information and control 7.1 (1964), pp. 1–22.
[144] Ray J Solomonoff. "A formal theory of inductive inference. Part II". In: Information and control 7.2 (1964), pp. 224–254.
[145] J Steinebach. "EL Lehmann, JP Romano: Testing statistical hypotheses". In: Metrika 64.2 (2006), pp. 255–256.
[146] Jacob Steinhardt. Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems. [Online; accessed 13-June-2016]. 2015. url: https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems/.
[147] Jacob Steinhardt and Percy Liang. "Unsupervised Risk Estimation with only Structural Assumptions". In: (2016).
[148] Jacob Steinhardt and Russ Tedrake. "Finite-time regional verification of stochastic non-linear systems". In: The International Journal of Robotics Research 31.7 (2012), pp. 901–923.
[149] Jacob Steinhardt, Gregory Valiant, and Moses Charikar. "Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction". In: arXiv preprint arXiv:1606.05374 (2016). url: http://arxiv.org/abs/1606.05374.
[150] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1998.
[151] Adith Swaminathan and Thorsten Joachims. "Counterfactual risk minimization: Learning from logged bandit feedback". In: arXiv preprint arXiv:1502.02362 (2015).
[152] Christian Szegedy et al. "Intriguing properties of neural networks". In: arXiv preprint arXiv:1312.6199 (2013).
[153] Aviv Tamar, Yonatan Glassner, and Shie Mannor. "Policy gradients beyond expectations: Conditional value-at-risk". In: arXiv preprint arXiv:1404.3862 (2014).
[154] Jessica Taylor. "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization". In: (forthcoming). Submitted to AAAI (2016).
[155] Matthew E Taylor and Peter Stone. "Transfer learning for reinforcement learning domains: A survey". In: Journal of Machine Learning Research 10.Jul (2009), pp. 1633–1685.
[156] Philip S Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. "High-Confidence Off-Policy Evaluation." In: AAAI. 2015, pp. 3000–3006.
[157] Adrian Thompson. Artificial evolution in the physical world. 1997.
[158] Antonio Torralba and Alexei A Efros. "Unbiased look at dataset bias". In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE. 2011, pp. 1521–1528.
[159] Matteo Turchetta, Felix Berkenkamp, and Andreas Krause. "Safe Exploration in Finite Markov Decision Processes with Gaussian Processes". In: arXiv preprint arXiv:1606.04753 (2016).
[160] Stefan Wager and Susan Athey. "Estimation and Inference of Heterogeneous Treatment Effects using Random Forests". In: arXiv preprint arXiv:1510.04342 (2015).
[161] Daniel Weld and Oren Etzioni. "The first law of robotics (a call to arms)". In: AAAI. Vol. 94. 1994. 1994, pp. 1042–1047.
[162] Keenon Werling et al. "On-the-job learning with bayesian decision theory". In: Advances in Neural Information Processing Systems. 2015, pp. 3447–3455.
[163] Jason Weston et al. "Towards ai-complete question answering: A set of prerequisite toy tasks". In: arXiv preprint arXiv:1502.05698 (2015).