This is an intriguing analogy that suggests that there may be fruitful parallels between hierarchical RL and several aspects of the safety problem.

# 6 Safe Exploration

All autonomous learning agents need to sometimes engage in exploration – taking actions that don't seem ideal given current information, but which help the agent learn about its environment. However, exploration can be dangerous, since it involves taking actions whose consequences the agent doesn't understand well. In toy environments, like an Atari video game, there's a limit to how bad these consequences can be – maybe the agent loses some score, or runs into an enemy and suffers some damage. But the real world can be much less forgiving. Badly chosen actions may destroy the agent or trap it in states it can't get out of. Robot helicopters may run into the ground or damage property; industrial control systems could cause serious issues. Common exploration policies such as epsilon-greedy [150] or R-max [31] explore by choosing an action at random or viewing unexplored actions optimistically, and thus make no attempt to avoid these dangerous situations. More sophisticated exploration strategies that adopt a coherent exploration policy over extended temporal scales [114] could actually have even greater potential for harm, since a coherently chosen bad policy may be more insidious than mere random actions. Yet intuitively it seems like it should often be possible to predict which actions are dangerous and explore in a way that avoids them, even when we don't have that much information about the environment. For example, if I want to learn about tigers, should I buy a tiger, or buy a book about tigers? It takes only a tiny bit of prior knowledge about tigers to determine which option is safer.

In practice, real world RL projects can often avoid these issues by simply hard-coding an avoidance of catastrophic behaviors. For instance, an RL-based robot helicopter might be programmed to override its policy with a hard-coded collision avoidance sequence (such as spinning its propellers to gain altitude) whenever it's too close to the ground. This approach works well when there are only a few things that could go wrong, and the designers know all of them ahead of time. But as agents become more autonomous and act in more complex domains, it may become harder and harder to anticipate every possible catastrophic failure. The space of failure modes for an agent running a power grid or a search-and-rescue operation could be quite large.
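To make the contrast concrete, the following minimal Python sketch (illustrative, not drawn from the works cited above) wraps epsilon-greedy action selection in a hand-coded override. The `is_catastrophic` and `safe_fallback` hooks are hypothetical stand-ins for designer-supplied domain knowledge such as the helicopter's collision-avoidance sequence; by construction, the override only helps with failure modes the designers anticipated.

```python
import random

def safe_epsilon_greedy_action(q_values, state, is_catastrophic, safe_fallback,
                               epsilon=0.1):
    """Epsilon-greedy action selection with a hard-coded safety override.

    q_values: dict mapping action -> estimated value in the current state.
    is_catastrophic(state, action): hand-written predicate encoding the
        designer's prior knowledge of known failure modes (hypothetical hook).
    safe_fallback(state): hard-coded recovery action, e.g. "gain altitude".
    """
    actions = list(q_values.keys())
    # Standard epsilon-greedy proposal: explore at random, otherwise act greedily.
    if random.random() < epsilon:
        proposal = random.choice(actions)
    else:
        proposal = max(actions, key=q_values.get)

    # Hard-coded override: veto the proposal if it matches a known failure mode.
    if is_catastrophic(state, proposal):
        return safe_fallback(state)
    return proposal
```

The brittleness is visible in the code: anything not enumerated by `is_catastrophic` is explored as cheerfully as in ordinary epsilon-greedy.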
Hard-coding against every possible failure is unlikely to be feasible in these cases, so a more principled approach to preventing harmful exploration seems essential. Even in simple cases like the robot helicopter, a principled approach would simplify system design and reduce the need for domain-specific engineering. There is a sizable literature on such safe exploration – it is arguably the most studied of the problems we discuss in this document. [55, 118] provide thorough reviews of this literature, so we don't review it extensively here, but simply describe some general routes this research has taken and suggest some directions that might have increasing relevance as RL systems expand in scope and capability.

• Risk-Sensitive Performance Criteria: A body of existing literature considers changing the optimization criterion from expected total reward to other objectives that are better at preventing rare, catastrophic events; see [55] for a thorough and up-to-date review of this literature. These approaches involve optimizing worst-case performance, ensuring that the probability of very bad performance is small, or penalizing the variance in performance. These methods have not yet been tested with expressive function approximators such as deep neural networks, but this should be possible in principle for some of the methods, such as [153], which proposes a modification to policy gradient algorithms to optimize a risk-sensitive criterion. There is also recent work studying how to estimate uncertainty in value functions that are represented by deep neural networks [114, 53]; these ideas could be incorporated into risk-sensitive RL algorithms. Another line of work relevant to risk sensitivity uses off-policy estimation to perform a policy update that is good with high probability [156].

• Use Demonstrations: Exploration is necessary to ensure that the agent finds the states needed for near-optimal performance. We may be able to avoid the need for exploration altogether if we instead use inverse RL or apprenticeship learning, where the learning algorithm is provided with expert trajectories of near-optimal behavior [128, 2]. Recent progress in inverse reinforcement learning using deep neural networks to learn the cost function or policy [51] suggests that it might also be possible to reduce the need for exploration in advanced RL systems by training on a small set of demonstrations. Such demonstrations could be used to create a baseline policy, such that even if further learning is necessary, exploration away from the baseline policy can be limited in magnitude.

• Simulated Exploration: The more we can do our exploration in simulated environments instead of the real world, the less opportunity there is for catastrophe. It will probably always be necessary to do some real-world exploration, since many complex situations cannot be perfectly captured by a simulator, but it might be possible to learn about danger in simulation and then adopt a more conservative "safe exploration" policy when acting in the real world. Training RL agents (particularly robots) in simulated environments is already quite common, so advances in "exploration-focused simulation" could be easily incorporated into current workflows. In systems that involve a continual cycle of learning and deployment, there may be interesting research problems associated with how to safely incrementally update policies given simulation-based trajectories that imperfectly represent the consequences of those policies, as well as reliably accurate off-policy trajectories (e.g. "semi-on-policy" evaluation).

• Bounded Exploration: If we know that a certain portion of state space is safe, and that even the worst action within it can be recovered from or bounded in harm, we can allow the agent to run freely within those bounds. For example, a quadcopter sufficiently far from the ground might be able to explore safely, since even if something goes wrong there will be ample time for a human or another policy to rescue it. Better yet, if we have a model, we can extrapolate forward and ask whether an action will take us outside the safe state space (a minimal code sketch of this kind of check appears after this list). Safety can be defined as remaining within an ergodic region of the state space such that actions are reversible [104, 159], or as limiting the probability of huge negative reward to some small value [156]. Yet another approach uses separate safety and performance functions and attempts to obey constraints on the safety function with high probability [22]. As with several of the other directions, applying or adapting these methods to recently developed advanced RL systems could be a promising area of research. This idea seems related to H-infinity control [20] and regional verification [148].
• Trusted Policy Oversight: If we have a trusted policy and a model of the environment, we can limit exploration to actions the trusted policy believes we can recover from. It's fine to dive towards the ground, as long as we know we can pull out of the dive in time.

• Human Oversight: Another possibility is to check potentially unsafe actions with a human. Unfortunately, this problem runs into the scalable oversight problem: the agent may need to make too many exploratory actions for human oversight to be practical, or may need to make them too fast for humans to judge them. A key challenge to making this work is having the agent be a good judge of which exploratory actions are genuinely risky, versus which are safe actions it can unilaterally take; another challenge is finding appropriately safe actions to take while waiting for the oversight.
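Below is a minimal sketch of the model-based check referenced in the Bounded Exploration item above, combined with the trusted-policy idea: an exploratory action is allowed only if, according to an assumed dynamics model, the trusted policy could keep the agent inside a known-safe region afterwards. The `model`, `is_safe`, and `trusted_policy` callables are assumptions of the sketch rather than components of any existing system.

```python
import random

def action_is_recoverable(model, state, action, trusted_policy, is_safe, horizon=20):
    """Model-based check: after taking `action`, can the trusted policy keep us safe?

    model(state, action) -> predicted next state (an assumed dynamics model or simulator).
    trusted_policy(state) -> conservative recovery action.
    is_safe(state)       -> True if the state lies in the known-safe region.
    """
    state = model(state, action)
    for _ in range(horizon):
        if not is_safe(state):
            return False          # the trusted policy could not keep us in the safe set
        state = model(state, trusted_policy(state))
    return is_safe(state)

def guarded_exploration_action(model, state, candidate_actions, trusted_policy, is_safe):
    """Explore only among actions judged recoverable; otherwise act conservatively."""
    recoverable = [a for a in candidate_actions
                   if action_is_recoverable(model, state, a, trusted_policy, is_safe)]
    if recoverable:
        return random.choice(recoverable)
    return trusted_policy(state)  # nothing looks recoverable: fall back to the trusted policy
```

Such a check is only as good as the model and the safe-set definition, which is one reason the text treats principled safe exploration as an open research problem rather than an engineering detail.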
Potential Experiments: It might be helpful to have a suite of toy environments where unwary agents can fall prey to harmful exploration, but there is enough pattern to the possible catastrophes that clever agents can predict and avoid them. To some extent this feature already exists in autonomous helicopter competitions and Mars rover simulations [104], but there is always the risk of catastrophes being idiosyncratic, such that trained agents can overfit to them. A truly broad set of environments, containing conceptually distinct pitfalls that can cause unwary agents to receive extremely negative reward, and covering both physical and abstract catastrophes, might help in the development of safe exploration techniques for advanced RL systems. Such a suite of environments might serve a benchmarking role similar to that of the bAbI tasks [163], with the eventual goal being to develop a single architecture that can learn to avoid catastrophes in all environments in the suite.

# 7 Robustness to Distributional Change

All of us occasionally find ourselves in situations that our previous experience has not adequately prepared us to deal with – for instance, flying an airplane, traveling to a country whose culture is very different from ours, or taking care of children for the first time. Such situations are inherently difficult to handle and inevitably lead to some missteps. However, a key (and often rare) skill in dealing with such situations is to recognize our own ignorance, rather than simply assuming that the heuristics and intuitions we've developed for other situations will carry over perfectly. Machine learning systems also have this problem – a speech system trained on clean speech will perform very poorly on noisy speech, yet often be highly confident in its erroneous classifications (some of the authors have personally observed this in training automatic speech recognition systems). In the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office. Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results.

In general, when the testing distribution differs from the training distribution, machine learning systems may not only exhibit poor performance, but also wrongly assume that their performance is good. Such errors can be harmful or offensive – a classifier could give the wrong medical diagnosis with such high confidence that the data isn't flagged for human inspection, or a language model could output offensive text that it confidently believes is non-problematic. For autonomous agents acting in the world, there may be even greater potential for something bad to happen – for instance, an autonomous agent might overload a power grid because it incorrectly but confidently perceives that a particular region doesn't have enough power, and concludes that more power is urgently needed and overload is unlikely. More broadly, any agent whose perception or heuristic reasoning processes are not trained on the correct distribution may badly misunderstand its situation, and thus runs the risk of committing harmful actions that it does not realize are harmful. Additionally, safety checks that depend on trained machine learning systems (e.g. "does my visual system believe this route is clear?") may fail silently and unpredictably if those systems encounter real-world data that differs sufficiently from their training data. Having a better way to detect such failures, and ultimately having statistical assurances about how often they'll happen, seems critical to building safe and predictable systems.

For concreteness, we imagine that a machine learning model is trained on one distribution (call it p0) but deployed on a potentially different test distribution (call it p*). There are many other ways to formalize this problem (for instance, in an online learning setting with concept drift [70, 54]) but we will focus on the above for simplicity. An important point is that we likely have access to a large amount of labeled data at training time, but little or no labeled data at test time. Our goal is to ensure that the model "performs reasonably" on p*, in the sense that (1) it often performs well on p*, and (2) it knows when it is performing badly (and ideally can avoid/mitigate the bad performance by taking conservative actions or soliciting human input).

There are a variety of areas that are potentially relevant to this problem, including change detection and anomaly detection [21, 80, 91], hypothesis testing [145], transfer learning [138, 124, 125, 25], and several others [136, 87, 18, 122, 121, 74, 147]. Rather than fully reviewing all of this work in detail (which would necessitate a paper in itself), we will describe a few illustrative approaches and lay out some of their relative strengths and challenges.

Well-specified models: covariate shift and marginal likelihood. If we specialize to prediction tasks and let x denote the input and y denote the output (prediction target), then one possibility is to make the covariate shift assumption that p0(y|x) = p*(y|x). In this case, assuming that we can model p0(x) and p*(x) well, we can perform importance weighting by re-weighting each training example (x, y) by p*(x)/p0(x) [138, 124]. Then the importance-weighted samples allow us to estimate the performance on p*, and even re-train a model to perform well on p*. This approach is limited by the variance of the importance estimate, which is very large or even infinite unless p0 and p* are close together.
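A minimal sketch of the re-weighting idea follows, assuming we are handed density models for p0 and p* (itself a hard problem in high dimensions; in practice the density ratio is often estimated directly). The effective sample size computed at the end makes the variance caveat concrete: when p0 and p* are far apart, a few huge weights dominate the estimate.

```python
import numpy as np

def importance_weighted_risk(model, X_train, y_train, p_star, p_0, loss):
    """Estimate test-distribution risk from labeled training data alone.

    Under the covariate shift assumption p0(y|x) = p*(y|x), the risk on p* can be
    estimated by re-weighting training losses by p*(x)/p0(x).

    p_star, p_0: callables returning (estimated) input densities for an array of inputs.
    loss(prediction, label): pointwise loss, e.g. squared error or 0/1 loss.
    """
    weights = p_star(X_train) / p_0(X_train)                    # importance weights
    losses = np.array([loss(model(x), y) for x, y in zip(X_train, y_train)])
    estimate = float(np.mean(weights * losses))

    # Effective sample size: small values signal that a handful of large weights
    # dominate, i.e. the variance problem described in the text.
    ess = float(weights.sum() ** 2 / (weights ** 2).sum())
    return estimate, ess
```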
An alternative to sample re-weighting involves assuming a well-specified model family, in which case there is a single optimal model for predicting under both p0 and p*. In this case, one need only heed finite-sample variance in the estimated model [25, 87]. A limitation to this approach, at least currently, is that models are often mis-specified in practice. However, this could potentially be overcome by employing highly expressive model families such as reproducing kernel Hilbert spaces [72], Turing machines [143, 144], or sufficiently expressive neural nets [64, 79]. In the latter case, there has been interesting recent work on using bootstrapping to estimate finite-sample variation in the learned parameters of a neural network [114]; it seems worthwhile to better understand whether this approach can be used to effectively estimate out-of-sample performance in practice, as well as how local minima, lack of curvature, and other peculiarities relative to the typical setting of the bootstrap [47] affect the validity of this approach.

All of the approaches so far rely on the covariate shift assumption, which is very strong and is also untestable; the latter property is particularly problematic from a safety perspective, since it could lead to silent failures in a machine learning system. Another approach, which does not rely on covariate shift, builds a generative model of the distribution. Rather than assuming that p(x) changes while p(y|x) stays the same, we are free to assume other invariants (for instance, that p(y) changes but p(x|y) stays the same, or that certain conditional independencies are preserved). An advantage is that such assumptions are typically more testable than the covariate shift assumption (since they do not only involve the unobserved variable y). A disadvantage is that generative approaches are even more fragile than discriminative approaches in the presence of model mis-specification – for instance, there is a large empirical literature showing that generative approaches to semi-supervised learning based on maximizing marginal likelihood can perform very poorly when the model is mis-specified [98, 110, 35, 90, 88].

The approaches discussed above all rely relatively strongly on having a well-specified model family – one that contains the true distribution or true concept. This can be problematic in many cases, since nature is often more complicated than our model family is capable of capturing. As noted above, it may be possible to mitigate this with very expressive models, such as kernels, Turing machines, or very large neural networks, but even here there is at least some remaining problem: for example, even if our model family consists of all Turing machines, given any finite amount of data we can only actually learn among Turing machines up to a given description length, and if the Turing machine describing nature exceeds this length, we are back to the mis-specified regime (alternatively, nature might not even be describable by a Turing machine).

Partially specified models: method of moments, unsupervised risk estimation, causal identification, and limited-information maximum likelihood. Another approach is to take for granted that constructing a fully well-specified model family is probably infeasible, and to design methods that perform well despite this fact. This leads to the idea of partially specified models – models for which assumptions are made about some aspects of a distribution, but for which we are agnostic or make limited assumptions about other aspects. For a simple example, consider a variant of linear regression where we might assume that y = ⟨w*, x⟩ + v, where E[v | x] = 0, but we don't make any further assumptions about the distributional form of the noise v. It turns out that this is already enough to identify the parameters w*, and that these parameters will minimize the squared prediction error even if the distribution over x changes. What is interesting about this example is that w* can be identified even with an incomplete (partial) specification of the noise distribution.
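The claim about the linear regression example can be checked with a small simulation (ours, purely illustrative): fit w by ordinary least squares under one distribution over x, then evaluate under a very different one. Because only E[v | x] = 0 is assumed, the noise below is deliberately non-Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

def sample(n, x_scale):
    """Generate y = <w*, x> + v with E[v|x] = 0 and skewed (non-Gaussian) noise."""
    X = rng.normal(scale=x_scale, size=(n, 2))
    v = rng.exponential(1.0, size=n) - 1.0   # zero-mean, heavily skewed
    return X, X @ w_true + v

# Fit by ordinary least squares under a "training" distribution over x ...
X_train, y_train = sample(50_000, x_scale=1.0)
w_hat = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

# ... and evaluate under a shifted distribution over x.
X_test, y_test = sample(50_000, x_scale=5.0)
print("estimated w:", w_hat)                     # close to w_true
print("excess test MSE vs. true w:",
      np.mean((y_test - X_test @ w_hat) ** 2)
      - np.mean((y_test - X_test @ w_true) ** 2))  # close to zero
```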
This insight can be substantially generalized, and is one of the primary motivations for the generalized method of moments in econometrics [68, 123, 69]. The econometrics literature has in fact developed a large family of tools for handling partial specification, which also includes limited-information maximum likelihood and instrumental variables [10, 11, 133, 132]. Returning to machine learning, the method of moments has recently seen a great deal of success for use in the estimation of latent variable models [9]. While the current focus is on using the method of moments to overcome non-convexity issues, it can also offer a way to perform unsupervised learning while relying only on conditional independence assumptions, rather than the strong distributional assumptions underlying maximum likelihood learning [147].

Finally, some recent work in machine learning focuses only on modeling the distribution of errors of a model, which is sufficient for determining whether a model is performing well or poorly. Formally, the goal is to perform unsupervised risk estimation – given a model and unlabeled data from a test distribution, estimate the labeled risk of the model. This formalism, introduced by [44], has the advantage of potentially handling very large changes between train and test – even if the test distribution looks completely different from the training distribution and we have no hope of outputting accurate predictions, unsupervised risk estimation may still be possible, as in this case we would only need to output a large estimate for the risk. As in [147], one can approach unsupervised risk estimation by positing certain conditional independencies in the distribution of errors, and using this to estimate the error distribution from unlabeled data [39, 170, 121, 74]. Instead of assuming independence, another assumption is that the errors are Gaussian conditioned on the true output y, in which case estimating the risk reduces to estimating a Gaussian mixture model [18]. Because these methods focus only on the model errors and ignore other aspects of the data distribution, they can also be seen as an instance of partial model specification.
Training on multiple distributions. One could also train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution. One of the authors has found this to be the case, for instance, in the context of automated speech recognition systems [7]. One could potentially combine this with any of the ideas above, and/or take an engineering approach of simply trying to develop design methodologies that consistently allow one to collect a representative set of training sets and from this build a model that consistently generalizes to novel distributions. Even for this engineering approach, it seems important to be able to detect when one is in a situation that was not covered by the training data and to respond appropriately, and to have methodologies for adequately stress-testing the model with distributions that are sufficiently different from the set of training distributions.

How to respond when out-of-distribution. The approaches described above focus on detecting when a model is unlikely to make good predictions on a new distribution. An important related question is what to do once the detection occurs. One natural approach would be to ask humans for information, though in the context of complex structured output tasks it may be unclear a priori what question to ask, and in time-critical situations asking for information may not be an option. For the former challenge, there has been some recent promising work on pinpointing aspects of a structure that a model is uncertain about [162, 81], as well as obtaining calibration in structured output settings [83], but we believe there is much work yet to be done. For the latter challenge, there is also relevant work based on reachability analysis [93, 100] and robust policy improvement [164], which provide potential methods for deploying conservative policies in situations of uncertainty; to our knowledge, this work has not yet been combined with methods for detecting out-of-distribution failures of a model.

Beyond the structured output setting, for agents that can act in an environment (such as RL agents), information about the reliability of percepts in uncertain situations seems to have great potential value. In sufficiently rich environments, these agents may have the option to gather information that clarifies the percept (e.g. if in a noisy environment, move closer to the speaker), engage in low-stakes experimentation when uncertainty is high (e.g. try a potentially dangerous chemical reaction in a controlled environment), or seek experiences that are likely to help expose the perception system to the relevant distribution (e.g. practice listening to accented speech). Humans utilize such information routinely, but to our knowledge current RL techniques make little effort to do so, perhaps because popular RL environments are typically not rich enough to require such subtle management of uncertainty. Properly responding to out-of-distribution information thus seems to the authors like an exciting and (as far as we are aware) mostly unexplored challenge for next-generation RL systems.
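One simple way to combine detection with a conservative response is sketched below, using ensemble disagreement as the novelty signal (only one of many possible signals) and assuming a scikit-learn-style `predict_proba` interface: act on the prediction when the ensemble agrees, ask a human when one is available, and otherwise fall back to a safe default.

```python
import numpy as np

def predict_or_defer(ensemble, x, disagreement_threshold=0.2,
                     conservative_action=None, ask_human=None):
    """Detect a likely out-of-distribution input and respond conservatively.

    ensemble: models trained on (subsets of) the training distribution, each exposing
        predict_proba(x) returning class probabilities for one input (an assumed,
        scikit-learn-style interface for this sketch).
    ask_human: optional callback returning a human decision, used when time permits.
    conservative_action: safe default when no human oversight is available.
    """
    probs = np.stack([np.ravel(m.predict_proba(x)) for m in ensemble])  # (k, n_classes)
    mean_probs = probs.mean(axis=0)
    disagreement = float(probs.std(axis=0).max())   # crude novelty signal

    if disagreement < disagreement_threshold:
        return int(mean_probs.argmax())             # ensemble agrees: act on the prediction
    if ask_human is not None:
        return ask_human(x)                         # detection fired: solicit human input
    return conservative_action                      # no oversight available: safe default
```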
A unifying view: counterfactual reasoning and machine learning with contracts. Some of the authors have found two viewpoints to be particularly helpful when thinking about problems related to out-of-distribution prediction. The first is counterfactual reasoning [106, 129, 117, 30], where one asks "what would have happened if the world were different in a certain way?" In some sense, distributional shift can be thought of as a particular type of counterfactual, and so understanding counterfactual reasoning is likely to help in making systems robust to distributional shift. We are excited by recent work applying counterfactual reasoning techniques to machine learning problems [30, 120, 151, 160, 77, 137], though there appears to be much work remaining to be done to scale these to high-dimensional and highly complex settings.
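One concrete and widely used instance of counterfactual reasoning in machine learning systems, in the spirit of the computational-advertising work cited above (though the code below is an illustrative sketch rather than that paper's method), is inverse propensity scoring: estimating how an alternative policy would have performed from logs collected under the deployed policy.

```python
import numpy as np

def ips_estimate(logs, new_policy_prob):
    """Counterfactual ("what if we had deployed a different policy?") value estimate.

    logs: list of (context, action, reward, logging_prob) tuples recorded under the
        deployed policy, where logging_prob is the probability the deployed policy
        assigned to the logged action.
    new_policy_prob(context, action): probability the alternative policy would have
        taken that action in that context.
    """
    weights = np.array([new_policy_prob(c, a) / p for c, a, r, p in logs])
    rewards = np.array([r for _, _, r, _ in logs])
    return float(np.mean(weights * rewards))
```

The importance weights here inherit the same variance problems as the covariate-shift weights earlier in this section, and the estimate is only valid where the logging policy gave the alternative actions nonzero probability.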
The second perspective is machine learning with contracts – in this perspective, one would like to construct machine learning systems that satisfy a well-defined contract on their behavior, in analogy with the design of software systems [135, 28, 89]. [135] enumerates a list of ways in which existing machine learning systems fail to do this, and the problems this can cause for deployment and maintenance of machine learning systems at scale. The simplest and to our mind most important failure is the extremely brittle implicit contract in most machine learning systems, namely that they only necessarily perform well if the training and test distributions are identical. This condition is difficult to check and rare in practice, and it would be valuable to build systems that perform well under weaker contracts that are easier to reason about. Partially specified models offer one approach to this – rather than requiring the distributions to be identical, we only need them to match on the pieces of the distribution that are specified in the model. Reachability analysis [93, 100] and model repair [58] provide other avenues for obtaining better contracts – in reachability analysis, we optimize performance subject to the condition that a safe region can always be reached by a known conservative policy, and in model repair we alter a trained model to ensure that certain desired safety properties hold.

Summary. There are a variety of approaches to building machine learning systems that robustly perform well when deployed on novel test distributions. One family of approaches is based on assuming a well-specified model; in this case, the primary obstacles are the difficulty of building well-specified models in practice, an incomplete picture of how to maintain uncertainty on novel distributions in the presence of finite training data, and the difficulty of detecting when a model is mis-specified. Another family of approaches only assumes a partially specified model; this approach is potentially promising, but it currently suffers from a lack of development in the context of machine learning, since most of the historical development has been by the field of econometrics; there is also a question of whether partially specified models are fundamentally constrained to simple situations and/or conservative predictions, or whether they can meaningfully scale to the complex situations demanded by modern machine learning applications. Finally, one could try to train on multiple training distributions in the hope that a model which simultaneously works well on many training distributions will also work well on a novel test distribution; for this approach it seems particularly important to stress-test the learned model with distributions that are substantially different from any in the set of training distributions. In addition, it is probably still important to be able to predict when inputs are too novel to admit good predictions.

Potential Experiments: Speech systems frequently exhibit poor calibration when they go out-of-distribution, so a speech system that "knows when it is uncertain" could be one possible demonstration project. To be specific, the challenge could be: train a state-of-the-art speech system on a standard dataset [116] such that it gives well-calibrated results (if not necessarily good results) on a range of other test sets, like noisy and accented speech. Current systems not only perform poorly on these test sets when trained only on small datasets, but are usually overconfident in their incorrect transcriptions. Fixing this problem without harming performance on the original training set would be a valuable achievement, and would obviously have practical value. More generally, it would be valuable to design models that could consistently estimate (bounds on) their performance on novel test distributions. If a single methodology could consistently accomplish this for a wide variety of tasks (including not just speech but e.g. sentiment analysis [24], as well as benchmarks in computer vision [158]), that would inspire confidence in the reliability of that methodology for handling novel inputs. Note that estimating performance on novel distributions has additional practical value in allowing us to then potentially adapt the model to that new situation. Finally, it might also be valuable to create an environment where an RL agent must learn to interpret speech as part of some larger task, and to explore how to respond appropriately to its own estimates of its transcription error.
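The notion of being "well-calibrated" can be made measurable. The sketch below computes a standard expected calibration error over binned confidences, which is one possible way (not a prescription from this paper) to score whether a speech system "knows when it is uncertain" on noisy or accented test sets.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error (ECE): average |accuracy - confidence| over bins.

    confidences: model confidence in its top prediction for each test example (0 to 1).
    correct:     1 if that prediction was right, else 0.
    A weak but honest system can still score well here: when it is wrong, it should
    also report low confidence.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        if i == n_bins - 1:
            mask = (confidences >= lo) & (confidences <= hi)   # include 1.0 in the last bin
        else:
            mask = (confidences >= lo) & (confidences < hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```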
# 8 Related Efforts

As mentioned in the introduction, several other communities have thought broadly about the safety of AI systems, both within and outside of the machine learning community. Work within the machine learning community on accidents in particular was discussed in detail above, but here we very briefly highlight a few other communities doing work that is broadly related to the topic of AI safety.

• Cyber-Physical Systems Community: An existing community of researchers studies the security and safety of systems that interact with the physical world. Illustrative of this work is an impressive and successful effort to formally verify the entire federal aircraft collision avoidance system [75, 92]. Similar work includes traffic control algorithms [101] and many other topics. However, to date this work has not focused much on modern machine learning systems, where formal verification is often not feasible.

• Futurist Community: A cross-disciplinary group of academics and non-profits has raised concern about the long term implications of AI [27, 167], particularly superintelligent AI. The Future of Humanity Institute has studied this issue particularly as it relates to future AI systems learning or executing humanity's preferences [48, 43, 14, 12]. The Machine Intelligence Research Institute has studied safety issues that may arise in very advanced AI [57, 56, 36, 154, 142], including a few mentioned above (e.g., wireheading, environmental embedding, counterfactual reasoning), albeit at a more philosophical level. To date, they have not focused much on applications to modern machine learning. By contrast, our focus is on the empirical study of practical safety problems in modern machine learning systems, which we believe is likely to be robustly useful across a broad variety of potential risks, both short- and long-term.

• Other Calls for Work on Safety: There have been other public documents within the research community pointing out the importance of work on AI safety. A 2015 Open Letter [8] signed by many members of the research community states the importance of "how to reap [AI's] benefits while avoiding the potential pitfalls." [130] proposes research priorities for robust and beneficial artificial intelligence, and includes several other topics in addition to a (briefer) discussion of AI-related accidents. [161], writing over 20 years ago, proposes that the community look for ways to formalize Asimov's first law of robotics (robots must not harm humans), and focuses mainly on classical planning. Finally, two of the authors of this paper have written informally about safety in AI systems [146, 34]; these postings provided inspiration for parts of the present document.

• Related Problems in Safety: A number of researchers in machine learning and other fields have begun to think about the social impacts of AI technologies. Aside from work directly on accidents (which we reviewed in the main document), there is also substantial work on other topics, many of which are closely related to or overlap with the issue of accidents. A thorough overview of all of this work is beyond the scope of this document, but we briefly list a few emerging themes:

• Privacy: How can we ensure privacy when applying machine learning to sensitive data sources such as medical data? [76, 1]

• Fairness: How can we make sure ML systems don't discriminate? [3, 168, 6, 46, 119, 169]

• Security: What can a malicious adversary do to an ML system? [149, 96, 97, 115, 108, 19]

• Abuse:* How do we prevent the misuse of ML systems to attack or harm people? [16]

• Transparency: How can we understand what complicated ML systems are doing? [112, 166, 105, 109]

• Policy: How do we predict and respond to the economic and social consequences of ML? [32, 52, 15, 33]

We believe that research on these topics has both urgency and great promise, and that fruitful intersection is likely to exist between these topics and the topics we discuss in this paper.

* Note that "security" differs from "abuse" in that the former involves attacks against a legitimate ML system by an adversary (e.g. a criminal tries to fool a face recognition system), while the latter involves attacks by an ML system controlled by an adversary (e.g. a criminal trains a "smart hacker" system to break into a website).

# 9 Conclusion

This paper analyzed the problem of accidents in machine learning systems and particularly reinforcement learning agents, where an accident is defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We presented five possible research problems related to accident risk and for each we discussed possible approaches that are highly amenable to concrete experimental work.

With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems. The risk of larger accidents is more difficult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful. While many current-day safety problems can and have been handled with ad hoc fixes or case-by-case rules, we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a unified approach to prevent these systems from causing unintended harm.
# Acknowledgements

We thank Shane Legg, Peter Norvig, Ilya Sutskever, Greg Corrado, Laurent Orseau, David Krueger, Rif Saurous, David Andersen, and Victoria Krakovna for detailed feedback and suggestions. We would also like to thank Geoffrey Irving, Toby Ord, Quoc Le, Greg Wayne, Daniel Dewey, Nick Beckstead, Holden Karnofsky, Chelsea Finn, Marcello Herreshoff, Alex Donaldson, Jared Kaplan, Greg Brockman, Wojciech Zaremba, Ian Goodfellow, Dylan Hadfield-Menell, Jessica Taylor, Blaise Aguera y Arcas, David Berlekamp, Aaron Courville, and Jeff Dean for helpful discussions and comments. Paul Christiano was supported as part of the Future of Life Institute FLI-RFP-AI1 program, grant #2015-143898. In addition a minority of the work done by Paul Christiano was performed as a contractor for Theiss Research and at OpenAI. Finally, we thank the Google Brain team for providing a supportive environment and encouraging us to publish this work.

# References

[1] Martin Abadi et al. "Deep Learning with Differential Privacy". In press (2016).
[2] Pieter Abbeel and Andrew Y Ng. "Exploration and apprenticeship learning in reinforcement learning". In: Proceedings of the 22nd international conference on Machine learning. ACM. 2005, pp. 1–8.
[3] Julius Adebayo, Lalana Kagal, and Alex Pentland. The Hidden Cost of Efficiency: Fairness and Discrimination in Predictive Modeling. 2015.
[4] Alekh Agarwal et al. "Taming the monster: A fast and simple algorithm for contextual bandits". In: (2014).
[5] Hana Ajakan et al. "Domain-adversarial neural networks". In: arXiv preprint arXiv:1412.4446 (2014).
[6] Ifeoma Ajunwa et al. "Hiring by algorithm: predicting and preventing disparate impact". In: Available at SSRN 2746078 (2016).
[7] Dario Amodei et al. "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin". In: arXiv preprint arXiv:1512.02595 (2015).
[8] An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence. Open Letter. Signed by 8,600 people; see attached research agenda. 2015.
[9] Animashree Anandkumar, Daniel Hsu, and Sham M Kakade. "A method of moments for mixture models and hidden Markov models". In: arXiv preprint arXiv:1203.0683 (2012).
[10] Theodore W Anderson and Herman Rubin. "Estimation of the parameters of a single equation in a complete system of stochastic equations". In: The Annals of Mathematical Statistics (1949), pp. 46–63.
[11] Theodore W Anderson and Herman Rubin. "The asymptotic properties of estimates of the parameters of a single equation in a complete system of stochastic equations". In: The Annals of Mathematical Statistics (1950), pp. 570–582.
[12] Stuart Armstrong. "Motivated value selection for artificial agents". In: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015.
[13] Stuart Armstrong. The mathematics of reduced impact: help needed. 2012.
[14] Stuart Armstrong. Utility indifference. Tech. rep. Technical Report 2010-1. Oxford: Future of Humanity Institute, University of Oxford, 2010.
[15] Melanie Arntz, Terry Gregory, and Ulrich Zierahn. "The Risk of Automation for Jobs in OECD Countries". In: OECD Social, Employment and Migration Working Papers (2016). url: http://dx.doi.org/10.1787/5jlz9h56dvq7-en.
[16] Autonomous Weapons: An Open Letter from AI & Robotics Researchers. Open Letter. Signed by 20,000+ people. 2015.
[17] James Babcock, Janos Kramar, and Roman Yampolskiy. "The AGI Containment Problem". In: The Ninth Conference on Artificial General Intelligence (2016).
[18] Krishnakumar Balasubramanian, Pinar Donmez, and Guy Lebanon. "Unsupervised supervised learning ii: Margin-based classification without labels". In: The Journal of Machine Learning Research 12 (2011), pp. 3119–3145.
[19] Marco Barreno et al. "The security of machine learning". In: Machine Learning 81.2 (2010), pp. 121–148.
[20] Tamer Başar and Pierre Bernhard. H-infinity optimal control and related minimax design problems: a dynamic game approach. Springer Science & Business Media, 2008.
[21] Michèle Basseville. "Detecting changes in signals and systems – a survey". In: Automatica 24.3 (1988), pp. 309–326.
[22] F. Berkenkamp, A. Krause, and Angela P. Schoellig. "Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics". In: arXiv preprint arXiv:1602.04450 (2016).
[23] Jon Bird and Paul Layzell. "The evolved radio and its implications for modelling the evolution of novel sensors". In: Evolutionary Computation, 2002. CEC'02. Proceedings of the 2002 Congress on. Vol. 2. IEEE. 2002, pp. 1836–1841.
[24] John Blitzer, Mark Dredze, Fernando Pereira, et al. "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification". In: ACL. Vol. 7. 2007, pp. 440–447.
[25] John Blitzer, Sham Kakade, and Dean P Foster. "Domain adaptation with coupled subspaces". In: International Conference on Artificial Intelligence and Statistics. 2011, pp. 173–181.
[26] Charles Blundell et al. "Weight uncertainty in neural networks". In: arXiv preprint arXiv:1505.05424 (2015).
[27] Nick Bostrom. Superintelligence: Paths, dangers, strategies. OUP Oxford, 2014.
[28] Léon Bottou. "Two high stakes challenges in machine learning". Invited talk at the 32nd International Conference on Machine Learning. 2015.
[29] Léon Bottou et al. "Counterfactual Reasoning and Learning Systems". In: arXiv preprint arXiv:1209.2355 (2012).
[30] Léon Bottou et al. "Counterfactual reasoning and learning systems: The example of computational advertising". In: The Journal of Machine Learning Research 14.1 (2013), pp. 3207–3260.
[31] Ronen I Brafman and Moshe Tennenholtz. "R-max – a general polynomial time algorithm for near-optimal reinforcement learning". In: The Journal of Machine Learning Research 3 (2003), pp. 213–231.
[32] Erik Brynjolfsson and Andrew McAfee. The second machine age: work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company, 2014.
[33] Ryan Calo. "Open robotics". In: Maryland Law Review 70.3 (2011).
[34] Paul Christiano. AI Control. [Online; accessed 13-June-2016]. 2015. url: https://medium.com/ai-control.
[35] Fabio Cozman and Ira Cohen. "Risks of semi-supervised learning". In: Semi-Supervised Learning (2006), pp. 56–72.
[36] Andrew Critch. "Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents". In: (2016).
[37] Christian Daniel et al. "Active reward learning". In: Proceedings of Robotics Science & Systems. 2014.
[38] Ernest Davis. "Ethical guidelines for a superintelligence." In: Artif. Intell. 220 (2015), pp. 121–124.
[39] Alexander Philip Dawid and Allan M Skene. "Maximum likelihood estimation of observer error-rates using the EM algorithm". In: Applied statistics (1979), pp. 20–28.
[40] Peter Dayan and Geoffrey E Hinton. "Feudal reinforcement learning". In: Advances in neural information processing systems. Morgan Kaufmann Publishers. 1993, pp. 271–271.
[41] Kalyanmoy Deb. "Multi-objective optimization". In: Search methodologies. Springer, 2014, pp. 403–449.
[42] Daniel Dewey. "Learning what to value". In: Artificial General Intelligence. Springer, 2011, pp. 309–314.
[43] Daniel Dewey. "Reinforcement learning and the reward engineering principle". In: 2014 AAAI Spring Symposium Series. 2014.
[44] Pinar Donmez, Guy Lebanon, and Krishnakumar Balasubramanian. "Unsupervised supervised learning i: Estimating classification and regression errors without labels". In: The Journal of Machine Learning Research 11 (2010), pp. 1323–1351.
[45] Gregory Druck, Gideon Mann, and Andrew McCallum. "Learning from labeled features using generalized expectation criteria". In: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2008, pp. 595–602.
[46] Cynthia Dwork et al. "Fairness through awareness". In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ACM. 2012, pp. 214–226.
[47] Bradley Efron. "Computers and the theory of statistics: thinking the unthinkable". In: SIAM review 21.4 (1979), pp. 460–480.
[48] Owain Evans, Andreas Stuhlmüller, and Noah D Goodman. "Learning the preferences of ignorant, inconsistent agents". In: arXiv preprint arXiv:1512.05832 (2015).
[49] Tom Everitt and Marcus Hutter. "Avoiding wireheading with value reinforcement learning". In: arXiv preprint arXiv:1605.03143 (2016).
[50] Tom Everitt et al. "Self-Modification of Policy and Utility Function in Rational Agents". In: arXiv preprint arXiv:1605.03142 (2016).
[51] Chelsea Finn, Sergey Levine, and Pieter Abbeel. "Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization". In: arXiv preprint arXiv:1603.00448 (2016).
[52] Carl Benedikt Frey and Michael A Osborne. "The future of employment: how susceptible are jobs to computerisation". In: Retrieved September 7 (2013), p. 2013.
[53] Yarin Gal and Zoubin Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning". In: arXiv preprint arXiv:1506.02142 (2015).
[54] Joao Gama et al. "Learning with drift detection". In: Advances in artificial intelligence – SBIA 2004. Springer, 2004, pp. 286–295.
[55] Javier García and Fernando Fernández. "A Comprehensive Survey on Safe Reinforcement Learning". In: Journal of Machine Learning Research 16 (2015), pp. 1437–1480.
[56] Scott Garrabrant, Nate Soares, and Jessica Taylor. "Asymptotic Convergence in Online Learning with Unbounded Delays". In: arXiv preprint arXiv:1604.05280 (2016).
[57] Scott Garrabrant et al. "Uniform Coherence". In: arXiv preprint arXiv:1604.05288 (2016).
[58] Shalini Ghosh et al. "Trusted Machine Learning for Probabilistic Models". In: Reliable Machine Learning in the Wild at ICML 2016 (2016).
[59] Yolanda Gil et al. "Amplify scientific discovery with artificial intelligence". In: Science 346.6206 (2014), pp. 171–172.
[60] Alec Go, Richa Bhayani, and Lei Huang. "Twitter sentiment classification using distant supervision". In: CS224N Project Report, Stanford 1 (2009), p. 12.
[61] Ian Goodfellow et al. "Generative adversarial nets". In: Advances in Neural Information Processing Systems. 2014, pp. 2672–2680.
[62] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". In: arXiv preprint arXiv:1412.6572 (2014).
[63] Charles AE Goodhart. Problems of monetary management: the UK experience. Springer, 1984.
[64] Alex Graves, Greg Wayne, and Ivo Danihelka. "Neural turing machines". In: arXiv preprint arXiv:1410.5401 (2014).
[65] Sonal Gupta. "Distantly Supervised Information Extraction Using Bootstrapped Patterns". PhD thesis. Stanford University, 2015.
[66] Dylan Hadfield-Menell et al. Cooperative Inverse Reinforcement Learning. 2016.
[67] Dylan Hadfield-Menell et al. "The Off-Switch". In: (2016).
[68] Lars Peter Hansen. "Large sample properties of generalized method of moments estimators". In: Econometrica: Journal of the Econometric Society (1982), pp. 1029–1054.
[69] Lars Peter Hansen. "Nobel Lecture: Uncertainty Outside and Inside Economic Models". In: Journal of Political Economy 122.5 (2014), pp. 945–987.
[70] Mark Herbster and Manfred K Warmuth. "Tracking the best linear predictor". In: The Journal of Machine Learning Research 1 (2001), pp. 281–309.
[71] Bill Hibbard. "Model-based utility functions". In: Journal of Artificial General Intelligence 3.1 (2012), pp. 1–24.
[72] Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. "Kernel methods in machine learning". In: The annals of statistics (2008), pp. 1171–1220.
[73] Garud N Iyengar. "Robust dynamic programming". In: Mathematics of Operations Research 30.2 (2005), pp. 257–280.
[74] Ariel Jaffe, Boaz Nadler, and Yuval Kluger. "Estimating the accuracies of multiple classifiers without labeled data". In: arXiv preprint arXiv:1407.7644 (2014).
[75] Jean-Baptiste Jeannin et al. "A formally verified hybrid system for the next-generation airborne collision avoidance system". In: Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2015, pp. 21–36.
[76] Zhanglong Ji, Zachary C Lipton, and Charles Elkan. "Differential privacy and machine learning: A survey and review". In: arXiv preprint arXiv:1412.7584 (2014).
[77] Fredrik D Johansson, Uri Shalit, and David Sontag. "Learning Representations for Counterfactual Inference". In: arXiv preprint arXiv:1605.03661 (2016).
[78] Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. "Planning and acting in partially observable stochastic domains". In: Artificial intelligence 101.1 (1998), pp. 99–134.
[79] Lukasz Kaiser and Ilya Sutskever. "Neural GPUs learn algorithms". In: arXiv preprint arXiv:1511.08228 (2015).
[80] Yoshinobu Kawahara and Masashi Sugiyama. "Change-Point Detection in Time-Series Data by Direct Density-Ratio Estimation". In: SDM. Vol. 9. SIAM. 2009, pp. 389–400.
[81] F. Khani, M. Rinard, and P. Liang. "Unanimous Prediction for 100% Precision with Application to Learning Semantic Parsers". In: Association for Computational Linguistics (ACL). 2016.
[82] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. "Imagenet classification with deep convolutional neural networks". In: Advances in neural information processing systems. 2012, pp. 1097–1105.
[83] Volodymyr Kuleshov and Percy S Liang. "Calibrated Structured Prediction". In: Advances in Neural Information Processing Systems. 2015, pp. 3456–3464.
[84] Tejas D Kulkarni et al. "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation". In: arXiv preprint arXiv:1604.06057 (2016).
[85] Neil Lawrence. Discussion of "Superintelligence: Paths, Dangers, Strategies". 2016.
[86] Jesse Levinson et al. "Towards fully autonomous driving: Systems and algorithms". In: Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE. 2011, pp. 163–168.
[87] Lihong Li et al. "Knows what it knows: a framework for self-aware learning". In: Machine learning 82.3 (2011), pp. 399–443.
[88] Yu-Feng Li and Zhi-Hua Zhou. "Towards making unlabeled data never hurt". In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 37.1 (2015), pp. 175–188.
[89] Percy Liang. "On the Elusiveness of a Specification for AI". NIPS 2015, Symposium: Algorithms Among Us. 2015. url: http://research.microsoft.com/apps/video/default.aspx?id=260009&r=1.
[90] Percy Liang and Dan Klein. "Analyzing the Errors of Unsupervised Learning". In: ACL. 2008, pp. 879–887.
[91] Song Liu et al. "Change-point detection in time-series data by relative density-ratio estimation". In: Neural Networks 43 (2013), pp. 72–83.
[92] Sarah M Loos, David Renshaw, and André Platzer. "Formal verification of distributed aircraft controllers". In: Proceedings of the 16th international conference on Hybrid systems: computation and control. ACM. 2013, pp. 125–130.
[93] John Lygeros, Claire Tomlin, and Shankar Sastry. "Controllers for reachability specifications for hybrid systems". In: Automatica 35.3 (1999), pp. 349–370.
[94] Gideon S Mann and Andrew McCallum. "Generalized expectation criteria for semi-supervised learning with weakly labeled data". In: The Journal of Machine Learning Research 11 (2010), pp. 955–984.
[95] John McCarthy and Patrick J Hayes. "Some philosophical problems from the standpoint of artificial intelligence". In: Readings in artificial intelligence (1969), pp. 431–450.
[96] Shike Mei and Xiaojin Zhu. "The Security of Latent Dirichlet Allocation". In: AISTATS. 2015.
[97] Shike Mei and Xiaojin Zhu. "Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners". In: AAAI. 2015, pp. 2871–2877.
1606.06565#93 | Concrete Problems in AI Safety | Tagging English text with a probabilistic model". In: Computational linguistics 20.2 (1994), pp. 155-171. [99] Mike Mintz et al. "Distant supervision for relation extraction without labeled data". In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Association for Computational Linguistics. 2009, pp. 1003-1011. [100] Ian M Mitchell, Alexandre M Bayen, and Claire J Tomlin. "A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games". In: Automatic Control, IEEE Transactions on 50.7 (2005), pp. 947-957. [101] Stefan Mitsch, Sarah M Loos, and André Platzer. "Towards formal verification of freeway traffic control". In: Cyber-Physical Systems (ICCPS), 2012 IEEE/ACM Third International Conference on. IEEE. 2012, pp. 171-180. [102] Volodymyr Mnih et al. "Human-level control through deep reinforcement learning" | 1606.06565#92 | 1606.06565#94 | 1606.06565 | [
"1507.01986"
] |
1606.06565#94 | Concrete Problems in AI Safety | . In: Nature 518.7540 (2015), pp. 529-533. [103] Shakir Mohamed and Danilo Jimenez Rezende. "Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning". In: Advances in Neural Information Processing Systems. 2015, pp. 2116-2124. [104] Teodor Mihai Moldovan and Pieter Abbeel. "Safe exploration in Markov decision processes". In: arXiv preprint arXiv:1205.4810 (2012). [105] Alexander Mordvintsev, Christopher Olah, and Mike Tyka. "Inceptionism: Going deeper into neural networks" | 1606.06565#93 | 1606.06565#95 | 1606.06565 | [
"1507.01986"
] |
1606.06565#95 | Concrete Problems in AI Safety | . In: Google Research Blog. Retrieved June 20 (2015). [106] Jerzy Neyman. "Sur les applications de la théorie des probabilités aux expériences agricoles: Essai des principes". In: Roczniki Nauk Rolniczych 10 (1923), pp. 1-51. [107] Andrew Y Ng, Stuart J Russell, et al. "Algorithms for inverse reinforcement learning". In: ICML. 2000, pp. 663-670. [108] Anh Nguyen, Jason Yosinski, and Jeff Clune. " | 1606.06565#94 | 1606.06565#96 | 1606.06565 | [
"1507.01986"
] |
1606.06565#96 | Concrete Problems in AI Safety | Deep neural networks are easily fooled: High confidence predictions for unrecognizable images". In: Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on. IEEE. 2015, pp. 427-436. [109] Anh Nguyen et al. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks". In: arXiv preprint arXiv:1605.09304 (2016). [110] Kamal Nigam et al. "Learning to classify text from labeled and unlabeled documents" | 1606.06565#95 | 1606.06565#97 | 1606.06565 | [
"1507.01986"
] |
1606.06565#97 | Concrete Problems in AI Safety | . In: AAAI/IAAI 792 (1998). [111] Arnab Nilim and Laurent El Ghaoui. "Robust control of Markov decision processes with uncertain transition matrices". In: Operations Research 53.5 (2005), pp. 780-798. [112] Christopher Olah. Visualizing Representations: Deep Learning and Human Beings. 2015. url: http://colah.github.io/posts/2015-01-Visualizing-Representations/. | 1606.06565#96 | 1606.06565#98 | 1606.06565 | [
"1507.01986"
] |
1606.06565#98 | Concrete Problems in AI Safety | [113] Laurent Orseau and Stuart Armstrong. "Safely Interruptible Agents". In: (2016). [114] Ian Osband et al. "Deep Exploration via Bootstrapped DQN". In: arXiv preprint arXiv:1602.04621 (2016). [115] Nicolas Papernot et al. "Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples". In: arXiv preprint arXiv:1602.02697 (2016). [116] Douglas B Paul and Janet M Baker. "The design for the Wall Street Journal-based CSR corpus". In: Proceedings of the workshop on Speech and Natural Language. Association for Computational Linguistics. 1992, pp. 357-362. [117] Judea Pearl et al. "Causal inference in statistics: An overview". In: Statistics Surveys 3 (2009), pp. 96-146. [118] Martin Pecka and Tomas Svoboda. " | 1606.06565#97 | 1606.06565#99 | 1606.06565 | [
"1507.01986"
] |
1606.06565#99 | Concrete Problems in AI Safety | Safe exploration techniques for reinforcement learning - an overview". In: Modelling and Simulation for Autonomous Systems. Springer, 2014, pp. 357-375. [119] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. "Discrimination-aware data mining". In: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM. 2008, pp. 560-568. [120] Jonas Peters et al. "Causal discovery with continuous additive noise models". In: The Journal of Machine Learning Research 15.1 (2014), pp. 2009-2053. [121] Emmanouil Antonios Platanios. " | 1606.06565#98 | 1606.06565#100 | 1606.06565 | [
"1507.01986"
] |
1606.06565#100 | Concrete Problems in AI Safety | Estimating accuracy from unlabeled data". MA thesis. Carnegie Mellon University, 2015. [122] Emmanouil Antonios Platanios, Avrim Blum, and Tom Mitchell. "Estimating accuracy from unlabeled data". In: (2014). [123] Walter W Powell and Laurel Smith-Doerr. "Networks and economic life". In: The handbook of economic sociology 368 (1994), p. 380. [124] Joaquin Quinonero-Candela et al. | 1606.06565#99 | 1606.06565#101 | 1606.06565 | [
"1507.01986"
] |
1606.06565#101 | Concrete Problems in AI Safety | Dataset shift in machine learning, ser. Neural information processing series. 2009. [125] Rajat Raina et al. "Self-taught learning: transfer learning from unlabeled data". In: Proceedings of the 24th international conference on Machine learning. ACM. 2007, pp. 759-766. [126] Bharath Ramsundar et al. "Massively multitask networks for drug discovery". In: arXiv preprint arXiv:1502.02072 (2015). [127] Mark Ring and Laurent Orseau. " | 1606.06565#100 | 1606.06565#102 | 1606.06565 | [
"1507.01986"
] |
1606.06565#102 | Concrete Problems in AI Safety | Delusion, survival, and intelligent agents". In: Artificial General Intelligence. Springer, 2011, pp. 11-20. [128] Stéphane Ross, Geoffrey J Gordon, and J Andrew Bagnell. "A reduction of imitation learning and structured prediction to no-regret online learning". In: arXiv preprint arXiv:1011.0686 (2010). [129] Donald B Rubin. "Estimating causal effects of treatments in randomized and nonrandomized studies". In: Journal of Educational Psychology 66.5 (1974), p. 688. [130] Stuart Russell et al. "Research priorities for robust and beneficial artificial intelligence". In: Future of Life Institute (2015). [131] Christoph Salge, Cornelius Glackin, and Daniel Polani. "Empowerment - an introduction". In: Guided Self-Organization: Inception. | 1606.06565#101 | 1606.06565#103 | 1606.06565 | [
"1507.01986"
] |
1606.06565#103 | Concrete Problems in AI Safety | Springer, 2014, pp. 67-114. [132] J Denis Sargan. "The estimation of relationships with autocorrelated residuals by the use of instrumental variables". In: Journal of the Royal Statistical Society. Series B (Methodological) (1959), pp. 91-105. [133] John D Sargan. "The estimation of economic relationships using instrumental variables". In: Econometrica: Journal of the Econometric Society (1958), pp. 393- | 1606.06565#102 | 1606.06565#104 | 1606.06565 | [
"1507.01986"
] |
1606.06565#104 | Concrete Problems in AI Safety | 415. [134] John Schulman et al. "High-dimensional continuous control using generalized advantage estimation". In: arXiv preprint arXiv:1506.02438 (2015). [135] D Sculley et al. "Machine Learning: The High-Interest Credit Card of Technical Debt". In: (2014). [136] Glenn Shafer and Vladimir Vovk. "A tutorial on conformal prediction". In: The Journal of Machine Learning Research 9 (2008), pp. 371-421. [137] Uri Shalit, Fredrik Johansson, and David Sontag. " | 1606.06565#103 | 1606.06565#105 | 1606.06565 | [
"1507.01986"
] |
1606.06565#105 | Concrete Problems in AI Safety | Bounding and Minimizing Counterfactual Error". In: arXiv preprint arXiv:1606.03976 (2016). [138] Hidetoshi Shimodaira. "Improving predictive inference under covariate shift by weighting the log-likelihood function". In: Journal of statistical planning and inference 90.2 (2000), pp. 227-244. [139] Jaeho Shin et al. "Incremental knowledge base construction using DeepDive". In: Proceedings of the VLDB Endowment 8.11 (2015), pp. 1310-1321. [140] David Silver et al. " | 1606.06565#104 | 1606.06565#106 | 1606.06565 | [
"1507.01986"
] |
1606.06565#106 | Concrete Problems in AI Safety | Mastering the game of Go with deep neural networks and tree search". In: Nature 529.7587 (2016), pp. 484-489. [141] SNES Super Mario World (USA) "arbitrary code execution". Tool-assisted movies. 2014. url: http://tasvideos.org/2513M.html. [142] Nate Soares and Benja Fallenstein. "Toward idealized decision theory". In: arXiv preprint arXiv:1507.01986 (2015). [143] Ray J Solomonoff. " | 1606.06565#105 | 1606.06565#107 | 1606.06565 | [
"1507.01986"
] |
1606.06565#107 | Concrete Problems in AI Safety | A formal theory of inductive inference. Part I". In: Information and Control 7.1 (1964), pp. 1-22. [144] Ray J Solomonoff. "A formal theory of inductive inference. Part II". In: Information and Control 7.2 (1964), pp. 224-254. [145] J Steinebach. "EL Lehmann, JP Romano: Testing statistical hypotheses". In: Metrika 64.2 (2006), pp. 255-256. [146] Jacob Steinhardt. Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems. [Online; accessed 13-June-2016]. 2015. url: https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems/. [147] Jacob Steinhardt and Percy Liang. "Unsupervised Risk Estimation with only Structural Assumptions". In: (2016). [148] Jacob Steinhardt and Russ Tedrake. "Finite-time regional verification of stochastic non-linear systems". In: The International Journal of Robotics Research 31.7 (2012), pp. 901-923. [149] Jacob Steinhardt, Gregory Valiant, and Moses Charikar. "Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction" | 1606.06565#106 | 1606.06565#108 | 1606.06565 | [
"1507.01986"
] |
1606.06565#108 | Concrete Problems in AI Safety | . In: arXiv preprint arXiv:1606.05374 (2016). url: http://arxiv.org/abs/1606.05374. [150] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. [151] Adith Swaminathan and Thorsten Joachims. "Counterfactual risk minimization: Learning from logged bandit feedback". In: arXiv preprint arXiv:1502.02362 (2015). [152] Christian Szegedy et al. "Intriguing properties of neural networks" | 1606.06565#107 | 1606.06565#109 | 1606.06565 | [
"1507.01986"
] |
1606.06565#109 | Concrete Problems in AI Safety | . In: arXiv preprint arXiv:1312.6199 (2013). [153] Aviv Tamar, Yonatan Glassner, and Shie Mannor. "Policy gradients beyond expectations: Conditional value-at-risk". In: arXiv preprint arXiv:1404.3862 (2014). [154] Jessica Taylor. "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization". In: Submitted to AAAI (forthcoming). 2016. [155] Matthew E Taylor and Peter Stone. "Transfer learning for reinforcement learning domains: | 1606.06565#108 | 1606.06565#110 | 1606.06565 | [
"1507.01986"
] |
1606.06565#110 | Concrete Problems in AI Safety | A survey". In: Journal of Machine Learning Research 10.Jul (2009), pp. 1633-1685. [156] Philip S Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. "High-Confidence Off-Policy Evaluation". In: AAAI. 2015, pp. 3000-3006. [157] Adrian Thompson. Artificial evolution in the physical world. 1997. | 1606.06565#109 | 1606.06565#111 | 1606.06565 | [
"1507.01986"
] |
1606.06565#111 | Concrete Problems in AI Safety | [158] Antonio Torralba and Alexei A Efros. "Unbiased look at dataset bias". In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE. 2011, pp. 1521-1528. [159] Matteo Turchetta, Felix Berkenkamp, and Andreas Krause. "Safe Exploration in Finite Markov Decision Processes with Gaussian Processes". In: arXiv preprint arXiv:1606.04753 (2016). [160] Stefan Wager and Susan Athey. " | 1606.06565#110 | 1606.06565#112 | 1606.06565 | [
"1507.01986"
] |
1606.06565#112 | Concrete Problems in AI Safety | Estimation and Inference of Heterogeneous Treatment Effects using Random Forests". In: arXiv preprint arXiv:1510.04342 (2015). [161] Daniel Weld and Oren Etzioni. "The first law of robotics (a call to arms)". In: AAAI. Vol. 94. 1994, pp. 1042-1047. [162] Keenon Werling et al. "On-the-job learning with Bayesian decision theory". In: Advances in Neural Information Processing Systems. 2015, pp. 3447-3455. [163] Jason Weston et al. " | 1606.06565#111 | 1606.06565#113 | 1606.06565 | [
"1507.01986"
] |
1606.06565#113 | Concrete Problems in AI Safety | Towards AI-complete question answering: A set of prerequisite toy tasks". In: arXiv preprint arXiv:1502.05698 (2015). [164] Wolfram Wiesemann, Daniel Kuhn, and Berç Rustem. "Robust Markov decision processes". In: Mathematics of Operations Research 38.1 (2013), pp. 153-183. [165] Roman V Yampolskiy. "Utility function security in artificially intelligent agents". In: Journal of Experimental & Theoretical Artificial Intelligence 26.3 (2014), pp. 373-389. [166] Jason Yosinski et al. " | 1606.06565#112 | 1606.06565#114 | 1606.06565 | [
"1507.01986"
] |
1606.06565#114 | Concrete Problems in AI Safety | Understanding neural networks through deep visualization". In: arXiv preprint arXiv:1506.06579 (2015). [167] Eliezer Yudkowsky. "Artificial intelligence as a positive and negative factor in global risk". In: Global catastrophic risks 1 (2008), p. 303. [168] Muhammad Bilal Zafar et al. "Learning Fair Classifiers". In: stat 1050 (2015), p. 29. [169] Richard S Zemel et al. "Learning Fair Representations". In: ICML (3) 28 (2013), pp. 325-333. [170] Yuchen Zhang et al. "Spectral methods meet EM: A provably optimal algorithm for crowdsourcing" | 1606.06565#113 | 1606.06565#115 | 1606.06565 | [
"1507.01986"
] |
1606.06565#115 | Concrete Problems in AI Safety | . In: Advances in neural information processing systems. 2014, pp. 1260-1268. | 1606.06565#114 | 1606.06565 | [
"1507.01986"
] |
|
1606.06160#0 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | arXiv:1606.06160v3 [cs.NE] 2 Feb 2018 DOREFA-NET: TRAINING LOW BITWIDTH CONVOLUTIONAL NEURAL NETWORKS WITH LOW BITWIDTH GRADIENTS Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou Megvii Inc. {zsc, wyx, nzk, zxy, wenhe, zouyuheng}@megvii.com # ABSTRACT | 1606.06160#1 | 1606.06160 | [
"1502.03167"
] |
|
1606.06160#1 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during the backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during the forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerating the training of low bitwidth neural networks on such hardware. Our experiments on the SVHN and ImageNet datasets show that DoReFa-Net can achieve prediction accuracy comparable to its 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights and 2-bit activations can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on the ImageNet validation set. The DoReFa-Net AlexNet model is released publicly. | 1606.06160#0 | 1606.06160#2 | 1606.06160 | [
"1502.03167"
] |
1606.06160#2 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | # 1 INTRODUCTION Recent progress in deep Convolutional Neural Networks (DCNN) has considerably changed the landscape of computer vision (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a) and NLP (Bahdanau et al., 2014). However, a state-of-the-art DCNN usually has a lot of parameters and high computational complexity, which both impedes its application in embedded devices and slows down the iteration of its research and development. For example, the training process of a DCNN may take up to weeks on a modern multi-GPU server for large datasets like ImageNet (Deng et al., 2009). In light of this, substantial research efforts are invested in speeding up DCNNs at both run-time and training-time, on both general-purpose (Vanhoucke et al., 2011; Gong et al., 2014; Han et al., 2015b) and specialized computer hardware (Farabet et al., 2011; Pham et al., 2012; Chen et al., 2014a;b). Various approaches like quantization (Wu et al., 2015) and sparsification (Han et al., 2015a) have also been proposed. Recent research efforts (Courbariaux et al., 2014; Kim & Smaragdis, 2016; Rastegari et al., 2016; Merolla et al., 2016) have considerably reduced both model size and computation complexity by using low bitwidth weights and low bitwidth activations. In particular, in BNN (Courbariaux & Bengio, 2016) and XNOR-Net (Rastegari et al., 2016), both weights and input activations of convolutional layers (see Footnote 1) are binarized. Hence during the forward pass the most computationally expensive convolutions can be done by bitwise operation kernels, thanks to the following formula, which computes the dot product of two bit vectors x and y using bitwise operations, where bitcount counts the number of set bits in a bit vector: x · y = bitcount(and(x, y)), x_i, y_i ∈ {0, 1} ∀ i. (1) Footnote 1: Note fully-connected layers are special cases of convolutional layers. | 1606.06160#1 | 1606.06160#3 | 1606.06160 | [
"1502.03167"
] |
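To make Eqn. 1 concrete, the following is a minimal Python sketch of the 1-bit dot product computed purely with bitwise operations, together with the {-1, +1} variant that appears later as Eqn. 2. The packing of bit vectors into Python integers and all helper names here are illustrative assumptions, not part of the paper's released code.

```python
# Sketch of the bit-vector dot products behind Eqn. 1 (and the {-1, +1}
# variant of Eqn. 2). Bit vectors are packed into Python integers, one bit
# per element; helper names are illustrative only.

def pack_bits(bits):
    """Pack an iterable of {0, 1} values into an integer (bit i -> 2**i)."""
    value = 0
    for i, b in enumerate(bits):
        value |= (b & 1) << i
    return value

def dot_1bit(x_bits, y_bits):
    """Eqn. 1: x . y = bitcount(and(x, y)) for x_i, y_i in {0, 1}."""
    return bin(pack_bits(x_bits) & pack_bits(y_bits)).count("1")

def dot_pm1(x_pm1, y_pm1):
    """Dot product of {-1, +1} vectors via bit operations (cf. Eqn. 2).

    Encoding -1 as bit 0 and +1 as bit 1, the number of disagreeing positions
    is bitcount(xor(x, y)), so x . y = N - 2 * bitcount(xor(x, y)), which is
    the same quantity as 2 * bitcount(xnor(x, y)) - N.
    """
    n = len(x_pm1)
    x = pack_bits(1 if v > 0 else 0 for v in x_pm1)
    y = pack_bits(1 if v > 0 else 0 for v in y_pm1)
    return n - 2 * bin(x ^ y).count("1")

assert dot_1bit([1, 0, 1], [1, 1, 1]) == 2
assert dot_pm1([1, -1, 1], [1, 1, -1]) == -1
```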
1606.06160#3 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | However, to the best of our knowledge, no previous work has succeeded in quantizing gradients to numbers with bitwidth less than 8 during the backward pass while still achieving comparable prediction accuracy. In some previous research (Gupta et al., 2015; Courbariaux et al., 2014), convolutions involve at least 10-bit numbers. In BNN and XNOR-Net, though weights are binarized, gradients are kept in full precision; the backward pass therefore still requires convolutions between 1-bit numbers and 32-bit floating-point numbers. The inability to exploit bit convolutions during the backward pass means that most of the training time of BNN and XNOR-Net will be spent in the backward pass. | 1606.06160#2 | 1606.06160#4 | 1606.06160 | [
"1502.03167"
] |
1606.06160#4 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | This paper makes the following contributions: 1. We generalize the method of binarized neural networks to allow creating DoReFa-Net, a CNN that has arbitrary bitwidths in weights, activations, and gradients. As convolutions during the forward/backward passes can then operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both the forward pass and the backward pass of the training process. 2. As bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerating low bitwidth neural network training on such hardware. In particular, with the power efficiency of FPGA and ASIC, we may considerably reduce the energy consumption of low bitwidth neural network training. | 1606.06160#3 | 1606.06160#5 | 1606.06160 | [
"1502.03167"
] |
1606.06160#5 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | 3. We explore the configuration space of bitwidths for weights, activations and gradients in DoReFa-Net. E.g., training a network using 1-bit weights, 1-bit activations and 2-bit gradients can lead to 93% accuracy on the SVHN dataset. In our experiments, gradients in general require a larger bitwidth than activations, and activations in general require a larger bitwidth than weights, to lessen the degradation of prediction accuracy compared to 32-bit precision counterparts. | 1606.06160#4 | 1606.06160#6 | 1606.06160 | [
"1502.03167"
] |
1606.06160#6 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | We name our method "DoReFa-Net" to take note of these phenomena. 4. We release in TensorFlow (Abadi et al.) format a DoReFa-Net (see Footnote 3) derived from AlexNet (Krizhevsky et al., 2012) that gets 46.1% single-crop top-1 accuracy on the ILSVRC12 validation set. A reference implementation for training a DoReFa-Net on the SVHN dataset is also available. # 2 DOREFA-NET In this section we detail our formulation of DoReFa-Net, a method to train neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. We note that while weights and activations can be deterministically quantized, gradients need to be stochastically quantized. We first outline how to exploit bit convolution kernels in DoReFa-Net and then elaborate the method to quantize weights, activations and gradients to low bitwidth numbers. 2.1 USING BIT CONVOLUTION KERNELS IN LOW BITWIDTH NEURAL NETWORK The 1-bit dot product kernel specified in Eqn. 1 can also be used to compute dot products, and consequently convolutions, for low bitwidth fixed-point integers. (Footnote 2: When x and y are vectors of {-1, 1}, Eqn. 1 has a variant that uses xnor instead: x · y = N - 2 × bitcount(xnor(x, y)), x_i, y_i ∈ {-1, 1} ∀ i. (2)) Assume x is a sequence of M-bit fixed-point integers s.t. x = sum_{m=0}^{M-1} c_m(x) 2^m and y is a sequence of K-bit fixed-point integers s.t. y = sum_{k=0}^{K-1} c_k(y) 2^k, where the c_m(x) and c_k(y) are bit vectors; then the dot product of x and | 1606.06160#5 | 1606.06160#7 | 1606.06160 | [
"1502.03167"
] |
1606.06160#7 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | y can be computed by bitwise operations as: x · y = sum_{m=0}^{M-1} sum_{k=0}^{K-1} 2^{m+k} bitcount[and(c_m(x), c_k(y))], (3) with c_m(x)_i, c_k(y)_i ∈ {0, 1} ∀ i, m, k. (4) In the above equation, the computation complexity is O(MK), i.e., directly proportional to the bitwidths of x and y. 2.2 STRAIGHT-THROUGH ESTIMATOR The set of real numbers representable by a low bitwidth number k only has a small ordinality 2^k. However, mathematically any continuous function whose range is a small finite set would necessarily always have zero gradient with respect to its input. (Footnote 3: The model and supplement materials are available at https://github.com/ppwwyyxx/tensorpack/tree/master/examples/DoReFa-Net) | 1606.06160#6 | 1606.06160#8 | 1606.06160 | [
"1502.03167"
] |
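The bit-plane decomposition of Eqn. 3 can be checked with a few lines of NumPy. This is only an assumption-level reference implementation of the formula, not the optimized bit-convolution kernel the paper refers to; the helper names are illustrative.

```python
import numpy as np

def bitplane(v, bit):
    """c_bit(v): the vector of the bit-th binary digits of the entries of v."""
    return (v >> bit) & 1

def dot_fixed_point(x, y, M, K):
    """Eqn. 3: x . y = sum_{m,k} 2^(m+k) * bitcount(and(c_m(x), c_k(y)))."""
    acc = 0
    for m in range(M):          # O(M*K) bitcount operations, as stated in the text
        for k in range(K):
            acc += (1 << (m + k)) * int(np.sum(bitplane(x, m) & bitplane(y, k)))
    return acc

# Quick check against the ordinary integer dot product.
rng = np.random.default_rng(0)
x = rng.integers(0, 2 ** 3, size=16)   # M = 3-bit operands
y = rng.integers(0, 2 ** 4, size=16)   # K = 4-bit operands
assert dot_fixed_point(x, y, M=3, K=4) == int(np.dot(x, y))
```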
1606.06160#8 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | We adopt the "straight-through estimator" (STE) method (Hinton et al., 2012b; Bengio et al., 2013) to circumvent this problem. An STE can be thought of as an operator that has arbitrary forward and backward operations. A simple example is the STE defined for Bernoulli sampling with probability p ∈ [0, 1]: Forward: q ~ Bernoulli(p), Backward: ∂c/∂p = ∂c/∂q. Here c denotes the objective function. As sampling from a Bernoulli distribution is not a differentiable function, "∂q/∂p" is not well defined, hence the backward pass cannot be directly constructed from the forward pass using the chain rule. Nevertheless, because q is on expectation equal to p, we may use the well-defined gradient ∂c/∂q as an approximation for ∂c/∂p and construct an STE as above. In other words, the STE construction gives a custom-defined "∂q/∂p". An STE we will use extensively in this work is quantize_k, which quantizes a real number input r_i ∈ [0, 1] to a k-bit number output r_o ∈ [0, 1]. | 1606.06160#7 | 1606.06160#9 | 1606.06160 | [
"1502.03167"
] |
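As a sketch of what such a quantize_k STE looks like in code (illustrative NumPy, not the released implementation): the forward pass rounds to one of 2^k evenly spaced levels in [0, 1], and the backward pass simply passes the incoming gradient through unchanged.

```python
import numpy as np

def quantize_k(r_i, k):
    """Forward: round r_i in [0, 1] to one of 2^k evenly spaced levels."""
    n = float(2 ** k - 1)
    return np.round(r_i * n) / n

def quantize_k_backward(dc_dro):
    """Straight-through backward: dc/dr_i is taken to be dc/dr_o."""
    return dc_dro

# In autodiff frameworks the same STE is often expressed with a stop-gradient
# trick, conceptually r_o = r_i + stop_gradient(quantize_k(r_i, k) - r_i),
# so the forward value is quantized while d r_o / d r_i is treated as 1.
```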
1606.06160#9 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | This STE is defined as below: Forward: r_o = (1 / (2^k - 1)) round((2^k - 1) r_i), (5) Backward: ∂c/∂r_i = ∂c/∂r_o. (6) It is obvious by construction that the output r_o of the quantize_k STE is a real number representable by k bits. Also, since r_o corresponds to a k-bit fixed-point integer, the dot product of two sequences of such k-bit real numbers can be efficiently calculated by using the fixed-point integer dot product in Eqn. 3 followed by proper scaling. # 2.3 LOW BITWIDTH QUANTIZATION OF WEIGHTS In this section we detail our approach to getting low bitwidth weights. In previous works, STE has been used to binarize the weights. For example, in BNN, weights are binarized by the following STE: | 1606.06160#8 | 1606.06160#10 | 1606.06160 | [
"1502.03167"
] |
1606.06160#10 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Forward: r_o = sign(r_i), Backward: ∂c/∂r_i = ∂c/∂r_o. Here sign(r_i) = 2 I_{r_i ≥ 0} - 1 returns one of two possible values: {-1, 1}. In XNOR-Net, weights are binarized by the following STE, with the difference being that weights are scaled after being binarized: Forward: r_o = sign(r_i) × E_F(|r_i|), Backward: ∂c/∂r_i = ∂c/∂r_o. | 1606.06160#9 | 1606.06160#11 | 1606.06160 | [
"1502.03167"
] |
1606.06160#11 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | In XNOR-Net, the scaling factor E_F(|r_i|) is the mean of the absolute value of each output channel of weights. The rationale is that introducing this scaling factor will increase the value range of the weights, while still being able to exploit bit convolution kernels. However, the channel-wise scaling factors make it impossible to exploit bit convolution kernels when computing the convolution between gradients and weights during back-propagation. Hence, in our experiments, we use a constant scalar to scale all filters instead of doing channel-wise scaling. We use the following STE for all neural networks that have binary weights in this paper: | 1606.06160#10 | 1606.06160#12 | 1606.06160 | [
"1502.03167"
] |
1606.06160#12 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Forward: r_o = sign(r_i) × E(|r_i|), (7) Backward: ∂c/∂r_i = ∂c/∂r_o. (8) In case we use a k-bit representation of the weights with k > 1, we apply the STE f_w^k to weights as follows: Forward: r_o = f_w^k(r_i) = 2 quantize_k( tanh(r_i) / (2 max(|tanh(r_i)|)) + 1/2 ) - 1, (9) Backward: ∂c/∂r_i = (∂r_o/∂r_i) (∂c/∂r_o) (see Footnote 4). (10) | 1606.06160#11 | 1606.06160#13 | 1606.06160 | [
"1502.03167"
] |
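A forward-pass sketch of the k-bit weight quantizer f_w^k of Eqn. 9, with the k = 1 case of Eqn. 7, assuming NumPy; the backward pass follows from the quantize_k STE and is omitted here, and the function names are assumptions rather than the paper's released API.

```python
import numpy as np

def quantize_k(r, k):
    n = float(2 ** k - 1)
    return np.round(r * n) / n

def quantize_weights(w, k):
    if k == 1:
        # Eqn. 7: binarize and scale by a single layer-wide mean absolute value.
        return np.where(w >= 0, 1.0, -1.0) * np.mean(np.abs(w))
    # Eqn. 9: squash with tanh, normalize into [0, 1] by the layer-wise maximum,
    # quantize to k bits, then map affinely back to [-1, 1].
    t = np.tanh(w)
    t = t / (2.0 * np.max(np.abs(t))) + 0.5
    return 2.0 * quantize_k(t, k) - 1.0
```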
1606.06160#13 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Note here we use tanh to limit the value range of weights to [-1, 1] before quantizing to k-bit. By construction, tanh(r_i) / (2 max(|tanh(r_i)|)) + 1/2 is a number in [0, 1], where the maximum is taken over all weights in that layer. quantize_k will then quantize this number to a k-bit fixed-point number ranging in [0, 1]. Finally an affine transform will bring the range of f_w^k back to [-1, 1]. Note that when k = 1, Eqn. 9 is different from Eqn. 7, providing a different way of binarizing weights. | 1606.06160#12 | 1606.06160#14 | 1606.06160 | [
"1502.03167"
] |
1606.06160#14 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Nevertheless, we find this difference insignificant in experiments. 2.4 LOW BITWIDTH QUANTIZATION OF ACTIVATIONS Next we detail our approach to getting low bitwidth activations that are input to convolutions, which is of critical importance in replacing floating-point convolutions by less computation-intensive bit convolutions. In BNN and XNOR-Net, activations are binarized in the same way as weights. However, we fail to reproduce the results of XNOR-Net if we follow their method of binarizing activations, and the binarizing approach in BNN is claimed by (Rastegari et al., 2016) to cause severe prediction accuracy degradation when applied to ImageNet models like AlexNet. Hence instead, we apply an STE on the input activations r of each weight layer. Here we assume the output of the previous layer has passed through a bounded activation function h, which ensures r ∈ [0, 1]. In DoReFa-Net, quantization of activations r to k-bit is simply: f_α^k(r) = quantize_k(r). (11) | 1606.06160#13 | 1606.06160#15 | 1606.06160 | [
"1502.03167"
] |
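The activation quantizer of Eqn. 11 is correspondingly simple. In the sketch below, the clip stands in for the bounded activation h, which the paper only requires to keep its output in [0, 1]; the choice of clip and the function names are assumptions for illustration, not the released code.

```python
import numpy as np

def quantize_k(r, k):
    n = float(2 ** k - 1)
    return np.round(r * n) / n

def h(x):
    # One possible bounded activation keeping outputs in [0, 1] (an assumption).
    return np.clip(x, 0.0, 1.0)

def quantize_activations(x, k):
    # Eqn. 11: f_alpha^k(r) = quantize_k(r), applied to the bounded activation.
    return quantize_k(h(x), k)
```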
1606.06160#15 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | (Footnote 4: Here ∂r_o/∂r_i is well-defined because we already defined quantize_k as an STE.) 2.5 LOW BITWIDTH QUANTIZATION OF GRADIENTS We have demonstrated deterministic quantization to produce low bitwidth weights and activations. However, we find stochastic quantization is necessary for low bitwidth gradients to be effective. This is in agreement with the experiments of (Gupta et al., 2015) on 16-bit weights and 16-bit gradients. To quantize gradients to low bitwidth, it is important to note that gradients are unbounded and may have a significantly larger value range than activations. Recall that in Eqn. 11 we can map the range of activations to [0, 1] by passing values through differentiable nonlinear functions. However, this kind of construction does not exist for gradients. Therefore we designed the following function for k-bit quantization of gradients (the noise term is added below, giving Eqn. 12): 2 max_0(|dr|) [ quantize_k( dr / (2 max_0(|dr|)) + 1/2 ) - 1/2 ]. Here dr = ∂c/∂ | 1606.06160#14 | 1606.06160#16 | 1606.06160 | [
"1502.03167"
] |
1606.06160#16 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | r is the back-propagated gradient of the output r of some layer, and the maximum is taken over all axes of the gradient tensor dr except for the mini-batch axis (therefore each instance in a mini-batch will have its own scaling factor). The above function first applies an affine transform on the gradient to map it into [0, 1], and then inverts the transform after quantization. To further compensate for the potential bias introduced by gradient quantization, we introduce an extra noise function N(k) = σ / | 1606.06160#15 | 1606.06160#17 | 1606.06160 | [
"1502.03167"
] |
1606.06160#17 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | (2^k - 1), where σ ~ Uniform(-0.5, 0.5) (see Footnote 5). The noise therefore has the same magnitude as the possible quantization error. We find the artificial noise to be critical for achieving good performance. Finally, the expression we'll use to quantize gradients to k-bit numbers is as follows: f_γ^k(dr) = 2 max_0(|dr|) [ quantize_k( dr / (2 max_0(|dr|)) + 1/2 + N(k) ) - 1/2 ]. (12) The quantization of gradients is done in the backward pass only. Hence we apply the following STE on the output of each convolution layer: | 1606.06160#16 | 1606.06160#18 | 1606.06160 | [
"1502.03167"
] |
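Before the STE of Eqns. 13-14 in the next chunk, here is a NumPy sketch of the stochastic gradient quantizer f_gamma^k of Eqn. 12. The axis handling and the random-number plumbing are assumptions for illustration, not the paper's released implementation.

```python
import numpy as np

def quantize_k(r, k):
    n = float(2 ** k - 1)
    return np.round(r * n) / n

def quantize_gradients(dr, k, rng=None):
    """Eqn. 12: scale per mini-batch instance, add uniform noise N(k), quantize."""
    rng = np.random.default_rng() if rng is None else rng
    # max_0(|dr|): maximum over all axes except the mini-batch axis (axis 0).
    scale = 2.0 * np.max(np.abs(dr), axis=tuple(range(1, dr.ndim)), keepdims=True)
    noise = rng.uniform(-0.5, 0.5, size=dr.shape) / (2 ** k - 1)   # N(k)
    r = dr / scale + 0.5 + noise
    return scale * (quantize_k(r, k) - 0.5)
```

This quantizer is applied only to the incoming gradient of each convolution layer during the backward pass, which is exactly what the STE of Eqns. 13-14 below makes explicit.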
1606.06160#18 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Forward: r_o = r_i, (13) Backward: ∂c/∂r_i = f_γ^k(∂c/∂r_o). (14) Algorithm 1: Training an L-layer DoReFa-Net with W-bit weights and A-bit activations using G-bit gradients. Weights, activations and gradients are quantized according to Eqn. 9, Eqn. 11 and Eqn. 12, respectively. Require: a minibatch of inputs and targets (a_0, a*), previous weights W, learning rate η. Ensure: updated weights W^{t+1}. {1. Computing the parameter gradients:} {1.1 Forward propagation:} 1: for k = 1 to L do 2: W_k^b ← f_w^W(W_k) 3: ã_k ← forward(a_{k-1}^b, W_k^b) 4: a_k ← h(ã_k) 5: if k < L then 6: a_k^b ← f_α^A(a_k) 7: end if 8: Optionally apply pooling 9: end for {1.2 Backward propagation:} Compute g_{a_L} = ∂C/∂a_L knowing a_L and a*. 10: for k = L to 1 do 11: Back-propagate g_{a_k} through activation function h 12: g_{a_k}^b ← f_γ^G(g_{a_k}) 13: g_{a_{k-1}} ← backward_input(g_{a_k}^b, W_k^b) 14: g_{W_k^b} ← backward_weight(g_{a_k}^b, a_{k-1}^b) 15: Back-propagate gradients through pooling layer if there is one 16: end for {2. Accumulating the parameter gradients:} 17: for k = 1 to L do 18: g_{W_k} = g_{W_k^b} ∂W_k^b/∂W_k 19: W_k^{t+1} ← Update(W_k, g_{W_k}, η) 20: end for (Footnote 5: Note here we do not need to clip the value of N(k) as the two end points of a uniform distribution are almost surely never attained.) | 1606.06160#17 | 1606.06160#19 | 1606.06160 | [
"1502.03167"
] |
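The per-layer structure of Algorithm 1 can be sketched for a single fully-connected layer as follows (plain NumPy, one layer only). Function and variable names are assumptions, back-propagation through h is omitted, and the derivative of the weight quantizer in step 18 is treated as identity in this sketch; it is meant only to show how the three quantizers plug into the forward and backward computations.

```python
import numpy as np

def quantize_k(r, k):
    n = float(2 ** k - 1)
    return np.round(r * n) / n

def fw(W, k):                         # f_w^k, Eqn. 9 (k > 1 case)
    t = np.tanh(W)
    t = t / (2 * np.max(np.abs(t))) + 0.5
    return 2 * quantize_k(t, k) - 1

def fg(g, k, rng):                    # f_gamma^k, Eqn. 12
    scale = 2 * np.max(np.abs(g), axis=tuple(range(1, g.ndim)), keepdims=True)
    noise = rng.uniform(-0.5, 0.5, g.shape) / (2 ** k - 1)
    return scale * (quantize_k(g / scale + 0.5 + noise, k) - 0.5)

def layer_forward(W, a_prev_b, w_bits, a_bits):
    Wb = fw(W, w_bits)                # step 2
    a = a_prev_b @ Wb                 # step 3: forward()
    a = np.clip(a, 0.0, 1.0)          # step 4: bounded activation h (assumed clip)
    ab = quantize_k(a, a_bits)        # step 6: f_alpha^A
    return Wb, ab

def layer_backward(Wb, a_prev_b, g_a, g_bits, rng):
    gb = fg(g_a, g_bits, rng)         # step 12: the STE of Eqns. 13-14
    g_prev = gb @ Wb.T                # step 13: backward_input()
    g_Wb = a_prev_b.T @ gb            # step 14: backward_weight()
    return g_prev, g_Wb               # g_Wb drives the weight update (steps 17-20)
```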
1606.06160#19 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | 2.6 THE ALGORITHM FOR DOREFA-NET We give a sample training algorithm of DoReFa-Net as Algorithm 1. W.l.o.g., the network is assumed to have a feed-forward linear topology, and details like batch normalization and pooling layers are omitted. Note that all the expensive operations forward, backward_input, backward_weight, in convolutional as well as fully-connected layers, are now operating on low bitwidth numbers. By construction, there is always an affine mapping between these low bitwidth numbers and fixed-point integers. As a result, all the expensive operations can be accelerated significantly by the fixed-point integer dot product kernel (Eqn. 3). 2.7 FIRST AND THE LAST LAYER Among all layers in a DCNN, the first and the last layers appear to be different from the rest, as they are interfacing the input and output of the network. For the first layer, the input is often an image, which may contain 8-bit features. On the other hand, the output layer typically produces approximately one-hot vectors, which are close to bit vectors by defi | 1606.06160#18 | 1606.06160#20 | 1606.06160 | [
"1502.03167"
] |