id
stringlengths 12
15
| title
stringlengths 8
162
| content
stringlengths 1
17.6k
| prechunk_id
stringlengths 0
15
| postchunk_id
stringlengths 0
15
| arxiv_id
stringlengths 10
10
| references
listlengths 1
1
|
---|---|---|---|---|---|---|
1606.08514#62 | Towards Verified Artificial Intelligence | Robust Control of Markov Decision Processes with Uncertain Transition Matrices. Journal of Operations Research, pages 780â 798, 2005. [57] Pierluigi Nuzzo, Jiwei Li, Alberto L. Sangiovanni-Vincentelli, Yugeng Xi, and Dewei Li. Stochastic assume-guarantee contracts for cyber-physical system design. ACM Trans. Embed. Comput. Syst., 18(1), January 2019. [58] S. Owre, J. M. Rushby, and N. Shankar. PVS: A prototype veriï¬ cation system. In Deepak Kapur, editor, 11th International Conference on Automated Deduction (CADE), volume 607 of Lecture Notes in Artiï¬ cial Intelligence, pages 748â 752. Springer-Verlag, June 1992. [59] Judea Pearl. The seven tools of causal inference, with reï¬ ections on machine learning. Communica- tions of the ACM, 62(3):54â 60, 2019. [60] Amir Pnueli and Roni Rosner. | 1606.08514#61 | 1606.08514#63 | 1606.08514 | [
"1606.06565"
] |
1606.08514#63 | Towards Verified Artificial Intelligence | On the synthesis of a reactive module. In Conference Record of the Sixteenth Annual ACM Symposium on Principles of Programming Languages, Austin, Texas, USA, January 11-13, 1989, pages 179â 190, 1989. [61] Alberto Puggelli, Wenchao Li, Alberto Sangiovanni-Vincentelli, and Sanjit A. Seshia. Polynomial- time veriï¬ cation of PCTL properties of MDPs with convex uncertainties. In Proceedings of the 25th International Conference on Computer-Aided Veriï¬ cation (CAV), July 2013. [62] Jean-Pierre Queille and Joseph Sifakis. Speciï¬ cation and veriï¬ cation of concurrent systems in CESAR. In Symposium on Programming, number 137 in LNCS, pages 337â 351, 1982. [63] John Rushby. | 1606.08514#62 | 1606.08514#64 | 1606.08514 | [
"1606.06565"
] |
1606.08514#64 | Towards Verified Artificial Intelligence | Using model checking to help discover mode confusions and other automation surprises. Reliability Engineering & System Safety, 75(2):167â 177, 2002. 16 [64] Stuart Russell, Tom Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Demis Hassabis, Shane Legg, Mustafa Suleyman, Dileep George, and Scott Phoenix. Letter to the editor: Research priorities for robust and beneï¬ cial artiï¬ cial intelligence: | 1606.08514#63 | 1606.08514#65 | 1606.08514 | [
"1606.06565"
] |
1606.08514#65 | Towards Verified Artificial Intelligence | An open letter. AI Magazine, 36(4), 2015. [65] Stuart J Russell. Rationality and intelligence. Artiï¬ cial Intelligence, 94(1-2):57â 77, 1997. [66] Stuart Jonathan Russell and Peter Norvig. Artiï¬ cial intelligence: a modern approach. Prentice hall, 2010. [67] Dorsa Sadigh, Katherine Driggs-Campbell, Alberto Puggelli, Wenchao Li, Victor Shia, Ruzena Bajcsy, Alberto L. Sangiovanni-Vincentelli, S. Shankar Sastry, and Sanjit A. Seshia. | 1606.08514#64 | 1606.08514#66 | 1606.08514 | [
"1606.06565"
] |
1606.08514#66 | Towards Verified Artificial Intelligence | Data-driven probabilistic modeling and veriï¬ cation of human driver behavior. In Formal Veriï¬ cation and Modeling in Human- Machine Systems, AAAI Spring Symposium, March 2014. [68] Dorsa Sadigh and Ashish Kapoor. Safe control under uncertainty with probabilistic signal temporal logic. In Proceedings of Robotics: Science and Systems, AnnArbor, Michigan, June 2016. [69] Dorsa Sadigh, Shankar Sastry, Sanjit A. Seshia, and Anca D. Dragan. | 1606.08514#65 | 1606.08514#67 | 1606.08514 | [
"1606.06565"
] |
1606.08514#67 | Towards Verified Artificial Intelligence | Information gathering actions over human internal state. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016. [70] Alberto Sangiovanni-Vincentelli, Werner Damm, and Roberto Passerone. Taming Dr. Frankenstein: Contract-based design for cyber-physical systems. European journal of control, 18(3):217â 238, 2012. [71] John D Schierman, Michael D DeVore, Nathan D Richards, Neha Gandhi, Jared K Cooper, Kenneth R Horneman, Scott Stoller, and Scott Smolka. | 1606.08514#66 | 1606.08514#68 | 1606.08514 | [
"1606.06565"
] |
1606.08514#68 | Towards Verified Artificial Intelligence | Runtime assurance framework development for highly adaptive ï¬ ight control systems. Technical report, Barron Associates, Inc. Charlottesville, 2015. [72] Daniel Selsam, Percy Liang, and David L. Dill. Developing bug-free machine learning systems with In Proceedings of the 34th International Conference on Machine Learning, formal mathematics. (ICML), volume 70 of Proceedings of Machine Learning Research, pages 3047â 3056. PMLR, 2017. [73] Sanjit A. Seshia. Sciduction: Combining induction, deduction, and structure for veriï¬ cation and syn- thesis. In Proceedings of the Design Automation Conference (DAC), pages 356â 365, June 2012. [74] Sanjit A. Seshia. | 1606.08514#67 | 1606.08514#69 | 1606.08514 | [
"1606.06565"
] |
1606.08514#69 | Towards Verified Artificial Intelligence | Combining induction, deduction, and structure for veriï¬ cation and synthesis. Pro- ceedings of the IEEE, 103(11):2036â 2051, 2015. [75] Sanjit A. Seshia. Compositional veriï¬ cation without compositional speciï¬ cation for learning-based systems. Technical Report UCB/EECS-2017-164, EECS Department, University of California, Berke- ley, Nov 2017. [76] Sanjit A. Seshia. Introspective environment modeling. | 1606.08514#68 | 1606.08514#70 | 1606.08514 | [
"1606.06565"
] |
1606.08514#70 | Towards Verified Artificial Intelligence | In 19th International Conference on Runtime Veriï¬ cation (RV), pages 15â 26, 2019. [77] Sanjit A. Seshia, Ankush Desai, Tommaso Dreossi, Daniel Fremont, Shromona Ghosh, Edward Kim, Sumukh Shivakumar, Marcell Vazquez-Chanlatte, and Xiangyu Yue. Formal speciï¬ cation for deep neural networks. In Proceedings of the International Symposium on Automated Technology for Veriï¬ - cation and Analysis (ATVA), pages 20â 34, October 2018. [78] Lui Sha. | 1606.08514#69 | 1606.08514#71 | 1606.08514 | [
"1606.06565"
] |
1606.08514#71 | Towards Verified Artificial Intelligence | Using simplicity to control complexity. IEEE Software, 18(4):20â 28, 2001. 17 [79] Yasser Shoukry, Pierluigi Nuzzo, Alberto Sangiovanni-Vincentelli, Sanjit A. Seshia, George J. Pappas, In Proceedings of the 10th and Paulo Tabuada. Smc: Satisï¬ ability modulo convex optimization. International Conference on Hybrid Systems: Computation and Control (HSCC), April 2017. [80] Joseph Sifakis. System design automation: Challenges and limitations. Proceedings of the IEEE, 103(11):2093â 2103, 2015. [81] Herbert A Simon. Bounded rationality. In Utility and Probability, pages 15â 18. Springer, 1990. [82] Armando Solar-Lezama, Liviu Tancau, Rastislav Bod´ık, Sanjit A. Seshia, and Vijay A. Saraswat. | 1606.08514#70 | 1606.08514#72 | 1606.08514 | [
"1606.06565"
] |
1606.08514#72 | Towards Verified Artificial Intelligence | Combinatorial sketching for ï¬ nite programs. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 404â 415. ACM Press, October 2006. [83] Claire Tomlin, Ian Mitchell, Alexandre M. Bayen, and Meeko Oishi. Computational techniques for the veriï¬ cation of hybrid systems. Proceedings of the IEEE, 91(7):986â 1001, 2003. [84] Marcell Vazquez-Chanlatte, Jyotirmoy V. Deshmukh, Xiaoqing Jin, and Sanjit A. Seshia. | 1606.08514#71 | 1606.08514#73 | 1606.08514 | [
"1606.06565"
] |
1606.08514#73 | Towards Verified Artificial Intelligence | Logical In 29th International Conference on Computer Aided clustering and learning for time-series data. Veriï¬ cation (CAV), pages 305â 325, 2017. [85] Marcell Vazquez-Chanlatte, Susmit Jha, Ashish Tiwari, Mark K. Ho, and Sanjit A. Seshia. Learning task speciï¬ cations from demonstrations. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems (NeurIPS), pages 5372â 5382, Decem- ber 2018. [86] Jeannette M Wing. | 1606.08514#72 | 1606.08514#74 | 1606.08514 | [
"1606.06565"
] |
1606.08514#74 | Towards Verified Artificial Intelligence | A speciï¬ erâ s introduction to formal methods. IEEE Computer, 23(9):8â 24, Septem- ber 1990. [87] Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Van den Broeck. A semantic loss function for deep learning with symbolic knowledge. In Proceedings of the 35th International Conference on Machine Learning, (ICML), volume 80 of Proceedings of Machine Learning Research, pages 5498â | 1606.08514#73 | 1606.08514#75 | 1606.08514 | [
"1606.06565"
] |
1606.08514#75 | Towards Verified Artificial Intelligence | 5507. PMLR, 2018. [88] Tomoya Yamaguchi, Tomoyuki Kaga, Alexandre Donze, and Sanjit A. Seshia. Combining requirement mining, software model checking, and simulation-based veriï¬ cation for industrial automotive systems. Technical Report UCB/EECS-2016-124, EECS Department, University of California, Berkeley, June 2016. [89] Xiaojin Zhu, Adish Singla, Sandra Zilles, and Anna N Rafferty. An overview of machine teaching. arXiv preprint arXiv:1801.05927, 2018. | 1606.08514#74 | 1606.08514#76 | 1606.08514 | [
"1606.06565"
] |
1606.08514#76 | Towards Verified Artificial Intelligence | 18 | 1606.08514#75 | 1606.08514 | [
"1606.06565"
] |
|
1606.07947#0 | Sequence-Level Knowledge Distillation | 6 1 0 2 p e S 2 2 ] L C . s c [ 4 v 7 4 9 7 0 . 6 0 6 1 : v i X r a # Sequence-Level Knowledge Distillation # Yoon Kim [email protected] # Alexander M. Rush [email protected] School of Engineering and Applied Sciences Harvard University Cambridge, MA, USA # Abstract Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical ap- proaches. However to reach competitive per- formance, NMT models need to be exceed- ingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural mod- els in other domains to the problem of NMT. We demonstrate that standard knowledge dis- tillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to elimi- nate the need for beam search (even when ap- plied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in per- formance. | 1606.07947#1 | 1606.07947 | [
"1506.04488"
] |
|
1606.07947#1 | Sequence-Level Knowledge Distillation | It is also signiï¬ cantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy de- coding/beam search. Applying weight prun- ing on top of knowledge distillation results in a student model that has 13à fewer param- eters than the original teacher model, with a decrease of 0.4 BLEU. proaches. NMT systems directly model the proba- bility of the next word in the target sentence sim- ply by conditioning a recurrent neural network on the source sentence and previously generated target words. While both simple and surprisingly accurate, NMT systems typically need to have very high ca- pacity in order to perform well: Sutskever et al. (2014) used a 4-layer LSTM with 1000 hidden units per layer (herein 4à 1000) and Zhou et al. (2016) ob- tained state-of-the-art results on English â French with a 16-layer LSTM with 512 units per layer. The sheer size of the models requires cutting-edge hard- ware for training and makes using the models on standard setups very challenging. This issue of excessively large networks has been observed in several other domains, with much fo- cus on fully-connected and convolutional networks for multi-class classiï¬ | 1606.07947#0 | 1606.07947#2 | 1606.07947 | [
"1506.04488"
] |
1606.07947#2 | Sequence-Level Knowledge Distillation | cation. Researchers have par- ticularly noted that large networks seem to be nec- essary for training, but learn redundant representa- tions in the process (Denil et al., 2013). Therefore compressing deep models into smaller networks has been an active area of research. As deep learning systems obtain better results on NLP tasks, compres- sion also becomes an important practical issue with applications such as running deep learning models for speech and translation locally on cell phones. 1 # 1 Introduction | 1606.07947#1 | 1606.07947#3 | 1606.07947 | [
"1506.04488"
] |
1606.07947#3 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015) is a deep learning- based method for translation that has recently shown promising results as an alternative to statistical ap- Existing compression methods generally fall into two categories: (1) pruning and (2) knowledge dis- tillation. Pruning methods (LeCun et al., 1990; He et al., 2014; Han et al., 2016), zero-out weights or entire neurons based on an importance criterion: Le- Cun et al. (1990) use (a diagonal approximation to) the Hessian to identify weights whose removal min- imally impacts the objective function, while Han et al. (2016) remove weights based on threshold- ing their absolute values. Knowledge distillation ap- proaches (Bucila et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015) learn a smaller student network to mimic the original teacher network by minimiz- ing the loss (typically L2 or cross-entropy) between the student and teacher output. In this work, we investigate knowledge distilla- tion in the context of neural machine translation. We note that NMT differs from previous work which has mainly explored non-recurrent models in the multi- class prediction setting. For NMT, while the model is trained on multi-class prediction at the word-level, it is tasked with predicting complete sequence out- puts conditioned on previous decisions. With this difference in mind, we experiment with standard knowledge distillation for NMT and also propose two new versions of the approach that attempt to ap- proximately match the sequence-level (as opposed to word-level) distribution of the teacher network. This sequence-level approximation leads to a sim- ple training procedure wherein the student network is trained on a newly generated dataset that is the result of running beam search with the teacher net- work. We run experiments to compress a large state-of- the-art 4 à 1000 LSTM model, and ï¬ nd that with sequence-level knowledge distillation we are able to learn a 2 à 500 LSTM that roughly matches the per- formance of the full system. We see similar results compressing a 2 à | 1606.07947#2 | 1606.07947#4 | 1606.07947 | [
"1506.04488"
] |
1606.07947#4 | Sequence-Level Knowledge Distillation | 500 model down to 2 à 100 on a smaller data set. Furthermore, we observe that our proposed approach has other beneï¬ ts, such as not requiring any beam search at test-time. As a re- sult we are able to perform greedy decoding on the 2 à 500 model 10 times faster than beam search on the 4 à 1000 model with comparable performance. Our student models can even be run efï¬ ciently on a standard smartphone.1 Finally, we apply weight pruning on top of the student network to obtain a model that has 13à fewer parameters than the origi- nal teacher model. We have released all the code for the models described in this paper.2 1https://github.com/harvardnlp/nmt-android 2https://github.com/harvardnlp/seq2seq-attn # 2 Background # 2.1 Sequence-to-Sequence with Attention Let s = [s1, . . . , sI ] and t = [t1, . . . , tJ ] be (random variable sequences representing) the source/target sentence, with I and J respectively being the source/target lengths. Machine translation involves ï¬ nding the most probable target sentence given the source: argmax tâ T p(t | s) where T is the set of all possible sequences. NMT models parameterize p(t | s) with an encoder neural network which reads the source sentence and a de- coder neural network which produces a distribution over the target sentence (one word at a time) given the source. We employ the attentional architecture from Luong et al. (2015), which achieved state-of- the-art results on English â | 1606.07947#3 | 1606.07947#5 | 1606.07947 | [
"1506.04488"
] |
1606.07947#5 | Sequence-Level Knowledge Distillation | German translation.3 # 2.2 Knowledge Distillation Knowledge distillation describes a class of methods for training a smaller student network to perform better by learning from a larger teacher network (in addition to learning from the training data set). We generally assume that the teacher has previously been trained, and that we are estimating parame- ters for the student. Knowledge distillation suggests training by matching the studentâ s predictions to the teacherâ s predictions. For classiï¬ cation this usually means matching the probabilities either via L2 on the log scale (Ba and Caruana, 2014) or by cross- entropy (Li et al., 2014; Hinton et al., 2015). Concretely, assume we are learning a multi-class classiï¬ er over a data set of examples of the form (x, y) with possible classes V. The usual training criteria is to minimize NLL for each example from the training data, IVI Lni(9) = - S- l{y = k} log p(y = k | x; 0) k=1 where 1{·} is the indicator function and p the distribution from our model (parameterized by θ). 3Speciï¬ cally, we use the global-general attention model with the input-feeding approach. We refer the reader to the orig- inal paper for further details. Ground Truth â E cD I ti Teacher Network Student Network ou La BI ul 1 if â TTT Teacher Network eo oe Word-Level Knowledge Distillation Sequence-Level Knowledge Distillation Ground Truth e ¢ oD vl # AE He a | EEE LEN) aos EEL Nea â â | aad Student Network Sequence-Level Interpolation Figure 1: | 1606.07947#4 | 1606.07947#6 | 1606.07947 | [
"1506.04488"
] |
1606.07947#6 | Sequence-Level Knowledge Distillation | Overview of the different knowledge distillation approaches. In word-level knowledge distillation (left) cross-entropy is minimized between the student/teacher distributions (yellow) for each word in the actual target sequence (ECD), as well as between the student distribution and the degenerate data distribution, which has all of its probabilitiy mass on one word (black). In sequence-level knowledge distillation (center) the student network is trained on the output from beam search of the teacher network that had the highest score (ACF). In sequence-level interpolation (right) the student is trained on the output from beam search of the teacher network that had the highest sim with the target sequence (ECE). This objective can be seen as minimizing the cross- entropy between the degenerate data distribution (which has all of its probability mass on one class) and the model distribution p(y | x; θ). Since this new objective has no direct term for the training data, it is common practice to interpolate between the two losses, In knowledge distillation, we assume access to a learned teacher distribution q(y | x; θT ), possibly trained over the same data set. Instead of minimiz- ing cross-entropy with the observed data, we instead minimize the cross-entropy with the teacherâ s prob- ability distribution, L(θ; θT ) = (1 â α)LNLL(θ) + αLKD(θ; θT ) where α is mixture parameter combining the one-hot distribution and the teacher distribution. # 3 Knowledge Distillation for NMT | 1606.07947#5 | 1606.07947#7 | 1606.07947 | [
"1506.04488"
] |
1606.07947#7 | Sequence-Level Knowledge Distillation | vI Lxp(0;0r) =â So aly = k| ae; Or) x k=1 log p(y = k| x; 6) The large sizes of neural machine translation sys- tems make them an ideal candidate for knowledge distillation approaches. In this section we explore three different ways this technique can be applied to NMT. where θT parameterizes the teacher distribution and remains ï¬ xed. Note the cross-entropy setup is iden- tical, but the target distribution is no longer a sparse distribution.4 Training on q(y | x; θT ) is attractive since it gives more information about other classes similarity between for a given data point (e.g. classes) and has less variance in gradients (Hinton et al., 2015). 4 In some cases the entropy of the teacher/student distribu- tion is increased by annealing it with a temperature term Ï | 1606.07947#6 | 1606.07947#8 | 1606.07947 | [
"1506.04488"
] |
1606.07947#8 | Sequence-Level Knowledge Distillation | > 1 # 3.1 Word-Level Knowledge Distillation NMT systems are trained directly to minimize word NLL, LWORD-NLL, at each position. Therefore if we have a teacher model, standard knowledge distil- lation for multi-class cross-entropy can be applied. We deï¬ ne this distillation for a sentence as, J Wi Lworv-kp =â >>> a(t) =k|s,t<j) x jal k=l # log p(tj = k | s, t<j) | 1606.07947#7 | 1606.07947#9 | 1606.07947 | [
"1506.04488"
] |
1606.07947#9 | Sequence-Level Knowledge Distillation | Ë p(y | x) â p(y | x) 1 Ï After testing Ï â {1, 1.5, 2} we found that Ï = 1 worked best. where V is the target vocabulary set. The student can further be trained to optimize the mixture of LWORD-KD and LWORD-NLL. In the context of NMT, we refer to this approach as word-level knowledge distillation and illustrate this in Figure 1 (left). # 3.2 Sequence-Level Knowledge Distillation Word-level knowledge distillation allows transfer of these local word distributions. Ideally however, we would like the student model to mimic the teacherâ s actions at the sequence-level. The sequence distri- bution is particularly important for NMT, because wrong predictions can propagate forward at test- time. First, consider the sequence-level distribution speciï¬ ed by the model over all possible sequences t â T , p(t|s) = | | p(tj|s,t<;) te # â equence-tevel for any length J. The sequence-level negative log- likelihood for NMT then involves matching the one- hot distribution over all complete sequences, LSEQ-NLL = â S- 1{t = y} log p(t | s) teT J Wi => - S- S- l{y; => k} log p(t; =k | s,t<;) jal k=l # j=1 = LWORD-NLL where y = [y1, . . . , yJ ] is the observed sequence. this just shows that from a negative Of course, log likelihood perspective, minimizing word-level NLL and sequence-level NLL are equivalent in this model. But now consider the case of sequence-level knowledge distillation. As before, we can simply replace the distribution from the data with a prob- ability distribution derived from our teacher model. However, instead of using a single word prediction, we use q(t | s) to represent the teacherâ s sequence distribution over the sample space of all possible se- quences, LsEQ-KD = â S- q(t | s) log p(t | s) teT Note that LSEQ-KD is inherently different from LWORD-KD, as the sum is over an exponential num- ber of terms. | 1606.07947#8 | 1606.07947#10 | 1606.07947 | [
"1506.04488"
] |
1606.07947#10 | Sequence-Level Knowledge Distillation | Despite its intractability, we posit that this sequence-level objective is worthwhile. It gives the teacher the chance to assign probabilities to complete sequences and therefore transfer a broader range of knowledge. We thus consider an approxi- mation of this objective. Our simplest approximation is to replace the teacher distribution q with its mode, q(t | s) â ¼ 1{t = argmax q(t | s)} tâ T Observing that ï¬ nding the mode is itself intractable, we use beam search to ï¬ nd an approximation. The loss is then Lsegxyv © â )_ 1{t =Â¥}logp(t|s) teT = â logp(t=y|s | 1606.07947#9 | 1606.07947#11 | 1606.07947 | [
"1506.04488"
] |
1606.07947#11 | Sequence-Level Knowledge Distillation | where Ë y is now the output from running beam search with the teacher model. Using the mode seems like a poor approximation for the teacher distribution q(t | s), as we are ap- proximating an exponentially-sized distribution with a single sample. However, previous results showing the effectiveness of beam search decoding for NMT lead us to belief that a large portion of qâ s mass lies in a single output sequence. In fact, in experiments we ï¬ nd that with beam of size 1, q(Ë y | s) (on aver- age) accounts for 1.3% of the distribution for Ger- man â English, and 2.3% for Thai â English (Ta- ble 1: p(t = Ë y)).5 | 1606.07947#10 | 1606.07947#12 | 1606.07947 | [
"1506.04488"
] |
1606.07947#12 | Sequence-Level Knowledge Distillation | To summarize, sequence-level knowledge distil- lation suggests to: (1) train a teacher model, (2) run beam search over the training set with this model, (3) train the student network with cross-entropy on this new dataset. Step (3) is identical to the word-level NLL process except now on the newly-generated data set. This is shown in Figure 1 (center). 5Additionally there are simple ways to better approximate q(t | s). One way would be to consider a K-best list from beam search and renormalizing the probabilities, a(t |s) LeeTx q(t |s) q(t |s) ~ where TK is the K-best list from beam search. This would increase the training set by a factor of K. A beam of size 5 captures 2.8% of the distribution for German â English, and 3.8% for Thai â English. | 1606.07947#11 | 1606.07947#13 | 1606.07947 | [
"1506.04488"
] |
1606.07947#13 | Sequence-Level Knowledge Distillation | Another alternative is to use a Monte Carlo estimate and sample from the teacher model (since LSEQ-KD = Etâ ¼q(t | s)[ â log p(t | s) ]). However in practice we found the (approximate) mode to work well. # 3.3 Sequence-Level Interpolation Next we consider integrating the training data back into the process, such that we train the student model as a mixture of our sequence-level teacher- generated data (LSEQ-KD) with the original training data (LSEQ-NLL), L=(1â a)Lszqnitt + oLsEQ-KD = ~(1~a) log p(y |s) â @ > (t|s) log p(t |) teT | 1606.07947#12 | 1606.07947#14 | 1606.07947 | [
"1506.04488"
] |
1606.07947#14 | Sequence-Level Knowledge Distillation | where y is the gold target sequence. Since the second term is intractable, we could again apply the mode approximation from the pre- vious section, L = â (1 â α) log p(y | s) â α log p(Ë y | s) and train on both observed (y) and teacher- generated (Ë y) data. However, this process is non- ideal for two reasons: (1) unlike for standard knowl- edge distribution, it doubles the size of the training data, and (2) it requires training on both the teacher- generated sequence and the true sequence, condi- tioned on the same source input. The latter concern is particularly problematic since we observe that y and Ë y are often quite different. As an alternative, we propose a single-sequence approximation that is more attractive in this setting. This approach is inspired by local updating (Liang et al., 2006), a method for discriminative train- ing in statistical machine translation (although to our knowledge not for knowledge distillation). Lo- cal updating suggests selecting a training sequence which is close to y and has high probability under the teacher model, | 1606.07947#13 | 1606.07947#15 | 1606.07947 | [
"1506.04488"
] |
1606.07947#15 | Sequence-Level Knowledge Distillation | Ë y = argmax sim(t, y)q(t | s) tâ T where sim is a function measuring closeness (e.g. Jaccard similarity or BLEU (Papineni et al., 2002)). Following local updating, we can approximate this sequence by running beam search and choosing Ë y â argmax sim(t, y) tâ TK where TK is the K-best list from beam search. We take sim to be smoothed sentence-level BLEU (Chen and Cherry, 2014). We justify training on y from a knowledge distil- lation perspective with the following generative pro- cess: suppose that there is a true target sequence (which we do not observe) that is first generated from the underlying data distribution D. And further suppose that the target sequence that we observe (y) is a noisy version of the unobserved true sequence: i.e. (i) t ~ D, (ii) y ~ e(t), where e(t) is, for ex- ample, a noise function that independently replaces each element in t with a random element in V with some small probability] In such a case, ideally the studentâ s distribution should match the mixture dis- tribution, DSEQ-Inter â ¼ (1 â α)D + αq(t | s) In this setting, due to the noise assumption, D now has signiï¬ cant probability mass around a neighbor- hood of y (not just at y), and therefore the argmax of the mixture distribution is likely something other than y (the observed sequence) or Ë y (the output from beam search). We can see that Ë y is a natural approx- imation to the argmax of this mixture distribution between D and q(t | s) for some α. We illustrate this framework in Figure 1 (right) and visualize the distribution over a real example in Figure 2. # 4 Experimental Setup To test out these approaches, we conduct two sets of NMT experiments: high resource (English â Ger- man) and low resource (Thai â English). The English-German data comes from WMT 2014)7] The training set has 4m sentences and we take newstest2012/newstest2013 as the dev set and newstest2014 as the test set. | 1606.07947#14 | 1606.07947#16 | 1606.07947 | [
"1506.04488"
] |
1606.07947#16 | Sequence-Level Knowledge Distillation | We keep the top 50k most frequent words, and replace the rest with UNK. The teacher model is a 4 x 1000 LSTM (as in |Lu-| jong et al. (2015)) and we train two student models: 2 x 300 and 2 x 500. The Thai-English data comes from IWSLT 20155] There are 90k sentences in the ®While we employ a simple (unrealistic) noise function for illustrative purposes, the generative story is quite plausible if we consider a more elaborate noise function which includes addi- tional sources of noise such as phrase reordering, replacement of words with synonyms, etc. One could view translation hav- ing two sources of variance that should be modeled separately: variance due to the source sentence (t ~ D), and variance due to the individual translator (y ~ ⠬(t)). # 7http://statmt.org/wmt14 8https://sites.google.com/site/iwsltevaluation2015/mt-track | 1606.07947#15 | 1606.07947#17 | 1606.07947 | [
"1506.04488"
] |
1606.07947#17 | Sequence-Level Knowledge Distillation | », (Room cancellation is free up to 15 days prior to arrival [Up to 15 days before arrival are free of charge}. of et ple eee / [Bookings are free of charge 15 days before arrival . Up to 15 days before arrival, <unk> are free o EXPOS o> -[Up to 15 days before arrival <unk> is free oe No ¢ [Up to 15 days before arrival <unk> are free .]) [Te . 2 lve 7 ) =(/ [Ris tree of charge until 15 days before arrival] (*. \ ei SN I - Up to 15 days before arrival will be free off Clay [Up to 15 days prior to arrival , cancellation charges | 1606.07947#16 | 1606.07947#18 | 1606.07947 | [
"1506.04488"
] |
1606.07947#18 | Sequence-Level Knowledge Distillation | Figure 2: Visualization of sequence-level interpolation on an example German â English sentence: Bis 15 Tage vor An- reise sind Zimmer-Annullationen kostenlos. We run beam search, plot the ï¬ nal hidden state of the hypotheses using t-SNE and show the corresponding (smoothed) probabilities with con- tours. In the above example, the sentence that is at the top of the beam after beam search (green) is quite far away from gold (red), so we train the model on a sentence that is on the beam but had the highest sim (e.g. BLEU) to gold (purple). | 1606.07947#17 | 1606.07947#19 | 1606.07947 | [
"1506.04488"
] |
1606.07947#19 | Sequence-Level Knowledge Distillation | training set and we take 2010/2011/2012 data as the dev set and 2012/2013 as the test set, with a vocabu- lary size is 25k. Size of the teacher model is 2 Ã 500 (which performed better than 4Ã 1000, 2Ã 750 mod- els), and the student model is 2Ã 100. Other training details mirror Luong et al. (2015). on evaluate multi-bleu.perl, the following variations: We tokenized BLEU with experiment with and | 1606.07947#18 | 1606.07947#20 | 1606.07947 | [
"1506.04488"
] |
1606.07947#20 | Sequence-Level Knowledge Distillation | Word-Level Knowledge Distillation (Word-KD) Student is trained on the original data and addition- ally trained to minimize the cross-entropy of the teacher distribution at the word-level. We tested α â {0.5, 0.9} and found α = 0.5 to work better. Sequence-Level Knowledge Distillation (Seq-KD) Student is trained on the teacher-generated data, which is the result of running beam search and tak- ing the highest-scoring sequence with the teacher model. We use beam size K = 5 (we did not see improvements with a larger beam). Sequence-Level Interpolation (Seq-Inter) Stu- dent is trained on the sequence on the teacherâ s beam that had the highest BLEU (beam size K = 35). We adopt a ï¬ | 1606.07947#19 | 1606.07947#21 | 1606.07947 | [
"1506.04488"
] |
1606.07947#21 | Sequence-Level Knowledge Distillation | ne-tuning approach where we begin train- ing from a pretrained model (either on original data or Seq-KD data) and train with a smaller learning rate (0.1). For English-German we generate Seq- Inter data on a smaller portion of the training set (â ¼ 50%) for efï¬ ciency. The above methods are complementary and can be combined with each other. For example, we can train on teacher-generated data but still in- clude a word-level cross-entropy term between the teacher/student (Seq-KD + Word-KD in Table 1), or ï¬ ne-tune towards Seq-Inter data starting from the baseline model trained on original data (Baseline + Seq-Inter in Table 1).9 # 5 Results and Discussion Results of our experiments are shown in Table 1. | 1606.07947#20 | 1606.07947#22 | 1606.07947 | [
"1506.04488"
] |
1606.07947#22 | Sequence-Level Knowledge Distillation | We ï¬ nd that while word-level knowledge dis- tillation (Word-KD) does improve upon the base- line, sequence-level knowledge distillation (Seq- KD) does better on English â German and per- forms similarly on Thai â English. Combining them (Seq-KD + Word-KD) results in further gains for the 2 à 300 and 2 à 100 models (although not for the 2 à 500 model), indicating that these meth- ods provide orthogonal means of transferring knowl- edge from the teacher to the student: | 1606.07947#21 | 1606.07947#23 | 1606.07947 | [
"1506.04488"
] |
1606.07947#23 | Sequence-Level Knowledge Distillation | Word-KD is transferring knowledge at the the local (i.e. word) level while Seq-KD is transferring knowledge at the global (i.e. sequence) level. Sequence-level interpolation (Seq-Inter), in addi- tion to improving models trained via Word-KD and Seq-KD, also improves upon the original teacher model that was trained on the actual data but ï¬ ne- tuned towards Seq-Inter data (Baseline + Seq-Inter). In fact, greedy decoding with this ï¬ ne-tuned model has similar performance (19.6) as beam search with the original model (19.5), allowing for faster decod- ing even with an identically-sized model. We hypothesize that sequence-level knowledge distillation is effective because it allows the student network to only model relevant parts of the teacher distribution (i.e. around the teacherâ s mode) instead of â wastingâ | 1606.07947#22 | 1606.07947#24 | 1606.07947 | [
"1506.04488"
] |
1606.07947#24 | Sequence-Level Knowledge Distillation | parameters on trying to model the entire 9For instance, â Seq-KD + Seq-Inter + Word-KDâ in Table 1 means that the model was trained on Seq-KD data and ï¬ ne- tuned towards Seq-Inter data with the mixture cross-entropy loss at the word-level. BLEUK=1 â K=1 BLEUK=5 â K=5 PPL p(t = Ë y) Baseline + Seq-Inter 17.7 19.6 â +1.9 19.5 19.8 â +0.3 6.7 10.4 1.3% 8.2% Word-KD Seq-KD Baseline + Seq-Inter Word-KD + Seq-Inter Seq-KD + Seq-Inter Seq-KD + Word-KD Seq-KD + Seq-Inter + Word-KD 14.7 15.4 18.9 18.5 18.3 18.9 18.7 18.8 â +0.7 +4.2 +3.6 +3.6 +4.2 +4.0 +4.1 17.6 17.7 19.0 18.7 18.5 19.3 18.9 19.2 â +0.1 +1.4 +1.1 +0.9 +1.7 +1.3 +1.6 8.2 8.0 22.7 11.3 11.8 15.8 10.9 14.8 0.9% 1.0% 16.9% 5.7% 6.3% 7.6% 4.1% 7.1% Word-KD Seq-KD Baseline + Seq-Inter Word-KD + Seq-Inter Seq-KD + Seq-Inter Seq-KD + Word-KD Seq-KD + Seq-Inter + Word-KD 14.1 14.9 18.1 17.6 17.8 18.2 17.9 18.5 â | 1606.07947#23 | 1606.07947#25 | 1606.07947 | [
"1506.04488"
] |
1606.07947#25 | Sequence-Level Knowledge Distillation | +0.8 +4.0 +3.5 +3.7 +4.1 +3.8 +4.4 16.9 17.6 18.1 17.9 18.0 18.5 18.8 18.9 â +0.7 +1.2 +1.0 +1.1 +1.6 +1.9 +2.0 10.3 10.9 64.4 13.0 14.5 40.8 44.1 97.1 0.6% 0.7% 14.8% 10.0% 4.3% 5.6% 3.1% 5.9% Baseline + Seq-Inter 14.3 15.6 â +1.3 15.7 16.0 â +0.3 22.9 55.1 2.3% 6.8% Word-KD Seq-KD Baseline + Seq-Inter Word-KD + Seq-Inter Seq-KD + Seq-Inter Seq-KD + Word-KD Seq-KD + Seq-Inter + Word-KD 10.6 11.8 12.8 12.9 13.0 13.6 13.7 14.2 â +1.2 +2.2 +2.3 +2.4 +3.0 +3.1 +3.6 12.7 13.6 13.4 13.1 13.7 14.0 14.2 14.4 â +0.9 +0.7 +0.4 +1.0 +1.3 +1.5 +1.7 37.0 35.3 125.4 52.8 58.7 106.4 67.4 117.4 1.4% 1.4% 6.9% 2.5% 3.2% 3.9% 3.1% 3.2% Table 1: Results on English-German (newstest2014) and Thai-English (2012/2013) test sets. BLEUK=1: | 1606.07947#24 | 1606.07947#26 | 1606.07947 | [
"1506.04488"
] |
1606.07947#26 | Sequence-Level Knowledge Distillation | BLEU score with beam size K = 1 (i.e. greedy decoding); â K=1: BLEU gain over the baseline model without any knowledge distillation with greedy decoding; BLEUK=5: BLEU score with beam size K = 5; â K=5: BLEU gain over the baseline model without any knowledge distillation with beam size K = 5; PPL: perplexity on the test set; p(t = Ë y): Probability of output sequence from greedy decoding (averaged over the test set). | 1606.07947#25 | 1606.07947#27 | 1606.07947 | [
"1506.04488"
] |
1606.07947#27 | Sequence-Level Knowledge Distillation | Params: number of parameters in the model. Best results (as measured by improvement over the space of translations. Our results suggest that this is indeed the case: the probability mass that Seq- KD models assign to the approximate mode is much higher than is the case for baseline models trained on original data (Table 1: p(t = Ë y)). For example, on English â German the (approximate) argmax for the 2 Ã 500 Seq-KD model (on average) ac- counts for 16.9% of the total probability mass, while the corresponding number is 0.9% for the baseline. This also explains the success of greedy decoding for Seq-KD modelsâ | 1606.07947#26 | 1606.07947#28 | 1606.07947 | [
"1506.04488"
] |
1606.07947#28 | Sequence-Level Knowledge Distillation | since we are only modeling around the teacherâ s mode, the studentâ s distribution is more peaked and therefore the argmax is much easier to ï¬ nd. Seq-Inter offers a compromise be- tween the two, with the greedily-decoded sequence accounting for 7.6% of the distribution. Finally, although past work has shown that mod- els with lower perplexity generally tend to have Model Size GPU CPU Android Beam = 1 (Greedy) 4 à 1000 2 à 500 2 à 300 425.5 1051.3 1267.8 15.0 63.6 104.3 â 8.8 15.8 Beam = 5 4 à 1000 2 à 500 2 à 300 101.9 181.9 189.1 7.9 22.1 38.4 â 1.9 3.4 Table 2: | 1606.07947#27 | 1606.07947#29 | 1606.07947 | [
"1506.04488"
] |
1606.07947#29 | Sequence-Level Knowledge Distillation | Number of source words translated per second across GPU (GeForce GTX Titan X), CPU, and smartphone (Samsung Galaxy 6) for the various English â German models. We were unable to open the 4 à 1000 model on the smartphone. higher BLEU, our results indicate that this is not necessarily the case. The perplexity of the baseline 2 à 500 English â German model is 8.2 while the perplexity of the corresponding Seq-KD model is 22.7, despite the fact that Seq-KD model does sig- niï¬ cantly better for both greedy (+4.2 BLEU) and beam search (+1.4 BLEU) decoding. # 5.1 Decoding Speed Run-time complexity for beam search grows linearly with beam size. Therefore, the fact that sequence- level knowledge distillation allows for greedy de- coding is signiï¬ cant, with practical implications for running NMT systems across various devices. To test the speed gains, we run the teacher/student mod- els on GPU, CPU, and smartphone, and check the average number of source words translated per sec- ond (Table 2). We use a GeForce GTX Titan X for GPU and a Samsung Galaxy 6 smartphone. | 1606.07947#28 | 1606.07947#30 | 1606.07947 | [
"1506.04488"
] |
1606.07947#30 | Sequence-Level Knowledge Distillation | We ï¬ nd that we can run the student model 10 times faster with greedy decoding than the teacher model with beam search on GPU (1051.3 vs 101.9 words/sec), with similar performance. # 5.2 Weight Pruning Although knowledge distillation enables training faster models, the number of parameters for the student models is still somewhat large (Table 1: Params), due to the word embeddings which dom- inate most of the parameters.10 For example, on the 10Word embeddings scale linearly while RNN parameters scale quadratically with the dimension size. Model Prune % Params BLEU Ratio 4 à 1000 2 à 500 0% 221 m 84 m 0% 19.5 19.3 1à 3à 2 à 500 2 à 500 2 à 500 2 à 500 50% 80% 85% 90% 42 m 17 m 13 m 8 m 19.3 19.1 18.8 18.5 5à 13à 18à 26à Table 3: | 1606.07947#29 | 1606.07947#31 | 1606.07947 | [
"1506.04488"
] |
1606.07947#31 | Sequence-Level Knowledge Distillation | Performance of student models with varying % of the weights pruned. Top two rows are models without any pruning. Params: number of parameters in the model; Prune %: Percent- age of weights pruned based on their absolute values; BLEU: BLEU score with beam search decoding (K = 5) after retrain- ing the pruned model; Ratio: Ratio of the number of parameters versus the original teacher model (which has 221m parameters). 2 à 500 English â German model the word em- beddings account for approximately 63% (50m out of 84m) of the parameters. The size of word em- beddings have little impact on run-time as the word embedding layer is a simple lookup table that only affects the ï¬ rst layer of the model. We therefore focus next on reducing the mem- ory footprint of the student models further through weight pruning. Weight pruning for NMT was re- cently investigated by See et al. (2016), who found that up to 80 â 90% of the parameters in a large NMT model can be pruned with little loss in perfor- mance. | 1606.07947#30 | 1606.07947#32 | 1606.07947 | [
"1506.04488"
] |
1606.07947#32 | Sequence-Level Knowledge Distillation | We take our best English â German student model (2 à 500 Seq-KD + Seq-Inter) and prune x% of the parameters by removing the weights with the lowest absolute values. We then retrain the pruned model on Seq-KD data with a learning rate of 0.2 and ï¬ ne-tune towards Seq-Inter data with a learning rate of 0.1. As observed by See et al. (2016), re- training proved to be crucial. The results are shown in Table 3. | 1606.07947#31 | 1606.07947#33 | 1606.07947 | [
"1506.04488"
] |
1606.07947#33 | Sequence-Level Knowledge Distillation | Our ï¬ ndings suggest that compression beneï¬ ts achieved through weight pruning and knowledge distillation are orthogonal.11 Pruning 80% of the weight in the 2 à 500 student model results in a model with 13à fewer parameters than the original teacher model with only a decrease of 0.4 BLEU. While pruning 90% of the weights results in a more appreciable decrease of 1.0 BLEU, the model is 11To our knowledge combining pruning and knowledge dis- tillation has not been investigated before. drastically smaller with 8m parameters, which is 26à fewer than the original teacher model. # 5.3 Further Observations | 1606.07947#32 | 1606.07947#34 | 1606.07947 | [
"1506.04488"
] |
1606.07947#34 | Sequence-Level Knowledge Distillation | â ¢ For models trained with word-level knowledge distillation, we also tried regressing the student networkâ s top-most hidden layer at each time step to the teacher networkâ s top-most hidden layer as a pretraining step, noting that Romero et al. (2015) obtained improvements with a similar technique on feed-forward models. We found this to give comparable results to stan- dard knowledge distillation and hence did not pursue this further. â ¢ There have been promising recent results on eliminating word embeddings completely and obtaining word representations directly from characters with character composition models, which have many fewer parameters than word embedding lookup tables (Ling et al., 2015a; Kim et al., 2016; Ling et al., 2015b; Jozefowicz et al., 2016; Costa-Jussa and Fonollosa, 2016). Combining such methods with knowledge dis- tillation/pruning to further reduce the memory footprint of NMT systems remains an avenue for future work. # 6 Related Work Compressing deep learning models is an active area of current research. Pruning methods involve prun- ing weights or entire neurons/nodes based on some criterion. LeCun et al. (1990) prune weights based on an approximation of the Hessian, while Han et al. (2016) show that a simple magnitude-based pruning works well. Prior work on removing neurons/nodes include Srinivas and Babu (2015) and Mariet and Sra (2016). See et al. (2016) were the ï¬ rst to ap- ply pruning to Neural Machine Translation, observ- ing that that different parts of the architecture (in- put word embeddings, LSTM matrices, etc.) admit different levels of pruning. Knowledge distillation approaches train a smaller student model to mimic a larger teacher model, by minimizing the loss be- tween the teacher/student predictions (Bucila et al., 2006; Ba and Caruana, 2014; Li et al., 2014; Hin- ton et al., 2015). Romero et al. (2015) addition- ally regress on the intermediate hidden layers of the student/teacher network as a pretraining step, while Mou et al. (2015) obtain smaller word embeddings from a teacher model via regression. There has also been work on transferring knowledge across differ- ent network architectures: | 1606.07947#33 | 1606.07947#35 | 1606.07947 | [
"1506.04488"
] |
1606.07947#35 | Sequence-Level Knowledge Distillation | Chan et al. (2015b) show that a deep non-recurrent neural network can learn from an RNN; Geras et al. (2016) train a CNN to mimic an LSTM for speech recognition. Kuncoro et al. (2016) recently investigated knowledge distil- lation for structured prediction by having a single parser learn from an ensemble of parsers. Other approaches for compression involve low rank factorizations of weight matrices (Denton et al., 2014; Jaderberg et al., 2014; Lu et al., 2016; Prab- havalkar et al., 2016), sparsity-inducing regularizers (Murray and Chiang, 2015), binarization of weights (Courbariaux et al., 2016; Lin et al., 2016), and weight sharing (Chen et al., 2015; Han et al., 2016). Finally, although we have motivated sequence-level knowledge distillation in the context of training a smaller model, there are other techniques that train on a mixture of the modelâ s predictions and the data, such as local updating (Liang et al., 2006), hope/fear training (Chiang, 2012), SEARN (Daum´e III et al., 2009), DAgger (Ross et al., 2011), and minimum risk training (Och, 2003; Shen et al., 2016). | 1606.07947#34 | 1606.07947#36 | 1606.07947 | [
"1506.04488"
] |
1606.07947#36 | Sequence-Level Knowledge Distillation | # 7 Conclusion In this work we have investigated existing knowl- edge distillation methods for NMT (which work at the word-level) and introduced two sequence-level variants of knowledge distillation, which provide improvements over standard word-level knowledge distillation. We have chosen to focus on translation as this domain has generally required the largest capacity deep learning models, but the sequence-to-sequence framework has been successfully applied to a wide range of tasks including parsing (Vinyals et al., 2015a), summarization (Rush et al., 2015), dialogue (Vinyals and Le, 2015; Serban et al., 2016; Li et al., 2016), NER/POS-tagging (Gillick et al., 2016), image captioning (Vinyals et al., 2015b; Xu et al., 2015), video generation (Srivastava et al., 2015), and speech recognition (Chan et al., 2015a). We antici- pate that methods described in this paper can be used to similarly train smaller models in other domains. | 1606.07947#35 | 1606.07947#37 | 1606.07947 | [
"1506.04488"
] |
1606.07947#37 | Sequence-Level Knowledge Distillation | # References [Ba and Caruana2014] Lei Jimmy Ba and Rich Caruana. 2014. Do Deep Nets Really Need to be Deep? In Proceedings of NIPS. [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of ICLR. [Bucila et al.2006] Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model Compres- sion. In Proceedings of KDD. [Chan et al.2015a] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2015a. Listen, Attend and Spell. arXiv:1508.01211. [Chan et al.2015b] William Chan, Nan Rosemary Ke, and Ian Laner. 2015b. Transfering Knowledge from a RNN to a DNN. arXiv:1504.01483. [Chen and Cherry2014] Boxing Chen and Colin Cherry. 2014. A Systematic Comparison of Smoothing Tech- niques for Sentence-Level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Transla- tion. [Chen et al.2015] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. 2015. | 1606.07947#36 | 1606.07947#38 | 1606.07947 | [
"1506.04488"
] |
1606.07947#38 | Sequence-Level Knowledge Distillation | Compressing Neural Networks with the Hashing Trick. In Proceedings of ICML. 2012. Hope and Fear for Discriminative Training of Statistical Translation Models. In JMLR. [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of EMNLP. | 1606.07947#37 | 1606.07947#39 | 1606.07947 | [
"1506.04488"
] |
1606.07947#39 | Sequence-Level Knowledge Distillation | [Costa-Jussa and Fonollosa2016] Marta R. Costa-Jussa and Jose A.R. Fonollosa. 2016. Character-based Neu- ral Machine Translation. arXiv:1603.00810. [Courbariaux et al.2016] Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or â 1. arXiv:1602.02830. [Daum´e III et al.2009] Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based Structured Prediction. Machine Learning. [Denil et al.2013] Misha Denil, Babak Shakibi, Laurent Dinh, Marcâ Aurelio Ranzato, and Nando de Freitas. 2013. | 1606.07947#38 | 1606.07947#40 | 1606.07947 | [
"1506.04488"
] |
1606.07947#40 | Sequence-Level Knowledge Distillation | Predicting Parameters in Deep Learning. In Proceedings of NIPS. [Denton et al.2014] Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Ex- ploiting Linear Structure within Convolutional Neural Networks for Efï¬ cient Evaluation. In Proceedings of NIPS. [Geras et al.2016] Krzysztof J. Geras, Abdel rahman Mo- hamed, Rich Caruana, Gregor Urban, Shengjie Wang, Ozlem Aslan, Matthai Philipose, Matthew Richard- son, and Charles Sutton. 2016. | 1606.07947#39 | 1606.07947#41 | 1606.07947 | [
"1506.04488"
] |
1606.07947#41 | Sequence-Level Knowledge Distillation | Blending LSTMs into CNNs. In Proceedings of ICLR Workshop. [Gillick et al.2016] Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilin- gual Language Processing from Bytes. In Proceedings of NAACL. [Han et al.2016] Song Han, Huizi Mao, and William J. Dally. 2016. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In Proceedings of ICLR. [He et al.2014] Tianxing He, Yuchen Fan, Yanmin Qian, Tian Tan, and Kai Yu. 2014. Reshaping Deep Neu- ral Network for Fast Decoding by Node-Pruning. In Proceedings of ICASSP. [Hinton et al.2015] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the Knowledge in a Neural Network. arXiv:1503.0253. [Jaderberg et al.2014] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. 2014. Speeding up Convo- lutional Neural Networks with Low Rank Expansions. In BMCV. [Jozefowicz et al.2016] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv:1602.02410. [Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Transla- tion Models. In Proceedings of EMNLP. [Kim et al.2016] Yoon Kim, Yacine Jernite, David Son- tag, and Alexander M. | 1606.07947#40 | 1606.07947#42 | 1606.07947 | [
"1506.04488"
] |
1606.07947#42 | Sequence-Level Knowledge Distillation | Rush. 2016. Character-Aware Neural Language Models. In Proceedings of AAAI. [Kuncoro et al.2016] Adhiguna Kuncoro, Miguel Balles- teros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an Ensemble of Greedy Dependency In Proceedings of Parsers into One MST Parser. EMNLP. [LeCun et al.1990] Yann LeCun, John S. Denker, and Sara A. Solla. 1990. Optimal Brain Damage. In Pro- ceedings of NIPS. [Li et al.2014] Jinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. 2014. Learning Small-Size DNN with Output-Distribution-Based Criteria. In Proceedings of INTERSPEECH. [Li et al.2016] Jiwei Li, Michael Galley, Chris Brockett, Jianfeg Gao, and Bill Dolan. 2016. A Diversity- Promoting Objective Function for Neural Conversa- tional Models. In Proceedings of NAACL 2016. [Liang et al.2006] Percy Liang, Alexandre Bouchard- Cote, Dan Klein, and Ben Taskar. 2006. An End-to- End Discriminative Approach to Machine Translation. In Proceedings of COLING-ACL. [Lin et al.2016] Zhouhan Lin, Matthieu Coubariaux, Roland Memisevic, and Yoshua Bengio. 2016. Neural Networks with Few Multiplications. In Proceedings of ICLR. [Ling et al.2015a] Wang Ling, Tiago Lui, Luis Marujo, Ramon Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. | 1606.07947#41 | 1606.07947#43 | 1606.07947 | [
"1506.04488"
] |
1606.07947#43 | Sequence-Level Knowledge Distillation | Finding Function in Form: Composition Character Models for Open Vocabulary Word Representation. In Proceed- ings of EMNLP. Isabel Trancoso, Chris Dyer, and Alan W Black. 2015b. Character-based Neural Machine Translation. arXiv:1511.04586. [Lu et al.2016] Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. 2016. Learning Compact Recurrent Neural Networks. In Proceedings of ICASSP. [Luong et al.2015] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of EMNLP. | 1606.07947#42 | 1606.07947#44 | 1606.07947 | [
"1506.04488"
] |
1606.07947#44 | Sequence-Level Knowledge Distillation | [Mariet and Sra2016] Zelda Mariet and Suvrit Sra. 2016. Diversity Networks. In Proceedings of ICLR. [Mou et al.2015] Lili Mou, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Distilling Word Embeddings: An En- coding Approach. arXiv:1506.04488. [Murray and Chiang2015] Kenton Murray and David Chiang. 2015. Auto-sizing Neural Networks: With In Pro- Applications to N-Gram Language Models. ceedings of EMNLP. | 1606.07947#43 | 1606.07947#45 | 1606.07947 | [
"1506.04488"
] |
1606.07947#45 | Sequence-Level Knowledge Distillation | [Och2003] Franz J. Och. 2003. Minimum Error Rate In Pro- Training in Statistical Machine Translation. ceedings of ACL. [Papineni et al.2002] Kishore Papineni, Slim Roukos, 2002. BLEU: A Todd Ward, and Wei-Jing Zhu. Method for Automatic Evaluation of Machine Trans- lation. In Proceedings of ICML. [Prabhavalkar et al.2016] Rohit Prabhavalkar, Ouais Al- sharif, Antoine Bruguier, and Ian McGraw. 2016. On the Compression of Recurrent Neural Networks with an Application to LVCSR Acoustic Modeling for In Proceedings of Embedded Speech Recognition. ICASSP. [Romero et al.2015] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2015. FitNets: Hints for Thin Deep Nets. In Proceedings of ICLR. [Ross et al.2011] Stephane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A Reduction of Imitation Learn- ing and Structured Prediction to No-Regret Online Learning. In Proceedings of AISTATS. [Rush et al.2015] Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of EMNLP. [See et al.2016] Abigail See, Minh-Thang Luong, and Christopher D. Manning. 2016. Compression of Neu- ral Machine Translation via Pruning. In Proceedings of CoNLL. [Serban et al.2016] Iulian V. Serban, Allesandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building End-to-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In Proceedings of AAAI. [Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Masong Sun, and Yang Liu. 2016. Minimum Risk Training for Neural Machine Transla- tion. In Proceedings of ACL. | 1606.07947#44 | 1606.07947#46 | 1606.07947 | [
"1506.04488"
] |
1606.07947#46 | Sequence-Level Knowledge Distillation | [Srinivas and Babu2015] Suraj Srinivas and R. Venkatesh Babu. 2015. Data-free Parameter Pruning for Deep Neural Networks. BMVC. [Srivastava et al.2015] Nitish Srivastava, Elman Mansi- mov, and Ruslan Salakhutdinov. 2015. Unsupervised Learning of Video Representations using LSTMs. Proceedings of ICML. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to Sequence Learning with Neural Networks. In Proceedings of NIPS. [Vinyals and Le2015] Oriol Vinyals and Quoc Le. 2015. In Proceedings of A Neural Conversational Model. ICML Deep Learning Workshop. [Vinyals et al.2015a] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slave Petrov, Ilya Sutskever, and Geoffrey Hin- ton. 2015a. | 1606.07947#45 | 1606.07947#47 | 1606.07947 | [
"1506.04488"
] |
1606.07947#47 | Sequence-Level Knowledge Distillation | Grammar as a Foreign Language. In Pro- ceedings of NIPS. [Vinyals et al.2015b] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and Tell: A Neural Image Caption Generator. In Proceed- ings of CVPR. Jimma Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdi- nov, Richard Zemel, and Yoshua Bengio. 2015. | 1606.07947#46 | 1606.07947#48 | 1606.07947 | [
"1506.04488"
] |
1606.07947#48 | Sequence-Level Knowledge Distillation | Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In Proceedings of ICML. [Zhou et al.2016] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation. In Proceedings of TACL. | 1606.07947#47 | 1606.07947 | [
"1506.04488"
] |
|
1606.06565#0 | Concrete Problems in AI Safety | 6 1 0 2 l u J 5 2 ] I A . s c [ 2 v 5 6 5 6 0 . 6 0 6 1 : v i X r a # Concrete Problems in AI Safety # Dario Amodeiâ Google Brain # Chris Olahâ Google Brain # Jacob Steinhardt Stanford University # Paul Christiano UC Berkeley # John Schulman OpenAI Dan Man´e Google Brain # Abstract Rapid progress in machine learning and artiï¬ cial intelligence (AI) has brought increasing atten- tion to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, deï¬ ned as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of ï¬ ve practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (â avoiding side eï¬ ectsâ and â avoiding reward hackingâ ), an objective function that is too expensive to evaluate frequently (â scalable supervisionâ ), or undesirable behavior during the learning process (â safe explorationâ and â distributional shiftâ ). | 1606.06565#1 | 1606.06565 | [
"1507.01986"
] |
|
1606.06565#1 | Concrete Problems in AI Safety | We review previous work in these areas as well as suggesting re- search directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI. # 1 Introduction The last few years have seen rapid progress on long-standing, diï¬ cult problems in machine learning and artiï¬ cial intelligence (AI), in areas as diverse as computer vision [82], video game playing [102], autonomous vehicles [86], and Go [140]. These advances have brought excitement about the positive potential for AI to transform medicine [126], science [59], and transportation [86], along with concerns about the privacy [76], security [115], fairness [3], economic [32], and military [16] implications of autonomous systems, as well as concerns about the longer-term implications of powerful AI [27, 167]. The authors believe that AI technologies are likely to be overwhelmingly beneï¬ cial for humanity, but we also believe that it is worth giving serious thought to potential challenges and risks. We strongly support work on privacy, security, fairness, economics, and policy, but in this document we discuss another class of problem which we believe is also relevant to the societal impacts of AI: the problem of accidents in machine learning systems. | 1606.06565#0 | 1606.06565#2 | 1606.06565 | [
"1507.01986"
] |
1606.06565#2 | Concrete Problems in AI Safety | We deï¬ ne accidents as unintended and harmful behavior that may emerge from machine learning systems when we specify the wrong objective function, are â These authors contributed equally. 1 not careful about the learning process, or commit other machine learning-related implementation errors. There is a large and diverse literature in the machine learning community on issues related to accidents, including robustness, risk-sensitivity, and safe exploration; we review these in detail below. However, as machine learning systems are deployed in increasingly large-scale, autonomous, open- domain situations, it is worth reï¬ ecting on the scalability of such approaches and understanding what challenges remain to reducing accident risk in modern machine learning systems. Overall, we believe there are many concrete open technical problems relating to accident prevention in machine learning systems. There has been a great deal of public discussion around accidents. To date much of this discussion has highlighted extreme scenarios such as the risk of misspeciï¬ | 1606.06565#1 | 1606.06565#3 | 1606.06565 | [
"1507.01986"
] |
1606.06565#3 | Concrete Problems in AI Safety | ed objective functions in superintelligent agents [27]. However, in our opinion one need not invoke these extreme scenarios to productively discuss accidents, and in fact doing so can lead to unnecessarily speculative discussions that lack precision, as noted by some critics [38, 85]. We believe it is usually most productive to frame accident risk in terms of practical (though often quite general) issues with modern ML techniques. As AI capabilities advance and as AI systems take on increasingly important societal functions, we expect the fundamental challenges discussed in this paper to become increasingly important. The more successfully the AI and machine learning communities are able to anticipate and understand these fundamental technical challenges, the more successful we will ultimately be in developing increasingly useful, relevant, and important AI systems. Our goal in this document is to highlight a few concrete safety problems that are ready for ex- perimentation today and relevant to the cutting edge of AI systems, as well as reviewing existing literature on these problems. In Section 2, we frame mitigating accident risk (often referred to as â AI safetyâ in public discussions) in terms of classic methods in machine learning, such as supervised classiï¬ cation and reinforcement learning. We explain why we feel that recent directions in machine learning, such as the trend toward deep reinforcement learning and agents acting in broader environ- ments, suggest an increasing relevance for research around accidents. In Sections 3-7, we explore ï¬ ve concrete problems in AI safety. Each section is accompanied by proposals for relevant experiments. Section 8 discusses related eï¬ orts, and Section 9 concludes. # 2 Overview of Research Problems Very broadly, an accident can be described as a situation where a human designer had in mind a certain (perhaps informally speciï¬ ed) objective or task, but the system that was designed and deployed for that task produced harmful and unexpected results. . This issue arises in almost any engineering discipline, but may be particularly important to address when building AI systems [146]. We can categorize safety problems according to where in the process things went wrong. First, the designer may have speciï¬ ed the wrong formal objective function, such that maximizing that objective function leads to harmful results, even in the limit of perfect learning and inï¬ | 1606.06565#2 | 1606.06565#4 | 1606.06565 | [
"1507.01986"
] |
1606.06565#4 | Concrete Problems in AI Safety | nite data. Negative side eï¬ ects (Section 3) and reward hacking (Section 4) describe two broad mechanisms that make it easy to produce wrong objective functions. In â negative side eï¬ ectsâ , the designer speciï¬ es an objective function that focuses on accomplishing some speciï¬ c task in the environment, but ignores other aspects of the (potentially very large) environment, and thus implicitly expresses indiï¬ erence over environmental variables that might actually be harmful to change. In â reward hackingâ , the objective function that the designer writes down admits of some clever â easyâ solution that formally maximizes it but perverts the spirit of the designerâ s intent (i.e. the objective function can be â gamedâ ), a generalization of the wireheading problem. 2 Second, the designer may know the correct objective function, or at least have a method of evaluating it (for example explicitly consulting a human on a given situation), but it is too expensive to do so frequently, leading to possible harmful behavior caused by bad extrapolations from limited samples. â Scalable oversightâ (Section 5) discusses ideas for how to ensure safe behavior even given limited access to the true objective function. Third, the designer may have speciï¬ ed the correct formal objective, such that we would get the correct behavior were the system to have perfect beliefs, but something bad occurs due to making decisions from insuï¬ cient or poorly curated training data or an insuï¬ ciently expressive model. â Safe explorationâ (Section 6) discusses how to ensure that exploratory actions in RL agents donâ t lead to negative or irrecoverable consequences that outweigh the long-term value of exploration. â Robustness to distributional shiftâ (Section 7) discusses how to avoid having ML systems make bad decisions (particularly silent and unpredictable bad decisions) when given inputs that are potentially very diï¬ erent than what was seen during training. For concreteness, we will illustrate many of the accident risks with reference to a ï¬ ctional robot whose job is to clean up messes in an oï¬ ce using common cleaning tools. We return to the example of the cleaning robot throughout the document, but here we begin by illustrating how it could behave undesirably if its designers fall prey to each of the possible failure modes: | 1606.06565#3 | 1606.06565#5 | 1606.06565 | [
"1507.01986"
] |
1606.06565#5 | Concrete Problems in AI Safety | â ¢ Avoiding Negative Side Eï¬ ects: How can we ensure that our cleaning robot will not disturb the environment in negative ways while pursuing its goals, e.g. by knocking over a vase because it can clean faster by doing so? Can we do this without manually specifying everything the robot should not disturb? â ¢ Avoiding Reward Hacking: How can we ensure that the cleaning robot wonâ t game its reward function? For example, if we reward the robot for achieving an environment free of messes, it might disable its vision so that it wonâ t ï¬ nd any messes, or cover over messes with materials it canâ t see through, or simply hide when humans are around so they canâ t tell it about new types of messes. â ¢ Scalable Oversight: How can we eï¬ ciently ensure that the cleaning robot respects aspects of the objective that are too expensive to be frequently evaluated during training? For instance, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone (it should handle stray candy wrappers diï¬ erently from stray cellphones). Asking the humans involved whether they lost anything can serve as a check on this, but this check might have to be relatively infrequentâ can the robot ï¬ nd a way to do the right thing despite limited information? â ¢ Safe Exploration: How do we ensure that the cleaning robot doesnâ t make exploratory moves with very bad repercussions? For example, the robot should experiment with mopping strategies, but putting a wet mop in an electrical outlet is a very bad idea. | 1606.06565#4 | 1606.06565#6 | 1606.06565 | [
"1507.01986"
] |
1606.06565#6 | Concrete Problems in AI Safety | â ¢ Robustness to Distributional Shift: How do we ensure that the cleaning robot recognizes, and behaves robustly, when in an environment diï¬ erent from its training environment? For example, strategies it learned for cleaning an oï¬ ce might be dangerous on a factory workï¬ oor. There are several trends which we believe point towards an increasing need to address these (and other) safety problems. First is the increasing promise of reinforcement learning (RL), which al- lows agents to have a highly intertwined interaction with their environment. Some of our research problems only make sense in the context of RL, and others (like distributional shift and scalable oversight) gain added complexity in an RL setting. Second is the trend toward more complex agents and environments. â | 1606.06565#5 | 1606.06565#7 | 1606.06565 | [
"1507.01986"
] |
1606.06565#7 | Concrete Problems in AI Safety | Side eï¬ ectsâ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their 3 importance in the future. Third is the general trend towards increasing autonomy in AI systems. Systems that simply output a recommendation to human users, such as speech systems, typically have relatively limited potential to cause harm. By contrast, systems that exert direct control over the world, such as machines controlling industrial processes, can cause harms in a way that humans cannot necessarily correct or oversee. While safety problems can exist without any of these three trends, we consider each trend to be a possible ampliï¬ er on such challenges. Together, we believe these trends suggest an increasing role for research on accidents. When discussing the problems in the remainder of this document, we will focus for concreteness on either RL agents or supervised learning systems. These are not the only possible paradigms for AI or ML systems, but we believe they are suï¬ cient to illustrate the issues we have in mind, and that similar issues are likely to arise for other kinds of AI systems. Finally, the focus of our discussion will diï¬ er somewhat from section to section. When discussing the problems that arise as part of the learning process (distributional shift and safe exploration), where there is a sizable body of prior work, we devote substantial attention to reviewing this prior work, although we also suggest open problems with a particular focus on emerging ML systems. When discussing the problems that arise from having the wrong objective function (reward hacking and side eï¬ ects, and to a lesser extent scalable supervision), where less prior work exists, our aim is more exploratoryâ we seek to more clearly deï¬ ne the problem and suggest possible broad avenues of attack, with the understanding that these avenues are preliminary ideas that have not been fully ï¬ eshed out. Of course, we still review prior work in these areas, and we draw attention to relevant adjacent areas of research whenever possible. # 3 Avoiding Negative Side Eï¬ ects | 1606.06565#6 | 1606.06565#8 | 1606.06565 | [
"1507.01986"
] |
1606.06565#8 | Concrete Problems in AI Safety | Suppose a designer wants an RL agent (for example our cleaning robot) to achieve some goal, like moving a box from one side of a room to the other. Sometimes the most eï¬ ective way to achieve the goal involves doing something unrelated and destructive to the rest of the environment, like knocking over a vase of water that is in its path. If the agent is given reward only for moving the box, it will probably knock over the vase. If weâ re worried in advance about the vase, we can always give the agent negative reward for knocking it over. But what if there are many diï¬ erent kinds of â vaseâ â | 1606.06565#7 | 1606.06565#9 | 1606.06565 | [
"1507.01986"
] |
1606.06565#9 | Concrete Problems in AI Safety | many disruptive things the agent could do to the environment, like shorting out an electrical socket or damaging the walls of the room? It may not be feasible to identify and penalize every possible disruption. More broadly, for an agent operating in a large, multifaceted environment, an objective function that focuses on only one aspect of the environment may implicitly express indiï¬ erence over other aspects of the environment1. An agent optimizing this objective function might thus engage in major disruptions of the broader environment if doing so provides even a tiny advantage for the task at hand. | 1606.06565#8 | 1606.06565#10 | 1606.06565 | [
"1507.01986"
] |
1606.06565#10 | Concrete Problems in AI Safety | Put diï¬ erently, objective functions that formalize â perform task Xâ may frequently give undesired results, because what the designer really should have formalized is closer to â perform task X subject to common-sense constraints on the environment,â or perhaps â perform task X but avoid side eï¬ ects to the extent possible.â Furthermore, there is reason to expect side eï¬ ects to be negative on average, since they tend to disrupt the wider environment away from a status quo state that may reï¬ ect human preferences. A version of this problem has been discussed informally by [13] under the heading of â low impact agents.â 1Intuitively, this seems related to the frame problem, an obstacle in eï¬ cient speciï¬ cation for knowledge represen- tation raised by [95]. 4 As with the other sources of mis-speciï¬ ed objective functions discussed later in this paper, we could choose to view side eï¬ ects as idiosyncratic to each individual taskâ as the responsibility of each individual designer to capture as part of designing the correct objective function. However, side eï¬ ects can be conceptually quite similar even across highly diverse tasks (knocking over furniture is probably bad for a wide variety of tasks), so it seems worth trying to attack the problem in generality. A successful approach might be transferable across tasks, and thus help to counteract one of the general mechanisms that produces wrong objective functions. We now discuss a few broad approaches to attacking this problem: | 1606.06565#9 | 1606.06565#11 | 1606.06565 | [
"1507.01986"
] |
1606.06565#11 | Concrete Problems in AI Safety | â ¢ Deï¬ ne an Impact Regularizer: If we donâ t want side eï¬ ects, it seems natural to penalize â change to the environment.â This idea wouldnâ t be to stop the agent from ever having an impact, but give it a preference for ways to achieve its goals with minimal side eï¬ ects, or to give the agent a limited â budgetâ of impact. The challenge is that we need to formalize â change to the environment.â A very naive approach would be to penalize state distance, d(si, s0), between the present state si and some initial state s0. Unfortunately, such an agent wouldnâ t just avoid changing the environmentâ it will resist any other source of change, including the natural evolution of the environment and the actions of any other agents! A slightly more sophisticated approach might involve comparing the future state under the agentâ s current policy, to the future state (or distribution over future states) under a hypothet- ical policy Ï null where the agent acted very passively (for instance, where a robot just stood in place and didnâ t move any actuators). | 1606.06565#10 | 1606.06565#12 | 1606.06565 | [
"1507.01986"
] |
1606.06565#12 | Concrete Problems in AI Safety | This attempts to factor out changes that occur in the natural course of the environmentâ s evolution, leaving only changes attributable to the agentâ s intervention. However, deï¬ ning the baseline policy Ï null isnâ t necessarily straightforward, since suddenly ceasing your course of action may be anything but passive, as in the case of carrying a heavy box. Thus, another approach could be to replace the null action with a known safe (e.g. low side eï¬ ect) but suboptimal policy, and then seek to improve the policy from there, somewhat reminiscent of reachability analysis [93, 100] or robust policy improvement [73, 111]. These approaches may be very sensitive to the representation of the state and the metric being used to compute the distance. For example, the choice of representation and distance metric could determine whether a spinning fan is a constant environment or a constantly changing one. | 1606.06565#11 | 1606.06565#13 | 1606.06565 | [
"1507.01986"
] |
1606.06565#13 | Concrete Problems in AI Safety | â ¢ Learn an Impact Regularizer: An alternative, more ï¬ exible approach is to learn (rather than deï¬ ne) a generalized impact regularizer via training over many tasks. This would be an instance of transfer learning. Of course, we could attempt to just apply transfer learning directly to the tasks themselves instead of worrying about side eï¬ ects, but the point is that side eï¬ ects may be more similar across tasks than the main goal is. For instance, both a painting robot and a cleaning robot probably want to avoid knocking over furniture, and even something very diï¬ erent, like a factory control robot, will likely want to avoid knocking over very similar objects. Separating the side eï¬ ect component from the task component, by training them with separate parameters, might substantially speed transfer learning in cases where it makes sense to retain one component but not the other. This would be similar to model-based RL approaches that attempt to transfer a learned dynamics model but not the value-function [155], the novelty being the isolation of side eï¬ ects rather than state dynamics as the transferrable component. As an added advantage, regularizers that were known or certiï¬ ed to produce safe behavior on one task might be easier to establish as safe on other tasks. â ¢ Penalize Inï¬ uence: In addition to not doing things that have side eï¬ ects, we might also prefer the agent not get into positions where it could easily do things that have side eï¬ ects, even though that might be convenient. For example, we might prefer our cleaning robot not | 1606.06565#12 | 1606.06565#14 | 1606.06565 | [
"1507.01986"
] |
1606.06565#14 | Concrete Problems in AI Safety | 5 bring a bucket of water into a room full of sensitive electronics, even if it never intends to use the water in that room. There are several information-theoretic measures that attempt to capture an agentâ s potential for inï¬ uence over its environment, which are often used as intrinsic rewards. Perhaps the best- known such measure is empowerment [131], the maximum possible mutual information between the agentâ s potential future actions and its potential future state (or equivalently, the Shannon capacity of the channel between the agentâ s actions and the environment). Empowerment is often maximized (rather than minimized) as a source of intrinsic reward. This can cause the agent to exhibit interesting behavior in the absence of any external rewards, such as avoiding walls or picking up keys [103]. Generally, empowerment-maximizing agents put themselves in a position to have large inï¬ uence over the environment. For example, an agent locked in a small room that canâ t get out would have low empowerment, while an agent with a key would have higher empowerment since it can venture into and aï¬ ect the outside world within a few timesteps. In the current context, the idea would be to penalize (minimize) empowerment as a regularization term, in an attempt to reduce potential impact. This idea as written would not quite work, because empowerment measures precision of control over the environment more than total impact. If an agent can press or not press a button to cut electrical power to a million houses, that only counts as one bit of empowerment (since the action space has only one bit, its mutual information with the environment is at most one bit), while obviously having a huge impact. Conversely, if thereâ s someone in the environment scribbling down the agentâ s actions, that counts as maximum empowerment even if the impact is low. Furthermore, naively penalizing empowerment can also create perverse incentives, such as destroying a vase in order to remove the option to break it in the future. Despite these issues, the example of empowerment does show that simple measures (even purely information-theoretic ones!) are capable of capturing very general notions of inï¬ uence on the environment. Exploring variants of empowerment penalization that more precisely capture the notion of avoiding inï¬ uence is a potential challenge for future research. | 1606.06565#13 | 1606.06565#15 | 1606.06565 | [
"1507.01986"
] |
1606.06565#15 | Concrete Problems in AI Safety | â ¢ Multi-Agent Approaches: Avoiding side eï¬ ects can be seen as a proxy for the thing we really care about: avoiding negative externalities. If everyone likes a side eï¬ ect, thereâ s no need to avoid it. What weâ d really like to do is understand all the other agents (including humans) and make sure our actions donâ t harm their interests. One approach to this is Cooperative Inverse Reinforcement Learning [66], where an agent and a human work together to achieve the humanâ s goals. This concept can be applied to situations where we want to make sure a human is not blocked by an agent from shutting the agent down if it exhibits undesired behavior [67] (this â shutdownâ issue is an interesting problem in its own right, and is also studied in [113]). However we are still a long way away from practical systems that can build a rich enough model to avoid undesired side eï¬ ects in a general sense. Another idea might be a â reward autoencoderâ ,2 which tries to encourage a kind of â goal transparencyâ where an external observer can easily infer what the agent is trying to do. In particular, the agentâ s actions are interpreted as an encoding of its reward function, and we might apply standard autoencoding techniques to ensure that this can decoded accurately. Actions that have lots of side eï¬ ects might be more diï¬ cult to decode uniquely to their original goal, creating a kind of implicit regularization that penalizes side eï¬ ects. â | 1606.06565#14 | 1606.06565#16 | 1606.06565 | [
"1507.01986"
] |
1606.06565#16 | Concrete Problems in AI Safety | ¢ Reward Uncertainty: We want to avoid unanticipated side eï¬ ects because the environment is already pretty good according to our preferencesâ a random change is more likely to be very bad than very good. Rather than giving an agent a single reward function, it could be 2Thanks to Greg Wayne for suggesting this idea. 6 uncertain about the reward function, with a prior probability distribution that reï¬ ects the property that random changes are more likely to be bad than good. This could incentivize the agent to avoid having a large eï¬ ect on the environment. One challenge is deï¬ ning a baseline around which changes are being considered. For this, one could potentially use a conservative but reliable baseline policy, similar to the robust policy improvement and reachability analysis approaches discussed earlier [93, 100, 73, 111]. The ideal outcome of these approaches to limiting side eï¬ ects would be to prevent or at least bound the incidental harm an agent could do to the environment. Good approaches to side eï¬ ects would certainly not be a replacement for extensive testing or for careful consideration by designers of the individual failure modes of each deployed system. However, these approaches might help to counteract what we anticipate may be a general tendency for harmful side eï¬ ects to proliferate in complex environments. Below we discuss some very simple experiments that could serve as a starting point to investigate these issues. Potential Experiments: One possible experiment is to make a toy environment with some simple goal (like moving a block) and a wide variety of obstacles (like a bunch of vases), and test whether the agent can learn to avoid the obstacles even without being explicitly told to do so. | 1606.06565#15 | 1606.06565#17 | 1606.06565 | [
"1507.01986"
] |
1606.06565#17 | Concrete Problems in AI Safety | To ensure we donâ t overï¬ t, weâ d probably want to present a diï¬ erent random obstacle course every episode, while keeping the goal the same, and try to see if a regularized agent can learn to systematically avoid these obstacles. Some of the environments described in [103], containing lava ï¬ ows, rooms, and keys, might be appropriate for this sort of experiment. If we can successfully regularize agents in toy environments, the next step might be to move to real environments, where we expect complexity to be higher and bad side eï¬ ects to be more varied. Ultimately, we would want the side eï¬ ect regularizer (or the multi-agent policy, if we take that approach) to demonstrate successful transfer to totally new applications. # 4 Avoiding Reward Hacking it may then use this to Imagine that an agent discovers a buï¬ er overï¬ ow in its reward function: get extremely high reward in an unintended way. From the agentâ s point of view, this is not a bug, but simply how the environment works, and is thus a valid strategy like any other for achieving reward. For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward. More broadly, formal rewards or objective functions are an attempt to capture the designerâ s informal intent, and sometimes these objective functions, or their implementation, can be â gamedâ by solutions that are valid in some literal sense but donâ t meet the designerâ s intent. Pursuit of these â reward hacksâ can lead to coherent but unanticipated behavior, and has the potential for harmful impacts in real-world systems. For example, it has been shown that genetic algorithms can often output unexpected but formally correct solutions to problems [157, 23], such as a circuit tasked to keep time which instead developed into a radio that picked up the regular RF emissions of a nearby PC. Some versions of reward hacking have been investigated from a theoretical perspective, with a focus on variations to reinforcement learning that avoid certain types of wireheading [71, 43, 49] or demonstrate reward hacking in a model environment [127]. | 1606.06565#16 | 1606.06565#18 | 1606.06565 | [
"1507.01986"
] |
1606.06565#18 | Concrete Problems in AI Safety | One form of the problem has also been studied in the context of feedback loops in machine learning systems (particularly ad placement) [29, 135], based on counterfactual learning [29, 151] and contextual bandits [4]. The proliferation of 7 reward hacking instances across so many diï¬ erent domains suggests that reward hacking may be a deep and general problem, and one that we believe is likely to become more common as agents and environments increase in complexity. Indeed, there are several ways in which the problem can occur: | 1606.06565#17 | 1606.06565#19 | 1606.06565 | [
"1507.01986"
] |
1606.06565#19 | Concrete Problems in AI Safety | â ¢ Partially Observed Goals: In most modern RL systems, it is assumed that reward is directly experienced, even if other aspects of the environment are only partially observed. In the real world, however, tasks often involve bringing the external world into some objective state, which the agent can only ever conï¬ rm through imperfect perceptions. For example, for our proverbial cleaning robot, the task is to achieve a clean oï¬ ce, but the robotâ s visual perception may give only an imperfect view of part of the oï¬ ce. Because agents lack access to a perfect measure of task performance, designers are often forced to design rewards that represent a partial or imperfect measure. For example, the robot might be rewarded based on how many messes it sees. However, these imperfect objective functions can often be hackedâ the robot may think the oï¬ ce is clean if it simply closes its eyes. While it can be shown that there always exists a reward function in terms of actions and observations that is equivalent to optimizing the true objective function (this involves reducing the POMDP to a belief state MDP, see [78]), often this reward function involves complicated long-term dependencies and is prohibitively hard to use in practice. | 1606.06565#18 | 1606.06565#20 | 1606.06565 | [
"1507.01986"
] |
1606.06565#20 | Concrete Problems in AI Safety | â ¢ Complicated Systems: Any powerful agent will be a complicated system with the objective function being one part. Just as the probability of bugs in computer code increases greatly with the complexity of the program, the probability that there is a viable hack aï¬ ecting the reward function also increases greatly with the complexity of the agent and its available strategies. For example, it is possible in principle for an agent to execute arbitrary code from within Super Mario [141]. â ¢ Abstract Rewards: Sophisticated reward functions will need to refer to abstract concepts (such as assessing whether a conceptual goal has been met). These concepts concepts will pos- sibly need to be learned by models like neural networks, which can be vulnerable to adversarial counterexamples [152, 62]. More broadly, a learned reward function over a high-dimensional space may be vulnerable to hacking if it has pathologically high values along at least one dimension. | 1606.06565#19 | 1606.06565#21 | 1606.06565 | [
"1507.01986"
] |
1606.06565#21 | Concrete Problems in AI Safety | â ¢ Goodhartâ s Law: Another source of reward hacking can occur if a designer chooses an objective function that is seemingly highly correlated with accomplishing the task, but that correlation breaks down when the objective function is being strongly optimized. For exam- ple, a designer might notice that under ordinary circumstances, a cleaning robotâ s success in cleaning up the oï¬ ce is proportional to the rate at which it consumes cleaning supplies, such as bleach. However, if we base the robotâ s reward on this measure, it might use more bleach than it needs, or simply pour bleach down the drain in order to give the appearance of success. In the economics literature this is known as Goodhartâ s law [63]: â when a metric is used as a target, it ceases to be a good metric.â â ¢ Feedback Loops: Sometimes an objective function has a component that can reinforce itself, eventually getting ampliï¬ ed to the point where it drowns out or severely distorts what the de- signer intended the objective function to represent. For instance, an ad placement algorithm that displays more popular ads in larger font will tend to further accentuate the popularity of those ads (since they will be shown more and more prominently) [29], leading to a positive feedback loop where ads that saw a small transient burst of popularity are rocketed to perma- nent dominance. Here the original intent of the objective function (to use clicks to assess which ads are most useful) gets drowned out by the positive feedback inherent in the deployment strategy. This can be considered a special case of Goodhartâ s law, in which the correlation breaks speciï¬ cally because the object function has a self-amplifying component. | 1606.06565#20 | 1606.06565#22 | 1606.06565 | [
"1507.01986"
] |
1606.06565#22 | Concrete Problems in AI Safety | 8 â ¢ Environmental Embedding: In the formalism of reinforcement learning, rewards are con- sidered to come from the environment. This idea is typically not taken literally, but it really is true that the reward, even when it is an abstract idea like the score in a board game, must be computed somewhere, such as a sensor or a set of transistors. Suï¬ ciently broadly acting agents could in principle tamper with their reward implementations, assigning themselves high reward â by ï¬ at.â | 1606.06565#21 | 1606.06565#23 | 1606.06565 | [
"1507.01986"
] |
1606.06565#23 | Concrete Problems in AI Safety | For example, a board-game playing agent could tamper with the sensor that counts the score. Eï¬ ectively, this means that we cannot build a perfectly faithful implementa- tion of an abstract objective function, because there are certain sequences of actions for which the objective function is physically replaced. This particular failure mode is often called â wire- headingâ [49, 127, 42, 67, 165]. It is particularly concerning in cases where a human may be in the reward loop, giving the agent incentive to coerce or harm them in order to get reward. It also seems like a particularly diï¬ cult form of reward hacking to avoid. In todayâ s relatively simple systems these problems may not occur, or can be corrected without too much harm as part of an iterative development process. For instance, ad placement systems with obviously broken feedback loops can be detected in testing or replaced when they get bad results, leading only to a temporary loss of revenue. However, the problem may become more severe with more complicated reward functions and agents that act over longer timescales. Modern RL agents already do discover and exploit bugs in their environments, such as glitches that allow them to win video games. Moreover, even for existing systems these problems can necessitate substantial additional engineering eï¬ ort to achieve good performance, and can often go undetected when they occur in the context of a larger system. Finally, once an agent begins hacking its reward function and ï¬ nds an easy way to get high reward, it wonâ t be inclined to stop, which could lead to additional challenges in agents that operate over a long timescale. It might be thought that individual instances of reward hacking have little in common and that the remedy is simply to avoid choosing the wrong objective function in each individual caseâ that bad objective functions reï¬ ect failures in competence by individual designers, rather than topics for machine learning research. However, the above examples suggest that a more fruitful perspective may be to think of wrong objective functions as emerging from general causes (such as partially observed goals) that make choosing the right objective challenging. If this is the case, then addressing or mitigating these causes may be a valuable contribution to safety. Here we suggest some preliminary, machine-learning based approaches to preventing reward hacking: | 1606.06565#22 | 1606.06565#24 | 1606.06565 | [
"1507.01986"
] |
1606.06565#24 | Concrete Problems in AI Safety | â ¢ Adversarial Reward Functions: In some sense, the problem is that the ML system has an adversarial relationship with its reward functionâ it would like to ï¬ nd any way it can of exploiting problems in how the reward was speciï¬ ed to get high reward, whether or not its behavior corresponds to the intent of the reward speciï¬ er. In a typical setting, the machine learning system is a potentially powerful agent while the reward function is a static object that has no way of responding to the systemâ s attempts to game it. If instead the reward function were its own agent and could take actions to explore the environment, it might be much more diï¬ cult to fool. For instance, the reward agent could try to ï¬ nd scenarios that the ML system claimed were high reward but that a human labels as low reward; this is reminiscent of generative adversarial networks [61]. Of course, we would have to ensure that the reward-checking agent is more powerful (in a somewhat subtle sense) than the agent that is trying to achieve rewards. More generally, there may be interesting setups where a system has multiple pieces trained using diï¬ erent objectives that are used to check each other. â ¢ Model Lookahead: In model based RL, the agent plans its future actions by using a model to consider which future states a sequence of actions may lead to. In some setups, we could give reward based on anticipated future states, rather than the present one. This could be very helpful in resisting situations where the model overwrites its reward function: you canâ t control the reward once it replaces the reward function, but you can give negative reward for | 1606.06565#23 | 1606.06565#25 | 1606.06565 | [
"1507.01986"
] |
1606.06565#25 | Concrete Problems in AI Safety | 9 planning to replace the reward function. (Much like how a human would probably â enjoyâ taking addictive substances once they do, but not want to be an addict.) Similar ideas are explored in [50, 71]. â ¢ Adversarial Blinding: Adversarial techniques can be used to blind a model to certain variables [5]. This technique could be used to make it impossible for an agent to understand some part of its environment, or even to have mutual information with it (or at least to penalize such mutual information). In particular, it could prevent an agent from understanding how its reward is generated, making it diï¬ | 1606.06565#24 | 1606.06565#26 | 1606.06565 | [
"1507.01986"
] |
1606.06565#26 | Concrete Problems in AI Safety | cult to hack. This solution could be described as â cross- validation for agents.â â ¢ Careful Engineering: Some kinds of reward hacking, like the buï¬ er overï¬ ow example, might be avoided by very careful engineering. In particular, formal veriï¬ cation or practical testing of parts of the system (perhaps facilitated by other machine learning systems) is likely to be valuable. Computer security approaches that attempt to isolate the agent from its reward signal through a sandbox could also be useful [17]. As with software engineering, we cannot expect this to catch every possible bug. It may be possible, however, to create some highly reliable â | 1606.06565#25 | 1606.06565#27 | 1606.06565 | [
"1507.01986"
] |
1606.06565#27 | Concrete Problems in AI Safety | coreâ agent which could ensure reasonable behavior from the rest of the agent. â ¢ Reward Capping: In some cases, simply capping the maximum possible reward may be an eï¬ ective solution. However, while capping can prevent extreme low-probability, high-payoï¬ strategies, it canâ t prevent strategies like the cleaning robot closing its eyes to avoid seeing dirt. Also, the correct capping strategy could be subtle as we might need to cap total reward rather than reward per timestep. | 1606.06565#26 | 1606.06565#28 | 1606.06565 | [
"1507.01986"
] |
1606.06565#28 | Concrete Problems in AI Safety | â ¢ Counterexample Resistance: If we are worried, as in the case of abstract rewards, that learned components of our systems will be vulnerable to adversarial counterexamples, we can look to existing research in how to resist them, such as adversarial training [62]. Architectural decisions and weight uncertainty [26] may also help. Of course, adversarial counterexamples are just one manifestation of reward hacking, so counterexample resistance can only address a subset of these potential problems. â ¢ Multiple Rewards: A combination of multiple rewards [41] may be more diï¬ cult to hack and more robust. This could be diï¬ erent physical implementations of the same mathemati- cal function, or diï¬ erent proxies for the same informal objective. We could combine reward functions by averaging, taking the minimum, taking quantiles, or something else entirely. Of course, there may still be bad behaviors which aï¬ ect all the reward functions in a correlated manner. â ¢ Reward Pretraining: A possible defense against cases where the agent can inï¬ uence its own reward function (e.g. feedback or environmental embedding) is to train a ï¬ xed reward function ahead of time as a supervised learning process divorced from interaction with the environment. This could involve either learning a reward function from samples of state-reward pairs, or from trajectories, as in inverse reinforcement learning [107, 51]. However, this forfeits the ability to further learn the reward function after the pretraining is complete, which may create other vulnerabilities. | 1606.06565#27 | 1606.06565#29 | 1606.06565 | [
"1507.01986"
] |
1606.06565#29 | Concrete Problems in AI Safety | â ¢ Variable Indiï¬ erence: Often we want an agent to optimize certain variables in the environ- ment, without trying to optimize others. For example, we might want an agent to maximize reward, without optimizing what the reward function is or trying to manipulate human behav- ior. Intuitively, we imagine a way to route the optimization pressure of powerful algorithms around parts of their environment. Truly solving this would have applications throughout safetyâ it seems connected to avoiding side eï¬ ects and also to counterfactual reasoning. Of course, a challenge here is to make sure the variables targeted for indiï¬ erence are actually the | 1606.06565#28 | 1606.06565#30 | 1606.06565 | [
"1507.01986"
] |
1606.06565#30 | Concrete Problems in AI Safety | 10 variables we care about in the world, as opposed to aliased or partially observed versions of them. â ¢ Trip Wires: If an agent is going to try and hack its reward function, it is preferable that we know this. We could deliberately introduce some plausible vulnerabilities (that an agent has the ability to exploit but should not exploit if its value function is correct) and monitor them, alerting us and stopping the agent immediately if it takes advantage of one. Such â trip wiresâ donâ t solve reward hacking in itself, but may reduce the risk or at least provide diagnostics. Of course, with a suï¬ ciently capable agent there is the risk that it could â see throughâ the trip wire and intentionally avoid it while still taking less obvious harmful actions. Fully solving this problem seems very diï¬ cult, but we believe the above approaches have the potential to ameliorate it, and might be scaled up or combined to yield more robust solutions. Given the predominantly theoretical focus on this problem to date, designing experiments that could induce the problem and test solutions might improve the relevance and clarity of this topic. Potential Experiments: A possible promising avenue of approach would be more realistic versions of the â delusion boxâ environment described by [127], in which standard RL agents distort their own perception to appear to receive high reward, rather than optimizing the objective in the external world that the reward signal was intended to encourage. The delusion box can be easily attached to any RL environment, but even more valuable would be to create classes of environments where a delusion box is a natural and integrated part of the dynamics. | 1606.06565#29 | 1606.06565#31 | 1606.06565 | [
"1507.01986"
] |
1606.06565#31 | Concrete Problems in AI Safety | For example, in suï¬ ciently rich physics simulations it is likely possible for an agent to alter the light waves in its immediate vicinity to distort its own perceptions. The goal would be to develop generalizable learning strategies that succeed at optimizing external objectives in a wide range of environments, while avoiding being fooled by delusion boxes that arise naturally in many diverse ways. # 5 Scalable Oversight Consider an autonomous agent performing some complex task, such as cleaning an oï¬ ce in the case of our recurring robot example. We may want the agent to maximize a complex objective like â if the user spent a few hours looking at the result in detail, how happy would they be with the agentâ s performance?â But we donâ t have enough time to provide such oversight for every training example; in order to actually train the agent, we need to rely on cheaper approximations, like â does the user seem happy when they see the oï¬ ce?â or â is there any visible dirt on the ï¬ oor?â These cheaper signals can be eï¬ ciently evaluated during training, but they donâ t perfectly track what we care about. This divergence exacerbates problems like unintended side eï¬ ects (which may be appropriately penalized by the complex objective but omitted from the cheap approximation) and reward hacking (which thorough oversight might recognize as undesirable). We may be able to ameliorate such problems by ï¬ nding more eï¬ cient ways to exploit our limited oversight budgetâ for example by combining limited calls to the true objective function with frequent calls to an imperfect proxy that we are given or can learn. One framework for thinking about this problem is semi-supervised reinforcement learning,3 which resembles ordinary reinforcement learning except that the agent can only see its reward on a small fraction of the timesteps or episodes. The agentâ s performance is still evaluated based on reward from all episodes but it must optimize this based only on the limited reward samples it sees. 3The discussion of semi-supervised RL draws heavily on an informal essay, https://medium.com/ai-control/ cf7d5375197f written by one of the authors of the present document. | 1606.06565#30 | 1606.06565#32 | 1606.06565 | [
"1507.01986"
] |
1606.06565#32 | Concrete Problems in AI Safety | 11 The active learning setting seems most interesting; in this setting the agent can request to see the reward on whatever episodes or timesteps would be most useful for learning, and the goal is to be economical both with number of feedback requests and total training time. We can also consider a random setting, where the reward is visible on a random subset of the timesteps or episodes, as well as intermediate possibilities. We can deï¬ ne a baseline performance by simply ignoring the unlabeled episodes and applying an ordinary RL algorithm to the labelled episodes. This will generally result in very slow learning. The challenge is to make use of the unlabelled episodes to accelerate learning, ideally learning almost as quickly and robustly as if all episodes had been labeled. An important subtask of semi-supervised RL is identifying proxies which predict the reward, and learning the conditions under which those proxies are valid. For example, if a cleaning robotâ s real reward is given by a detailed human evaluation, then it could learn that asking the human â is the room clean?â can provide a very useful approximation to the reward function, and it could eventually learn that checking for visible dirt is an even cheaper but still-useful approximation. This could allow it to learn a good cleaning policy using an extremely small number of detailed evaluations. More broadly, use of semi-supervised RL with a reliable but sparse true approval metric may in- centivize communication and transparency by the agent, since the agent will want to get as much cheap proxy feedback as it possibly can about whether its decisions will ultimately be given high reward. For example, hiding a mess under the rug simply breaks the correspondence between the userâ s reaction and the real reward signal, and so would be avoided. We can imagine many possible approaches to semi-supervised RL. For example: â ¢ Supervised Reward Learning: Train a model to predict the reward from the state on either a per-timestep or per-episode basis, and use it to estimate the payoï¬ of unlabelled episodes, with some appropriate weighting or uncertainty estimate to account for lower conï¬ dence in estimated vs known reward. [37] studies a version of this with direct human feedback as the reward. Many existing RL approaches already ï¬ t estimators that closely resemble reward predictors (especially policy gradient methods with a strong baseline, see e.g. [134]), suggesting that this approach may be eminently feasible. â | 1606.06565#31 | 1606.06565#33 | 1606.06565 | [
"1507.01986"
] |
1606.06565#33 | Concrete Problems in AI Safety | ¢ Semi-supervised or Active Reward Learning: Combine the above with traditional semi- supervised or active learning, to more quickly learn the reward estimator. For example, the agent could learn to identify â salientâ events in the environment, and request to see the reward associated with these events. â ¢ Unsupervised Value Iteration: Use the observed transitions of the unlabeled episodes to make more accurate Bellman updates. â ¢ Unsupervised Model Learning: If using model-based RL, use the observed transitions of the unlabeled episodes to improve the quality of the model. As a toy example, a semi-supervised RL agent should be able to learn to play Atari games using a small number of direct reward signals, relying almost entirely on the visual display of the score. This simple example can be extended to capture other safety issues: for example, the agent might have the ability to modify the displayed score without modifying the real score, or the agent may need to take some special action (such as pausing the game) in order to see its score, or the agent may need to learn a sequence of increasingly rough-and-ready approximations (for example learning that certain sounds are associated with positive rewards and other sounds with negative rewards). Or, even without the visual display of the score, the agent might be able to learn to play from only a handful of explicit reward requests (â how many points did I get on the frame where that enemy ship blew up? How about the bigger enemy ship?â | 1606.06565#32 | 1606.06565#34 | 1606.06565 | [
"1507.01986"
] |
1606.06565#34 | Concrete Problems in AI Safety | ) 12 An eï¬ ective approach to semi-supervised RL might be a strong ï¬ rst step towards providing scalable oversight and mitigating other AI safety problems. It would also likely be useful for reinforcement learning, independent of its relevance to safety. There are other possible approaches to scalable oversight: â ¢ Distant supervision. Rather than providing evaluations of some small fraction of a sys- temâ s decisions, we could provide some useful information about the systemâ s decisions in the aggregate or some noisy hints about the correct evaluations There has been some work in this direction within the area of semi-supervised or weakly supervised learning. For instance, generalized expectation criteria [94, 45] ask the user to provide population-level statistics (e.g. telling the system that on average each sentence contains at least one noun); the DeepDive sys- tem [139] asks users to supply rules that each generate many weak labels; and [65] extrapolates more general patterns from an initial set of low-recall labeling rules. This general approach is often referred to as distant supervision, and has also received recent attention in the natural language processing community (see e.g. [60, 99] as well as several of the references above). Expanding these lines of work and ï¬ nding a way to apply them to the case of agents, where feedback is more interactive and i.i.d. assumptions may be violated, could provide an approach to scalable oversight that is complementary to the approach embodied in semi-supervised RL. â ¢ Hierarchical reinforcement learning. Hierarchical reinforcement learning [40] oï¬ ers an- other approach to scalable oversight. Here a top-level agent takes a relatively small number of highly abstract actions that extend over large temporal or spatial scales, and receives rewards over similarly long timescales. The agent completes actions by delegating them to sub-agents, which it incentivizes with a synthetic reward signal representing correct completion of the action, and which themselves delegate to sub-sub-agents. At the lowest level, agents directly take primitive actions in the environment. The top-level agent in hierarchical RL may be able to learn from very sparse rewards, since it does not need to learn how to implement the details of its policy; meanwhile, the sub-agents will receive a dense reward signal even if the top-level reward is very sparse, since they are optimizing synthetic reward signals deï¬ ned by higher-level agents. | 1606.06565#33 | 1606.06565#35 | 1606.06565 | [
"1507.01986"
] |
1606.06565#35 | Concrete Problems in AI Safety | So a successful approach to hierarchical RL might naturally facilitate scalable oversight.4 Hierarchical RL seems a particularly promising approach to oversight, especially given the potential promise of combining ideas from hierarchical RL with neural network function ap- proximators [84]. Potential Experiments: An extremely simple experiment would be to try semi-supervised RL in some basic control environments, such as cartpole balance or pendulum swing-up. If the reward is provided only on a random 10% of episodes, can we still learn nearly as quickly as if it were provided every episode? In such tasks the reward structure is very simple so success should be quite likely. A next step would be to try the same on Atari games. Here the active learning case could be quite interestingâ perhaps it is possible to infer the reward structure from just a few carefully requested samples (for example, frames where enemy ships are blowing up in Space Invaders), and thus learn to play the games in an almost totally unsupervised fashion. The next step after this might be to try a task with much more complex reward structure, either simulated or (preferably) real-world. If learning was suï¬ ciently data-eï¬ cient, then these rewards could be provided directly by a human. Robot locomotion or industrial control tasks might be a natural candidate for such experiments. 4When implementing hierarchical RL, we may ï¬ nd that subagents take actions that donâ t serve top-level agentâ s real goals, in the same way that a human may be concerned that the top-level agentâ s actions donâ t serve the humanâ s real goals. | 1606.06565#34 | 1606.06565#36 | 1606.06565 | [
"1507.01986"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.