Dataset schema: doi, chunk-id, chunk, id, title, summary, source, authors, categories, comment, journal_ref, primary_category, published, updated, references.
1606.08415#19 | Gaussian Error Linear Units (GELUs)
¹ Thank you to Dmytro Mishkin for bringing an approximation like this to our attention.
# ACKNOWLEDGMENT
We would like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research.
# REFERENCES
Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Neural Information Processing Systems, 2013.
Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Neural Information Processing Systems, 2014.
Amit Choudhury. A simple approximation to the area under standard normal curve. In Mathematics and Statistics, 2014.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In International Conference on Learning Representations, 2016.
Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu. Natural neural networks. In arXiv, 2015.
1606.08514#19 | Towards Verified Artificial Intelligence
We believe the answer is yes, but that more needs to be done. Formal methods have proved effective for the systematic generation of counterexamples and test data that satisfy constraints, including for simulation-based verification of circuits (e.g., [44]) and finding security exploits in commodity software (e.g., [5]). However, the requirements for AI/ML systems are different. The types of constraints can be much more complex, e.g., encoding requirements on "realism" of data captured using sensors from a complex environment such as a traffic situation. We need to generate not just single data items, but an ensemble that satisfies distributional constraints. Additionally, data generation must be selective, e.g., meeting objectives on data set size and diversity for effective training and generalization. All of these additional requirements necessitate the development of a new suite of formal techniques.
Quantitative Verification: Several safety-critical applications of AI-based systems are in robotics and cyber-physical systems. In such systems, the scalability challenge for verification can be very high.
1606.08415#20 | Gaussian Error Linear Units (GELUs)
Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments. In Association for Computational Linguistics (ACL), 2011.
Dan Hendrycks and Kevin Gimpel. Adjusting for dropout variance in batch normalization and weight initialization. In arXiv, 2016.
John Hopfield. Neural networks and physical systems with emergent collective computational abilities. In Proceedings of the National Academy of Sciences of the USA, 1982.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009.
1606.08514#20 | Towards Verified Artificial Intelligence
In addition to the scale of systems as measured by traditional metrics (dimension of state space, number of components, etc.), the types of components can be much more complex. For instance, in (semi-)autonomous driving, autonomous vehicles and their controllers need to be modeled as hybrid systems combining both discrete and continuous dynamics. Moreover, agents in the environment (humans, other vehicles) may need to be modeled as probabilistic processes. Finally, the requirements may involve not only traditional Boolean specifications on safety and liveness, but also quantitative requirements on system robustness and performance. Yet, most of the existing verification methods are targeted towards answering Boolean verification questions. To address this gap, new scalable engines for quantitative verification must be developed.
1606.08415#21 | Gaussian Error Linear Units (GELUs)
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. In Neural Information Processing Systems, 2016.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with restarts. arXiv, 2016.
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In International Conference on Machine Learning, 2013.
Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. In Bulletin of Mathematical Biophysics, 1943.
Dmytro Mishkin and Jiri Matas. All you need is a good init. In International Conference on Learning Representations, 2016.
Abdelrahman Mohamed, George E. Dahl, and Geoffrey E. Hinton. Acoustic modeling using deep belief networks. In IEEE Transactions on Audio, Speech, and Language Processing, 2012.
1606.08514#21 | Towards Verified Artificial Intelligence
Compositional Reasoning: In order for formal methods to scale to large AI/ML systems, compositional (modular) reasoning is essential. In compositional verification, a large system (e.g., program) is split up into its components (e.g., procedures), each component is verified against a specification, and then the component specifications together entail the system-level specification. A common approach for compositional verification is the use of assume-guarantee contracts. For example, a procedure assumes something about its starting state (pre-condition) and in turn guarantees something about its ending state (post-condition). Similar assume-guarantee paradigms have been developed for concurrent software and hardware systems. A theory of assume-guarantee contracts does not yet exist for AI-based systems.
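To make the assume-guarantee idea concrete, here is a minimal, hypothetical Python sketch of a component contract with a pre-condition (assumption) and post-condition (guarantee). The `Contract` class, its field names, and the braking component are illustrative inventions, not an API from any of the cited works.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    """Assume-guarantee contract: if `assumes` holds on the input,
    the component promises that `guarantees` holds on (input, output)."""
    assumes: Callable[[dict], bool]
    guarantees: Callable[[dict, dict], bool]

def check_step(contract: Contract, component: Callable[[dict], dict], state: dict) -> dict:
    # Only invoke the component when its environment assumption holds.
    if not contract.assumes(state):
        raise AssertionError("environment assumption violated")
    out = component(state)
    # Verify the component's guarantee for this input/output pair.
    assert contract.guarantees(state, out), "component guarantee violated"
    return out

# Illustrative braking controller: assumes a valid distance estimate,
# guarantees it commands braking whenever the estimated distance is short.
brake_contract = Contract(
    assumes=lambda s: s["dist_est"] >= 0.0,
    guarantees=lambda s, o: (s["dist_est"] >= 5.0) or o["brake"],
)

def brake_controller(s: dict) -> dict:
    return {"brake": s["dist_est"] < 5.0}

print(check_step(brake_contract, brake_controller, {"dist_est": 3.2}))  # {'brake': True}
```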
1606.08415#22 | Gaussian Error Linear Units (GELUs)
Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning, 2010.
Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. Improved part-of-speech tagging for online conversational text with word clusters. In North American Chapter of the Association for Computational Linguistics (NAACL), 2013.
Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Neural Information Processing Systems, 2016.
Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations, 2014.
Anish Shah, Sameer Shinde, Eashan Kadam, Hena Shah, and Sandip Shingade. Deep residual networks with exponential linear unit. In Vision Net, 2016.
Nitish Srivastava. Improving neural networks with dropout. In University of Toronto, 2013.
1606.08514#22 | Towards Verified Artificial Intelligence
Moreover, AI/ML systems pose a particularly vexing challenge for compositional reasoning. Compositional verification requires compositional specification, i.e., the components must be formally specifiable. However, as noted in Sec. 3.2, it may be impossible to formally specify the correct behavior of a perception component. One of the challenges, then, is to develop techniques for compositional reasoning that do not rely on having complete compositional specifications [75]. Additionally, more work needs to be done for extending the theory and application of compositional reasoning to probabilistic systems and specifications.
# 3.5 Correct-by-Construction Intelligent Systems
1606.08415#23 | Gaussian Error Linear Units (GELUs)
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. In Journal of Machine Learning Research, 2014.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine Vision Conference, 2016.
# A NEURAL NETWORK ARCHITECTURE FOR CIFAR-10 EXPERIMENTS
Table 1: Neural network architecture for CIFAR-10.
Layer Type | # channels | x, y dimension
---|---|---
raw RGB input | 3 | 32
ZCA whitening | 3 | 32
Gaussian noise σ = 0.15 | 3 | 32
3 × 3 conv with activation | 96 | 32
3 × 3 conv with activation | 96 | 32
3 × 3 conv with activation | 96 | 32
2 × 2 max pool, stride 2 | 96 | 16
dropout with p = 0.5 | 96 | 16
3 × 3 conv with activation | 192 | 16
3 × 3 conv with activation | 192 | 16
3 × 3 conv with activation | 192 | 16
2 × 2 max pool, stride 2 | 192 | 8
dropout with p = 0.5 | 192 | 8
3 × 3 conv with activation | 192 | 6
1 × 1 conv with activation | 192 | 6
1 × 1 conv with activation | 192 | 6
global average pool | 192 | 1
softmax output | 10 | 1
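For concreteness, a minimal PyTorch sketch of the Table 1 topology follows. It is an illustrative reconstruction, not the authors' released code: the padding choices (padding 1 for the 32×32 and 16×16 convolutions, no padding for the final 3×3 convolution), the final linear layer standing in for "softmax output", and the omission of ZCA whitening and input Gaussian noise (treated as preprocessing/training-time steps) are all assumptions.

```python
import torch
import torch.nn as nn

def cifar10_net(act=nn.GELU):
    """Illustrative reconstruction of the Table 1 architecture."""
    return nn.Sequential(
        # 32x32 block: three 3x3 convs with activation (padding keeps 32x32)
        nn.Conv2d(3, 96, 3, padding=1), act(),
        nn.Conv2d(96, 96, 3, padding=1), act(),
        nn.Conv2d(96, 96, 3, padding=1), act(),
        nn.MaxPool2d(2, stride=2),          # 96 x 16 x 16
        nn.Dropout(p=0.5),
        # 16x16 block: three 3x3 convs with activation
        nn.Conv2d(96, 192, 3, padding=1), act(),
        nn.Conv2d(192, 192, 3, padding=1), act(),
        nn.Conv2d(192, 192, 3, padding=1), act(),
        nn.MaxPool2d(2, stride=2),          # 192 x 8 x 8
        nn.Dropout(p=0.5),
        # 3x3 conv without padding shrinks 8x8 -> 6x6, then two 1x1 convs
        nn.Conv2d(192, 192, 3), act(),
        nn.Conv2d(192, 192, 1), act(),
        nn.Conv2d(192, 192, 1), act(),
        nn.AdaptiveAvgPool2d(1),            # global average pool -> 192 x 1 x 1
        nn.Flatten(),
        nn.Linear(192, 10),                 # class scores; softmax folded into the loss
    )

print(cifar10_net()(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```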
1606.08514#23 | Towards Verified Artificial Intelligence
In an ideal world, verification should be integrated with the design process so that the system is "correct-by-construction." Such an approach could either interleave verification steps with compilation/synthesis steps, such as in the register-transfer-level (RTL) design flow common in integrated circuits, or devise synthesis algorithms so as to ensure that the implementation satisfies the specification, such as in reactive synthesis from temporal logic [60]. Can we devise a suitable correct-by-construction design flow for AI-based systems?
Specification-Driven Design of ML Components: Can we design, from scratch, a machine learning component (model) that provably satisfies a formal specification? (This assumes, of course, that we solve the formal specification challenge described above in Sec. 3.2.) The clean-slate design of an ML component has many aspects: (1) designing the data set, (2) synthesizing the structure of the model, (3) generating a
1606.08415#24 | Gaussian Error Linear Units (GELUs)
# B HISTORY OF THE GELU AND SILU
This paper arose from DH's first research internship as an undergraduate in June 2016. At the start of the following week, this paper was put on arXiv; in it we discuss smoother ReLU activation functions (x · P(X ≤ x)) and their relation to stochastic regularizers. In 2016, we submitted the paper to ICLR and made the paper and code publicly available. In the paper, we introduced and coined the Sigmoid Linear Unit (SiLU) as x · σ(x).
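For reference, a small NumPy sketch of the activations discussed in this appendix, GELU(x) = x·Φ(x), SiLU(x) = x·σ(x), and the β-parameterized variant x·σ(βx) mentioned below. The function names are ours, and the exact-erf form of Φ is one implementation choice among several.

```python
import numpy as np
from scipy.special import erf  # exact Gaussian CDF via the error function

def gelu(x):
    """GELU(x) = x * Phi(x), with Phi the standard normal CDF."""
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x, beta=1.0):
    """SiLU(x) = x * sigmoid(x); beta != 1 gives the x * sigmoid(beta * x) variant."""
    return x * sigmoid(beta * x)

x = np.linspace(-3, 3, 7)
print(gelu(x))
print(silu(x))  # identical to the beta = 1 variant
```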
In the first half of 2017, Elfwing et al. published a paper that proposed the same activation function as the SiLU, x · σ(x), which they called "SIL." At the end of 2017, over a year after this paper was first released, Quoc Le and others from Google Brain put out a paper proposing x · σ(x) without citing either the Elfwing et al. paper or this work. Upon learning this, we contacted both parties. Elfwing quickly updated their work to call the activation the "SiLU" instead of "SIL" to recognize that we originally introduced the activation.
1606.08514#24 | Towards Verified Artificial Intelligence
good set of features, (4) synthesizing hyper-parameters and other aspects of ML algorithm selection, and (5) automated techniques for debugging ML models or the specification when synthesis fails. More progress is needed on all these fronts.
Theories of Compositional Design: Another challenge is to design the overall system comprising multiple learning and non-learning components. While theories of compositional design have been developed for digital circuits and embedded systems (e.g. [70, 80]), we do not as yet have such theories for AI-based systems. For example, if two ML models are used for perception on two different types of sensor data (e.g., LiDAR and visual images), and individually satisfy their specifications under certain assumptions, under what conditions can they be used together to improve the reliability of the overall system? And how can one design a planning component so as to overcome limitations of an ML-based perception component that it receives input from?
1606.08415#25 | Gaussian Error Linear Units (GELUs)
Unlike Elfwing et al., the Google Brain researchers continued calling the activation "swish." However, there was no novelty. The first author of the "swish" paper stated their oversight in public, saying, "As has been pointed out, we missed prior works that proposed the same activation function. The fault lies entirely with me for not conducting a thorough enough literature search." To subdue criticism, an update to the paper was released a week later. Rather than give credit to this work for the SiLU, the update only cited this work for the GELU so that the "swish" appeared more novel. In the updated paper, a learnable hyperparameter β was introduced, and the swish was changed from x · σ(x) to x · σ(β · x). This staked all of the idea's novelty on an added learnable hyperparameter β.
1606.08514#25 | Towards Verified Artificial Intelligence
Bridging Design Time and Run Time for Resilient AI: Due to the complexity of AI-based systems and the environments in which they operate, even if all the challenges for specification and verification are solved, it is likely that one will not be able to prove unconditional safe and correct operation. There will always be situations in which we do not have a provable guarantee of correctness. Therefore, techniques for achieving fault tolerance and error resilience at run time must play a crucial role. In particular, there is not yet a systematic understanding of what can be achieved at design time, how the design process can contribute to safe and correct operation of the AI system at run time, and how the design-time and run-time techniques can interoperate effectively.
1606.08415#26 | Gaussian Error Linear Units (GELUs)
Despite the addition of the hyperparameter β, nearly all of the community still used the original "swish" function without β (i.e., with β = 1). Since this paper was from Google Brain, the Tensorflow implementation ended up being called "swish," and the default setting removed β, rendering it identical to the SiLU. The practice of adding an unused hyperparameter allowed claiming novelty while effectively receiving credit for an idea that originated elsewhere. Future papers with the same senior authors persistently referred to the "swish" function even when not using β, making it identical to the SiLU, originally proposed in this work. This resulted in the "swish" paper inappropriately gaining credit for the idea.
Things changed as the GELU began to be used in BERT and GPT, becoming the default activation for state-of-the-art Transformers. Now it is substantially more commonly used than the SiLU.
1606.08514#26 | Towards Verified Artificial Intelligence
# 4 Principles for Verified AI
For each of the challenges described in the preceding section, we suggest a corresponding set of principles to follow in the design/verification process to address that challenge. These five principles are:
1. Use an introspective, data-driven, and probabilistic approach to model the environment;
2. Combine formal specifications of end-to-end behavior with hybrid Boolean-quantitative formalisms for learning systems and perception components, and use specification mining to bridge the data-property gap;
3. For ML components, develop new abstractions, explanations, and semantic analysis techniques;
4. Create a new class of compositional, randomized, and quantitative formal methods for data generation, testing, and verification; and
5. Develop techniques for formal inductive synthesis of AI-based systems and design of safe learning systems, supported by techniques for run-time assurance.
We have successfully applied these principles over the past few years and, based on this experience, believe that they provide a good starting point for applying formal methods to AI-based systems. Our formal methods perspective on the problem complements other perspectives that have been expressed (e.g., [4]). Experience over the past few years provides evidence that the principles we suggest can point a way towards the goal of Verified AI.
1606.08415#27 | Gaussian Error Linear Units (GELUs)
Separately, a reddit post "Google has a credit assignment problem in research" became popular and focused on how they refer to the SiLU as the swish. As an example, they mentioned "Smooth Adversarial Training" as an example of poor credit assignment. In the "Smooth Adversarial Training" paper, which came from the senior author of the swish, the term "swish" was used instead of "SiLU." To reduce blowback from the post, the authors updated the paper and replaced "swish" with the "SiLU," recognizing this paper as the original source of the idea. After this post, popular libraries such as Tensorflow and PyTorch also began to rename the function to "SiLU" instead of "swish." For close observers, this issue has been largely settled, and we are grateful for the proper recognition that has largely come to pass.
1606.08514#27 | Towards Verified Artificial Intelligence
# 4.1 Environment Modeling: Introspection, Probabilities, and Data
Recall from Sec. 3.1 the three challenges for modeling the environment E of an AI-based system S: unknown variables, model fidelity, and human modeling. We propose to tackle these challenges with three corresponding principles.
Introspective Environment Modeling: We suggest addressing the unknown-variables problem by developing design and verification methods that are introspective, i.e., they algorithmically identify assumptions A that system S makes about the environment E that are sufficient to guarantee the satisfaction of the specification
Φ [76]. The assumptions A must ideally be the weakest such assumptions, and also must be efficient to generate at design time and monitor at run time over available sensors and other sources of information about the environment, so that mitigating actions can be taken when they are violated. Moreover, if there is a human operator involved, one might want A to be translatable into an explanation that is human understandable, so that S can "explain" to the human why it may not be able to satisfy the specification Φ. Dealing with these multiple requirements, as well as the need for good sensor models, makes introspective environment modeling a highly non-trivial task that requires substantial progress [76]. Preliminary work by the authors has shown that such extraction of monitorable assumptions is feasible in very simple cases [48], although more research is required to make this practical.
Active Data-Driven Modeling: We believe human modeling requires an active data-driven approach.
Relevant theories from cognitive science and psychology, such as that of bounded rationality [81, 65], must be leveraged, but it is important for those models to be expressed in formalisms compatible with formal methods. Additionally, while using a data-driven approach to infer a model, one must be careful to craft the right model structure and features. A critical aspect of human modeling is to capture human intent. We believe a three-pronged approach is required: first, define model templates/features based on expert knowledge; then, use offline learning to complete the model for design-time use; and finally, learn and update environment models at run time by monitoring and interacting with the environment. Initial work has shown how data gathered from driving simulators via human subject experiments can be used to generate models of human driver behavior that are useful for verification and control of autonomous vehicles [67, 69].
Probabilistic Formal Modeling: In order to tackle the model fidelity challenge, we suggest using formalisms that combine probabilistic and non-deterministic modeling. Where probability distributions can be reliably specified or estimated, one can use probabilistic modeling. In other cases, non-deterministic modeling can be used to over-approximate environment behaviors. While formalisms such as Markov Decision Processes (MDPs) already provide a way to blend probability and non-determinism, we believe techniques that blend probability and logical or automata-theoretic formalisms, such as the paradigm of probabilistic programming [52, 32], can provide an expressive and programmatic way to model environments. We expect that in many cases, such probabilistic programs will need to be learned/synthesized (in part) from data. In this case, any uncertainty in learned parameters must be propagated to the rest of the system and represented in the probabilistic model. For example, the formalism of convex Markov decision processes (convex MDPs) [56, 61, 67] provides a way of representing uncertainty in the values of learned transition probabilities. Algorithms for verification and control may then need to be
1606.08514#33 | Towards Verified Artificial Intelligence
Writing formal specifications for AI/ML components is hard, arguably even impossible if the component imitates a human perceptual task. Even so, we think the challenges described in Sec. 3.2 can be addressed by following three guiding principles.
End-to-End/System-Level Specifications: In order to address the specification-for-perception challenge, let us change the problem slightly. We suggest first focusing on precisely specifying the end-to-end behavior of the AI-based system. By "end-to-end" we mean the specification on the entire closed-loop system (see Fig. 2) or a precisely-specifiable sub-system containing the AI/ML component, not on the component alone. Such a specification is also referred to as a "system-level" specification. For our AEBS example, this involves specifying the property Φ corresponding to maintaining a minimum distance from any object during motion. Starting with such a system-level (end-to-end) specification, we then derive from it constraints on the input-output interface of the perception
1606.08514#36 | Towards Verified Artificial Intelligence
Hybrid Quantitative-Boolean Specifications: Boolean and quantitative specifications both have their advantages. On the one hand, Boolean specifications are easier to compose. On the other hand, objective functions lend themselves to optimization-based techniques for verification and synthesis, and to defining finer granularities of property satisfaction. One approach to bridge this gap is to move to quantitative specification languages, such as logics with both Boolean and quantitative semantics (e.g. STL [49]) or notions of weighted automata (e.g. [13]). Another approach is to combine both Boolean and quantitative specifications into a common specification structure, such as a rulebook [10], where specifications can be organized in a hierarchy, compared, and aggregated. Additionally, novel formalisms bridging ideas from formal methods and machine learning are being developed to model the different variants of properties such as robustness, fairness, and privacy, including notions of semantic robustness (see, e.g., [77, 24]).
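As a toy illustration of a specification with both Boolean and quantitative semantics (in the spirit of STL, though not using any STL library), the sketch below evaluates an "always keep at least d_safe distance" requirement over a sampled trace: the Boolean verdict says whether it holds, while the robustness value says by how much. The trace and the 5 m threshold are made-up examples.

```python
from typing import Sequence

def always_min_distance(trace: Sequence[float], d_safe: float):
    """Quantitative semantics: robustness = min_t (distance(t) - d_safe).
    Boolean semantics: the property holds iff robustness >= 0."""
    robustness = min(d - d_safe for d in trace)
    return robustness >= 0.0, robustness

# Distance (in meters) to the nearest object at each sampled time step.
trace = [12.0, 9.5, 7.2, 6.1, 5.4, 6.0, 8.3]
holds, rho = always_min_distance(trace, d_safe=5.0)
print(holds, rho)  # True, ~0.4 -> satisfied, but with little margin
```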
Specification Mining: In order to bridge the gap between data and formal specifications, we suggest the use of techniques for inferring specifications from behaviors and other artifacts, so-called specification mining techniques (e.g., [26, 47]). Such methods could be used for ML components in general, including for perception components, since in many cases it is not required to have an exact specification or one that is human-readable. Specification mining methods could also be used to infer human intent and other properties from demonstrations [85] or more complex forms of interaction between multiple agents, both human and robotic.
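A minimal, hypothetical illustration of specification mining: given a set of logged traces, infer the tightest instance of a simple template property "always distance ≥ d". The template, trace format, and helper name are assumptions made for illustration, not an algorithm from the cited works.

```python
from typing import Iterable, Sequence

def mine_min_distance_bound(traces: Iterable[Sequence[float]]) -> float:
    """Return the largest d such that 'always distance >= d' holds on every trace."""
    return min(min(trace) for trace in traces)

logged_traces = [
    [12.0, 9.5, 7.2, 6.1, 5.4, 6.0],
    [11.0, 8.0, 6.5, 6.2, 7.0],
    [10.0, 7.5, 6.8, 6.6],
]
d = mine_min_distance_bound(logged_traces)
print(f"mined specification: always distance >= {d} m")  # 5.4
```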
1606.08514#39 | Towards Verified Artificial Intelligence
Let us now consider the challenges, described in Sec. 3.3, arising in modeling systems S that learn from experience. In our opinion, advances in three areas are needed in order to address these challenges:
Automated Abstraction: Techniques for automatically generating abstractions of systems have been the linchpins of formal methods, playing crucial roles in extending the reach of formal methods to large hardware and software systems. In order to address the challenges of very high dimensional hybrid state spaces and input spaces for ML-based systems, we need to develop effective techniques to abstract ML models into simpler models that are more amenable to formal analysis. Some promising advances in this regard include the use of abstract interpretation to analyze deep neural networks (e.g. [35]), the use of abstractions for falsifying cyber-physical systems with ML components [22], and the development of probabilistic logics that capture guarantees provided by ML algorithms (e.g., [68]).
Explanation Generation: The task of modeling a learning system can be made easier if the learner accompanies its predictions with explanations of how those predictions result from the data and background knowledge. In fact, this idea is not new: it has long been investigated by the ML community under terms such as explanation-based generalization [54]. Recently, there has been a renewal of interest in using logic to explain the output of learning systems (e.g. [84, 40]). Such approaches to generating explanations that are compatible with the modeling languages used in formal methods can make the task of system modeling for verification considerably easier. ML techniques that incorporate causal and counterfactual reasoning [59] can also ease the generation of explanations for use with formal methods.
Semantic Feature Spaces: The verification and adversarial analysis [36] of ML models is more meaningful when the generated adversarial inputs and counterexamples have semantic meaning in the context in which the ML models are used. There is thus a need for techniques that can analyze ML models in the context of the
systems within which they are used, i.e., for semantic adversarial analysis [25]. A key step is to represent the semantic feature space modeling the environment in which the ML system operates, as opposed to the concrete feature space which defines the input space for the ML model. This follows the intuition that the semantically meaningful part of the concrete feature space (e.g. images of traffic scenes) forms a much lower dimensional latent space as compared to the full concrete feature space. For our illustrative example in Fig. 2, the semantic feature space is the lower-dimensional space representing the 3D world around the autonomous vehicle, whereas the concrete feature space is the high-dimensional pixel space. Since the
semantic feature space is lower dimensional, it can be easier to search over (e.g. [22, 38]). However, one typically needs to have a "renderer" that maps a point in the semantic feature space to one in the concrete feature space, and certain properties of this renderer, such as differentiability [46], make it easier to apply formal methods to perform goal-directed search of the semantic feature space.
# 4.4 Compositional and Quantitative Methods for Design and Verification of Models and Data
Consider the challenge, described in Sec. 3.4, of devising computational engines for scalable training, testing, and verification of AI-based systems. We see three promising directions to tackle this challenge.
Controlled Randomization in Formal Methods: Consider the problem of data set design, i.e., systematically generating training data for an ML component in an AI-based system. This synthetic data generation problem has many facets. First, one must define the space of "legal" inputs so that the examples are well formed according to the application semantics. Secondly, one might want to impose constraints on "realism", e.g., a measure of similarity with real-world data. Third, one might need to impose constraints on the distribution of the generated examples in order to obtain guarantees about convergence of the learning algorithm to the true concept. What can formal methods offer towards solving this problem?
We believe that the answer may lie in a new class of randomized formal methods: randomized algorithms for generating test inputs subject to formal constraints and distribution requirements. Specifically, a recently defined class of techniques, termed control improvisation [31], holds promise. An improviser is a generator of random strings (examples) x that satisfy three constraints: (i) a hard constraint that defines the space of legal x; (ii) a soft constraint defining how the generated x must be similar to real-world examples; and (iii) a randomness requirement defining a constraint on the output distribution. The theory of control improvisation is still in its infancy, and we are just starting to understand the computational complexity and to devise efficient algorithms. Improvisation, in turn, relies on recent progress on computational problems such as constrained random sampling and model counting (e.g., [51, 11, 12]), and generative approaches based on probabilistic programming (e.g. [32]).
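To make the three improvisation constraints concrete, here is a small, hypothetical sampling sketch: the hard constraint, soft constraint, and proposal distributions below are invented toy examples, and real control-improvisation algorithms are considerably more sophisticated than this.

```python
import random

def hard_constraint(x):
    # Legal examples: speeds within the physically meaningful range (m/s).
    return 0.0 <= x <= 40.0

def soft_constraint(x):
    # "Realism": close to typically observed urban speeds.
    return 5.0 <= x <= 20.0

def improvise(n, soft_fraction=0.8, seed=0):
    """Each example is legal (hard constraint); with probability soft_fraction it is
    drawn from the 'realistic' region (soft constraint), otherwise from the full
    legal space, which injects the required randomness/diversity."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        if rng.random() < soft_fraction:
            x = rng.uniform(5.0, 20.0)   # propose a realistic example
        else:
            x = rng.uniform(0.0, 40.0)   # propose anywhere in the legal space
        if hard_constraint(x):           # rejection step for the hard constraint
            samples.append(x)
    return samples

print(improvise(10))
```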
Quantitative Verification on the Semantic Feature Space: Recall the challenge to develop techniques for verification of quantitative requirements, where the output of the verifier is not just YES/NO but a numeric value. The complexity and heterogeneity of AI-based systems means that, in general, formal verification of specifications, Boolean or quantitative, is undecidable. (For example, even deciding whether a state of a linear hybrid system is reachable is undecidable.) To overcome this obstacle posed by computational complexity, one must augment the abstraction and modeling methods discussed earlier in this section with novel techniques for probabilistic and quantitative verification over the semantic feature space. For specification formalisms that have both Boolean and quantitative semantics, in formalisms such as metric temporal logic, the formulation of verification as optimization is crucial to unifying computational methods from formal methods with those from the optimization literature, such as in simulation-based temporal logic falsification (e.g. [42, 27, 88]), although they must be applied to the semantic feature space for efficiency [23].
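The "verification as optimization" view can be illustrated with a deliberately simple random-search falsifier over a semantic parameter: it samples scenario parameters, simulates, scores each trace with the quantitative robustness of the distance requirement, and reports the worst case. The one-dimensional braking "simulator", its constants, and the parameter range are invented for illustration only.

```python
import random

D_SAFE = 5.0  # required minimum separation (m)

def simulate(approach_speed, steps=100, dt=0.1):
    """Toy closed-loop braking scenario: the vehicle approaches an obstacle at
    `approach_speed` and brakes at 6 m/s^2 once the obstacle is within 18 m."""
    d, v = 40.0, approach_speed
    trace = []
    for _ in range(steps):
        if d < 18.0:                      # detection range of the toy perception
            v = max(0.0, v - 6.0 * dt)    # brake
        d -= v * dt
        trace.append(d)
    return trace

def robustness(trace):
    # Quantitative semantics of "always distance >= D_SAFE".
    return min(d - D_SAFE for d in trace)

def falsify(trials=1000, seed=1):
    """Random search over the semantic parameter (approach speed)."""
    rng = random.Random(seed)
    worst = (None, float("inf"))
    for _ in range(trials):
        v0 = rng.uniform(8.0, 16.0)
        rho = robustness(simulate(v0))
        if rho < worst[1]:
            worst = (v0, rho)
    return worst  # negative robustness means a counterexample was found

print(falsify())
```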
Such falsification techniques can also be used for the systematic, adversarial generation of training data for ML components [23]. Techniques for probabilistic verification, such as probabilistic model checking [45, 18], should be extended beyond traditional formalisms such as Markov chains or Markov Decision Processes to verify probabilistic programs over semantic feature spaces. Similarly, work on SMT solving must be extended to more effectively handle cost constraints, in other words, combining SMT solving with optimization methods (e.g., [79, 8]).
Compositional Reasoning: As in all applications of formal methods, modularity will be crucial to scalable verification of AI-based systems. However, compositional design and analysis of AI-based systems faces some unique challenges. First, theories of probabilistic assume-guarantee design and verification need to
be developed for the semantic spaces for such systems, building on some promising initial work (e.g. [57]). Second, we suggest the use of inductive synthesis [74] to generate assume-guarantee contracts algorithmically, to reduce the specification burden and ease the use of compositional reasoning. Third, to handle the case of components, such as perception, that do not have precise formal specifications, we suggest techniques that infer component-level constraints from system-level analysis (e.g. [22]) and use such constraints to focus component-level analysis, including adversarial analysis.
# 4.5 Formal Inductive Synthesis, Safe Learning, and Run-Time Assurance
Developing a correct-by-construction design methodology for AI-based systems, with associated tools, is perhaps the toughest challenge of all. For this to be fully solved, the preceding four challenges must be successfully addressed. However, we do not need to wait until we solve those problems in order to start working on this one. Indeed, a methodology to "design for verification" may well ease the task on the other four challenges.
Formal Inductive Synthesis: First consider the problem of synthesizing learning components correct by construction. The emerging theory of formal inductive synthesis [39, 41] addresses this problem. Formal inductive synthesis is the synthesis from examples of programs that satisfy formal specifications. In machine learning terms, it is the synthesis of models/classifiers that additionally satisfy a formal specification. The most common approach to solving a formal inductive synthesis problem is to use an oracle-guided approach. In oracle-guided synthesis, a learner is paired with an oracle who answers queries. The set of query-response types is defined by an oracle interface. For the example of Fig. 2, the oracle can be a falsifier that can generate counterexamples showing how a failure of the learned component violates the system-level specification. This approach, also known as counterexample-guided inductive synthesis [82], has proved effective in many scenarios. In general, oracle-guided inductive synthesis techniques show much promise for the synthesis of learned components by blending expert human insight, inductive learning, and deductive reasoning [73, 74]. These methods also have a close relation to the sub-field of machine teaching [89].
Safe Learning by Design: There has been considerable recent work on using design-time methods to analyze or constrain learning components so as to ensure safe operation within specified assumptions.
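The oracle-guided (counterexample-guided) synthesis loop described above can be sketched very schematically as a learner that fits a model and a falsification oracle that returns violating inputs, which are added back to the training set. The 1-nearest-neighbour "learner", the error-bound specification, and the exhaustive toy oracle here are illustrative stand-ins, not the algorithms of the cited works.

```python
import random

DOMAIN = [x * 0.5 for x in range(-20, 21)]    # inputs on which the spec is checked

def ground_truth(x):
    return x                                  # function the component should imitate

def spec(x, y):
    # System-level requirement on the learned component: error below 1.0 everywhere.
    return abs(y - ground_truth(x)) <= 1.0

def learner(examples):
    """'Learner': a 1-nearest-neighbour regressor over the examples seen so far."""
    def model(x):
        return min(examples, key=lambda e: abs(e[0] - x))[1]
    return model

def falsifier(model):
    """Oracle: return an input violating the spec, or None if none is found."""
    for x in DOMAIN:
        if not spec(x, model(x)):
            return x
    return None

examples = [(x, ground_truth(x)) for x in random.Random(0).sample(DOMAIN, 3)]
for step in range(50):                        # CEGIS-style loop
    cex = falsifier(learner(examples))
    if cex is None:
        print(f"model satisfies the spec after {step} counterexamples")
        break
    examples.append((cex, ground_truth(cex)))  # learn from the counterexample
```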
In oracle-guided synthesis, a learner is paired with an oracle who answers queries. The set of query-response types is defined by an oracle interface. For the example of Fig. 2, the oracle can be a falsifier that can generate counterexamples showing how a failure of the learned component violates the system-level specification. This approach, also known as counterexample-guided inductive synthesis [82], has proved effective in many scenarios. In general, oracle-guided inductive synthesis techniques show much promise for the synthesis of learned components by blending expert human insight, inductive learning, and deductive reasoning [73, 74]. These methods also have a close relation to the sub-field of machine teaching [89].
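As a concrete illustration of this oracle-guided (counterexample-guided) loop, here is a self-contained sketch in which a learner proposes a threshold classifier consistent with the examples seen so far, and a verification oracle returns a counterexample whenever the candidate violates a toy specification. The hypothesis class, specification, and exhaustive oracle are illustrative assumptions, not the formulation of [82] or [73, 74].

```python
# A minimal counterexample-guided inductive synthesis (CEGIS) loop.
SPEC_DOMAIN = range(100)                  # finite input space, checked exhaustively
def spec(x, decision):                    # toy requirement: accept exactly when x >= 42
    return decision == (x >= 42)

def learner(examples):
    """Propose the smallest threshold t such that 'x >= t' fits all examples so far."""
    for t in range(101):
        if all((x >= t) == label for x, label in examples):
            return t
    return None                           # hypothesis space exhausted

def oracle(candidate_t):
    """Verification oracle: return a counterexample input, or None if the candidate is correct."""
    for x in SPEC_DOMAIN:
        if not spec(x, x >= candidate_t):
            return x
    return None

def cegis():
    examples = []                         # accumulated (input, correct label) pairs
    while True:
        t = learner(examples)
        if t is None:
            return None                   # no candidate in the class satisfies the spec
        cex = oracle(t)
        if cex is None:
            return t                      # verified candidate
        examples.append((cex, cex >= 42)) # oracle labels the counterexample

print(cegis())                            # converges to the threshold 42
```

In the setting discussed above, the exhaustive oracle would be replaced by a falsifier operating over a semantic feature space, and the simple learner by the ML training procedure.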
Safe Learning by Design: There has been considerable recent work on using design-time methods to analyze or constrain learning components so as to ensure safe operation within specified assumptions. A prominent example is safe learning-based control (e.g., [3, 28]). In this approach, a safety envelope is pre-computed and a learning algorithm is used to tune a controller within that envelope. Techniques for efficiently computing such safety envelopes based, for example, on reachability analysis [83], are needed. Relatedly, several methods have been proposed for safe reinforcement learning (see [34]). Another promising direction is to enforce properties on ML models through the use of semantic loss functions (e.g. [87, 25]), though this problem is largely unsolved. Finally, the use of theorem proving for ensuring correctness of algorithms used for training ML models (e.g. [72]) is also an important step towards improving the assurance in ML-based systems.
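A hedged sketch of the safety-envelope idea: for a toy one-dimensional system, the envelope is a fixed state interval, and any action proposed by a learning-based controller is filtered so that the successor state stays inside it. The dynamics, bounds, and controller stub are assumptions made for illustration, not the constructions of [3, 28] or [83].

```python
import random

# Toy system: x' = x + a, with actions a in [-1, 1].
SAFE_LOW, SAFE_HIGH = -5.0, 5.0           # pre-computed safety envelope on the state

def learned_controller(x):
    """Stand-in for a learned policy; it may propose unsafe actions."""
    return random.uniform(-1.0, 1.0)

def safety_filter(x, a):
    """Clamp the proposed action so the successor state remains in the envelope."""
    lower = max(-1.0, SAFE_LOW - x)       # smallest admissible action at x
    upper = min(1.0, SAFE_HIGH - x)       # largest admissible action at x
    return max(lower, min(a, upper))

x = 0.0
for _ in range(1000):
    x += safety_filter(x, learned_controller(x))
    assert SAFE_LOW <= x <= SAFE_HIGH     # the envelope is maintained as a run-time invariant
print("final state:", x)
```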
Run-Time Assurance: Due to the undecidability of verification in most instances and the challenge of environment modeling, we believe it will be difficult, if not impossible, to synthesize correct-by-construction AI-based systems or to formally verify correct operation without making restrictive assumptions. Therefore, design-time verification must be combined with run-time assurance, i.e., run-time verification and mitigation techniques. For example, the Simplex technique [78] provides one approach to combining a complex, but error-prone module with a safe, formally-verified backup module. Recent techniques for combining design-time and run-time assurance methods (e.g., [71, 19, 20]) have shown how unverified components, including those based on AI and ML, can be wrapped within a runtime assurance framework to provide guarantees of safe operation. However, the problems of extracting environment assumptions and synthesizing them into runtime monitors (e.g., as described for introspective environment modeling [76]) and devising runtime mitigation strategies remain largely unsolved.
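To make the run-time assurance pattern concrete, the following hedged sketch wraps an unverified "advanced" controller in a Simplex-style decision module in the spirit of [78]: the advanced action is accepted only if the next state stays well inside a safe region, and otherwise a verified baseline takes over. The system model, switching margin, and both controllers are illustrative assumptions.

```python
# Simplex-style runtime assurance for a toy system x' = x + a.
SAFE_LOW, SAFE_HIGH = -5.0, 5.0
MARGIN = 1.0                              # conservative switching boundary

def advanced_controller(x, t):
    """Unverified, high-performance module (deliberately aggressive here)."""
    return 3.0 if t % 7 == 0 else 1.0     # occasionally proposes a large step

def baseline_controller(x):
    """Verified fallback: gently steer toward the center of the safe region."""
    return max(-1.0, min(1.0, -0.5 * x))

def decision_module(x, a_advanced):
    """Accept the advanced action only if the next state stays well inside the envelope."""
    nxt = x + a_advanced
    if SAFE_LOW + MARGIN <= nxt <= SAFE_HIGH - MARGIN:
        return a_advanced
    return baseline_controller(x)

x = 0.0
for t in range(100):
    x += decision_module(x, advanced_controller(x, t))
    assert SAFE_LOW <= x <= SAFE_HIGH
print("stayed within the safety envelope; final state:", round(x, 2))
```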
- Environment (incl. Human) Modeling: Active Data-Driven, Introspective, Probabilistic Modeling
- Formal Specification: Start at System Level, Derive Component Specifications; Hybrid Boolean-Quantitative Specification; Specification Mining
- Abstractions, Explanations, Semantic Feature Spaces
- Compositional Reasoning, Controlled Randomization, Quantitative Semantic Analysis
- Formal Inductive Synthesis, Safe Learning by Design, Run-Time Assurance
Table 1: Summary of the five challenges for Verified AI presented in this paper, and the corresponding principles proposed to address them.
# 5 Conclusion
Taking a formal methods perspective, we have analyzed the challenge of developing and applying formal methods to systems that are substantially based on artificial intelligence or machine learning. As summarized in Table 1, we have identified five main challenges for applying formal methods to AI-based systems. For each of these five challenges, we have identified corresponding principles for design and verification that hold promise for addressing that challenge. Since the original version of this paper was published in 2016, several researchers including the authors have been working on addressing these challenges; a few sample advances are described in this paper. In particular, we have developed open-source tools, VerifAI [2] and Scenic [1], that implement techniques based on the principles described in this paper, and which have been applied to industrial-scale systems in the autonomous driving [33] and aerospace [30] domains. These results are but a start and much more remains to be done. The topic of Verified AI promises to continue to be a fruitful area for research in the years to come.
# Acknowledgments
The authors' work has been supported in part by NSF grants CCF-1139138, CCF-1116993, CNS-1545126 (VeHICaL), CNS-1646208, and CCF-1837132 (FMitF), by an NDSEG Fellowship, by the TerraSwarm Research Center, one of six centers supported by the STARnet phase of the Focus Center Research Program (FCRP), a Semiconductor Research Corporation program sponsored by MARCO and DARPA, by the DARPA BRASS and Assured Autonomy programs, by Toyota under the iCyPhy center, and by Berkeley Deep Drive. We gratefully acknowledge the many colleagues with whom our conversations and collaborations have helped shape this article.
# References
[1] Scenic Environment Modeling and Scenario Description Language. http://github.com/BerkeleyLearnVerify/Scenic.
[2] VerifAI: A toolkit for design and verification of AI-based systems. http://github.com/BerkeleyLearnVerify/VerifAI.
[3] Anayo K. Akametalu, Jaime F. Fisac, Jeremy H. Gillula, Shahab Kaynama, Melanie N. Zeilinger, and Claire J. Tomlin. Reachability-based safe learning with Gaussian processes. In 53rd IEEE Conference on Decision and Control, pages 1424–1431, 2014.
[4] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
[5] Thanassis Avgerinos, Sang Kil Cha, Alexandre Rebert, Edward J. Schwartz, Maverick Woo, and David Brumley. Automatic exploit generation. Commun. ACM, 57(2):74–84, 2014.
[6] Clark Barrett, Roberto Sebastiani, Sanjit A. Seshia, and Cesare Tinelli. Satisfiability modulo theories. In Armin Biere, Hans van Maaren, and Toby Walsh, editors, Handbook of Satisfiability, volume 4, chapter 8. IOS Press, 2009.
[7] I. Beer, S. Ben-David, C. Eisner, and Y. Rodeh. Efficient detection of vacuity in ACTL formulas. Formal Methods in System Design, 18(2):141–162, 2001.
[8] In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 194–199. Springer, 2015.
[9] Randal E. Bryant. Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers, C-35(8):677–691, August 1986.
[10] Andrea Censi, Konstantin Slutsky, Tichakorn Wongpiromsarn, Dmitry Yershov, Scott Pendleton, James Fu, and Emilio Frazzoli. Liability, ethics, and culture-aware behavior specification using rulebooks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8536–8542. IEEE, 2019.
[11] Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, and Moshe Y. Vardi. Distribution-aware sampling and weighted model counting for SAT. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI), pages 1722–1730, July 2014.
[12] Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, and Moshe Y. Vardi. On parallel scalable uniform SAT witness generation. In Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pages 304–319, April 2015.
[13] Krishnendu Chatterjee, Laurent Doyen, and Thomas A. Henzinger. Quantitative languages. ACM Transactions on Computational Logic (TOCL), 11(4):23, 2010.
[14] Edmund M. Clarke and E. Allen Emerson. Design and synthesis of synchronization skeletons using branching-time temporal logic. In Logic of Programs, pages 52–71, 1981.
[15] Edmund M. Clarke, Orna Grumberg, and Doron A. Peled. Model Checking. MIT Press, 2000.
[16] Edmund M. Clarke and Jeannette M. Wing. Formal methods: State of the art and future directions. ACM Computing Surveys (CSUR), 28(4):626–643, 1996.
[17] Committee on Information Technology, Automation, and the U.S. Workforce. Information technology and the U.S. workforce: Where are we and where do we go from here? http://www.nap.edu/24649.
[18] Christian Dehnert, Sebastian Junges, Joost-Pieter Katoen, and Matthias Volk. A Storm is coming: A modern probabilistic model checker. In International Conference on Computer Aided Verification (CAV), pages 592–600. Springer, 2017.
[19] Ankush Desai, Tommaso Dreossi, and Sanjit A. Seshia. Combining model checking and runtime verification for safe robotics. In Runtime Verification - 17th International Conference, RV 2017, Seattle, WA, USA, September 13-16, 2017, Proceedings, pages 172–189, 2017.
[20] Ankush Desai, Shromona Ghosh, Sanjit A. Seshia, Natarajan Shankar, and Ashish Tiwari. A runtime assurance framework for programming safe robotics systems. In IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), June 2019.
[21] Thomas G. Dietterich and Eric J. Horvitz. Rise of concerns about AI: reflections and directions. Communications of the ACM, 58(10):38–40, 2015.
[22] Tommaso Dreossi, Alexandre Donzé, and Sanjit A. Seshia. Compositional falsification of cyber-physical systems with machine learning components. In Proceedings of the NASA Formal Methods Conference (NFM), May 2017.
[23] Tommaso Dreossi, Daniel J. Fremont, Shromona Ghosh, Edward Kim, Hadi Ravanbakhsh, Marcell Vazquez-Chanlatte, and Sanjit A. Seshia. VerifAI: A toolkit for the formal design and analysis of artificial intelligence-based systems. In 31st International Conference on Computer Aided Verification (CAV), July 2019.
[24] Tommaso Dreossi, Shromona Ghosh, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. A formalization of robustness for deep neural networks. In Proceedings of the AAAI Spring Symposium Workshop on Verification of Neural Networks (VNN), March 2019.
[25] Tommaso Dreossi, Somesh Jha, and Sanjit A. Seshia. Semantic adversarial deep learning. In 30th International Conference on Computer Aided Verification (CAV), 2018.
[26] Michael Ernst. Dynamically Discovering Likely Program Invariants. PhD thesis, University of Washington, Seattle, 2000.
[27] Georgios E. Fainekos. Automotive control design bug-finding with the S-TaLiRo tool. In American Control Conference (ACC), page 4096, 2015.
[28] Jaime F. Fisac, Anayo K. Akametalu, Melanie N. Zeilinger, Shahab Kaynama, Jeremy Gillula, and Claire J. Tomlin. A general safety framework for learning-based control in uncertain robotic systems. IEEE Transactions on Automatic Control, 64(7):2737–2752, 2018.
[29] Harry Foster. Applied Assertion-Based Verification: An Industry Perspective. Now Publishers Inc., 2009.
[30] Daniel J. Fremont, Johnathan Chiu, Dragos D. Margineantu, Denis Osipychev, and Sanjit A. Seshia. Formal analysis and redesign of a neural network-based aircraft taxiing system with VerifAI. In 32nd International Conference on Computer-Aided Verification (CAV), pages 122–134, 2020.
[31] Daniel J. Fremont, Alexandre Donzé, Sanjit A. Seshia, and David Wessel. Control improvisation. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015), pages 463–474, 2015.
[32] Daniel J. Fremont, Tommaso Dreossi, Shromona Ghosh, Xiangyu Yue, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. Scenic: A language for scenario specification and scene generation. In Proceedings of the 40th annual ACM SIGPLAN conference on Programming Language Design and Implementation (PLDI), June 2019.
[33] Daniel J. Fremont, Edward Kim, Yash Vardhan Pant, Sanjit A. Seshia, Atul Acharya, Xantha Bruso, Paul Wells, Steve Lemke, Qiang Lu, and Shalin Mehta. Formal scenario-based testing of autonomous vehicles: From simulation to the real world. In IEEE Intelligent Transportation Systems Conference (ITSC), 2020.
[34] Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437–1480, 2015.
[35] Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. AI2: Safety and robustness certification of neural networks with abstract interpretation. In IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE, 2018.
[36] Ian Goodfellow, Patrick McDaniel, and Nicolas Papernot. Making machine learning robust against adversarial inputs. Communications of the ACM, 61(7):56–66, 2018.
[37] M. J. C. Gordon and T. F. Melham. Introduction to HOL: A Theorem Proving Environment for Higher-Order Logic. Cambridge University Press, 1993.
[38] Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. Safety verification of deep neural networks. In International Conference on Computer Aided Verification, pages 3–29. Springer, 2017.
[39] S. Jha and S. A. Seshia. A Theory of Formal Synthesis via Inductive Learning. ArXiv e-prints, May 2015.
[40] Susmit Jha, Tuhin Sahai, Vasumathi Raman, Alessandro Pinto, and Michael Francis. Explaining AI decisions using efficient methods for learning sparse Boolean formulae. J. Autom. Reasoning, 63(4):1055–1075, 2019.
[41] Susmit Jha and Sanjit A. Seshia. A Theory of Formal Synthesis via Inductive Learning. Acta Informatica, 2017.
[42] Xiaoqing Jin, Alexandre Donzé, Jyotirmoy Deshmukh, and Sanjit A. Seshia. Mining requirements from closed-loop control models. IEEE Transactions on Computer-Aided Design of Circuits and Systems, 34(11):1704–1717, 2015.
[43] Matt Kaufmann, Panagiotis Manolios, and J. Strother Moore. Computer-Aided Reasoning: An Approach. Kluwer Academic Publishers, 2000.
[44] Nathan Kitchen and Andreas Kuehlmann. Stimulus generation for constrained random simulation. In Proceedings of the 2007 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 258–265. IEEE Press, 2007.
[45] Marta Kwiatkowska, Gethin Norman, and David Parker. PRISM 4.0: Verification of probabilistic real-time systems. In International Conference on Computer Aided Verification (CAV), pages 585–591. Springer, 2011.
[46] Tzu-Mao Li, Miika Aittala, Frédo Durand, and Jaakko Lehtinen. Differentiable Monte Carlo ray tracing through edge sampling. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 37(6):222:1–222:11, 2018.
[47] Wenchao Li. Specification Mining: New Formalisms, Algorithms and Applications. PhD thesis, EECS Department, University of California, Berkeley, Mar 2014.
[48] Wenchao Li, Dorsa Sadigh, S. Shankar Sastry, and Sanjit A. Seshia. Synthesis for human-in-the-loop control systems. In Proceedings of the 20th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pages 470–484, April 2014.
[49] Oded Maler and Dejan Nickovic. Monitoring temporal properties of continuous signals. In FORMATS/FTRTFT, pages 152–166, 2004.
[50] Sharad Malik and Lintao Zhang. Boolean satisfiability: From theoretical hardness to practical success. Communications of the ACM (CACM), 52(8):76–82, 2009.
[51] Kuldeep S. Meel, Moshe Y. Vardi, Supratik Chakraborty, Daniel J. Fremont, Sanjit A. Seshia, Dror Fried, Alexander Ivrii, and Sharad Malik. Constrained sampling and counting: Universal hashing meets SAT solving. In Beyond NP, Papers from the 2016 AAAI Workshop, Phoenix, Arizona, USA, February 12, 2016.
[52] Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L. Ong, and Andrey Kolobov. BLOG: Probabilistic models with unknown objects. Statistical Relational Learning, page 373, 2007.
[53] Tom M. Mitchell. Machine Learning. McGraw-Hill, 1997.
[54] Tom M. Mitchell, Richard M. Keller, and Smadar T. Kedar-Cabelli. Explanation-based generalization: A unifying view. Machine Learning, 1(1):47–80, 1986.
[55] Andrew Y. Ng and Stuart J. Russell. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML), pages 663–670, 2000.
[56] A. Nilim and L. El Ghaoui. Robust Control of Markov Decision Processes with Uncertain Transition Matrices. Journal of Operations Research, pages 780–798, 2005.
[57] Pierluigi Nuzzo, Jiwei Li, Alberto L. Sangiovanni-Vincentelli, Yugeng Xi, and Dewei Li. Stochastic assume-guarantee contracts for cyber-physical system design. ACM Trans. Embed. Comput. Syst., 18(1), January 2019.
[58] S. Owre, J. M. Rushby, and N. Shankar. PVS: A prototype verification system. In Deepak Kapur, editor, 11th International Conference on Automated Deduction (CADE), volume 607 of Lecture Notes in Artificial Intelligence, pages 748–752. Springer-Verlag, June 1992.
[59] Judea Pearl. The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3):54–60, 2019.
[60] Amir Pnueli and Roni Rosner. On the synthesis of a reactive module. In Conference Record of the Sixteenth Annual ACM Symposium on Principles of Programming Languages, Austin, Texas, USA, January 11-13, 1989, pages 179–190, 1989.
[61] Alberto Puggelli, Wenchao Li, Alberto Sangiovanni-Vincentelli, and Sanjit A. Seshia. Polynomial-time verification of PCTL properties of MDPs with convex uncertainties. In Proceedings of the 25th International Conference on Computer-Aided Verification (CAV), July 2013.
[62] Jean-Pierre Queille and Joseph Sifakis. Specification and verification of concurrent systems in CESAR. In Symposium on Programming, number 137 in LNCS, pages 337–351, 1982.
[63] John Rushby. Using model checking to help discover mode confusions and other automation surprises. Reliability Engineering & System Safety, 75(2):167–177, 2002.
[64] Stuart Russell, Tom Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Demis Hassabis, Shane Legg, Mustafa Suleyman, Dileep George, and Scott Phoenix. Letter to the editor: Research priorities for robust and beneficial artificial intelligence: An open letter. AI Magazine, 36(4), 2015.
[65] Stuart J. Russell. Rationality and intelligence. Artificial Intelligence, 94(1-2):57–77, 1997.
[66] Stuart Jonathan Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2010.
[67] Dorsa Sadigh, Katherine Driggs-Campbell, Alberto Puggelli, Wenchao Li, Victor Shia, Ruzena Bajcsy, Alberto L. Sangiovanni-Vincentelli, S. Shankar Sastry, and Sanjit A. Seshia. Data-driven probabilistic modeling and verification of human driver behavior. In Formal Verification and Modeling in Human-Machine Systems, AAAI Spring Symposium, March 2014.
[68] Dorsa Sadigh and Ashish Kapoor. Safe control under uncertainty with probabilistic signal temporal logic. In Proceedings of Robotics: Science and Systems, Ann Arbor, Michigan, June 2016.
[69] Dorsa Sadigh, Shankar Sastry, Sanjit A. Seshia, and Anca D. Dragan. Information gathering actions over human internal state. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016.
[70] Alberto Sangiovanni-Vincentelli, Werner Damm, and Roberto Passerone. Taming Dr. Frankenstein: Contract-based design for cyber-physical systems. European Journal of Control, 18(3):217–238, 2012.
[71] John D. Schierman, Michael D. DeVore, Nathan D. Richards, Neha Gandhi, Jared K. Cooper, Kenneth R. Horneman, Scott Stoller, and Scott Smolka. Runtime assurance framework development for highly adaptive flight control systems. Technical report, Barron Associates, Inc., Charlottesville, 2015.
[72] Daniel Selsam, Percy Liang, and David L. Dill. Developing bug-free machine learning systems with formal mathematics. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70 of Proceedings of Machine Learning Research, pages 3047–3056. PMLR, 2017.
[73] Sanjit A. Seshia. Sciduction: Combining induction, deduction, and structure for verification and synthesis. In Proceedings of the Design Automation Conference (DAC), pages 356–365, June 2012.
[74] Sanjit A. Seshia. Combining induction, deduction, and structure for verification and synthesis. Proceedings of the IEEE, 103(11):2036–2051, 2015.
[75] Sanjit A. Seshia. Compositional verification without compositional specification for learning-based systems. Technical Report UCB/EECS-2017-164, EECS Department, University of California, Berkeley, Nov 2017.
[76] Sanjit A. Seshia. Introspective environment modeling. In 19th International Conference on Runtime Verification (RV), pages 15–26, 2019.
[77] Sanjit A. Seshia, Ankush Desai, Tommaso Dreossi, Daniel Fremont, Shromona Ghosh, Edward Kim, Sumukh Shivakumar, Marcell Vazquez-Chanlatte, and Xiangyu Yue. Formal specification for deep neural networks. In Proceedings of the International Symposium on Automated Technology for Verification and Analysis (ATVA), pages 20–34, October 2018.
[78] Lui Sha. Using simplicity to control complexity. IEEE Software, 18(4):20–28, 2001.
[79] Yasser Shoukry, Pierluigi Nuzzo, Alberto Sangiovanni-Vincentelli, Sanjit A. Seshia, George J. Pappas, and Paulo Tabuada. SMC: Satisfiability modulo convex optimization. In Proceedings of the 10th International Conference on Hybrid Systems: Computation and Control (HSCC), April 2017.
[80] Joseph Sifakis. System design automation: Challenges and limitations. Proceedings of the IEEE, 103(11):2093–2103, 2015.
[81] Herbert A. Simon. Bounded rationality. In Utility and Probability, pages 15–18. Springer, 1990.
[82] Armando Solar-Lezama, Liviu Tancau, Rastislav Bodík, Sanjit A. Seshia, and Vijay A. Saraswat. Combinatorial sketching for finite programs. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 404–415. ACM Press, October 2006.
[83] Claire Tomlin, Ian Mitchell, Alexandre M. Bayen, and Meeko Oishi. Computational techniques for the verification of hybrid systems. Proceedings of the IEEE, 91(7):986–1001, 2003.
[84] Marcell Vazquez-Chanlatte, Jyotirmoy V. Deshmukh, Xiaoqing Jin, and Sanjit A. Seshia. Logical clustering and learning for time-series data. In 29th International Conference on Computer Aided Verification (CAV), pages 305–325, 2017.
[85] Marcell Vazquez-Chanlatte, Susmit Jha, Ashish Tiwari, Mark K. Ho, and Sanjit A. Seshia. Learning task specifications from demonstrations. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems (NeurIPS), pages 5372–5382, December 2018.
[86] Jeannette M. Wing. A specifier's introduction to formal methods. IEEE Computer, 23(9):8–24, September 1990.
[87] Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Van den Broeck. A semantic loss function for deep learning with symbolic knowledge. In Proceedings of the 35th International Conference on Machine Learning (ICML), volume 80 of Proceedings of Machine Learning Research, pages 5498–5507. PMLR, 2018.
[88] Tomoya Yamaguchi, Tomoyuki Kaga, Alexandre Donzé, and Sanjit A. Seshia. Combining requirement mining, software model checking, and simulation-based verification for industrial automotive systems. Technical Report UCB/EECS-2016-124, EECS Department, University of California, Berkeley, June 2016.
[89] Xiaojin Zhu, Adish Singla, Sandra Zilles, and Anna N. Rafferty. An overview of machine teaching. arXiv preprint arXiv:1801.05927, 2018.
# Sequence-Level Knowledge Distillation
Yoon Kim, Alexander M. Rush
School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
# Abstract
Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13× fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
# 1 Introduction
Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015) is a deep learning-based method for translation that has recently shown promising results as an alternative to statistical approaches. NMT systems directly model the probability of the next word in the target sentence simply by conditioning a recurrent neural network on the source sentence and previously generated target words.
While both simple and surprisingly accurate, NMT systems typically need to have very high capacity in order to perform well: Sutskever et al. (2014) used a 4-layer LSTM with 1000 hidden units per layer (herein 4×1000) and Zhou et al. (2016) obtained state-of-the-art results on English → French with a 16-layer LSTM with 512 units per layer. The sheer size of the models requires cutting-edge hardware for training and makes using the models on standard setups very challenging.
This issue of excessively large networks has been observed in several other domains, with much focus on fully-connected and convolutional networks for multi-class classification. Researchers have particularly noted that large networks seem to be necessary for training, but learn redundant representations in the process (Denil et al., 2013). Therefore compressing deep models into smaller networks has been an active area of research. As deep learning systems obtain better results on NLP tasks, compression also becomes an important practical issue with applications such as running deep learning models for speech and translation locally on cell phones.
Existing compression methods generally fall into two categories: (1) pruning and (2) knowledge distillation. Pruning methods (LeCun et al., 1990; He et al., 2014; Han et al., 2016) zero-out weights or entire neurons based on an importance criterion: LeCun et al. (1990) use (a diagonal approximation to) the Hessian to identify weights whose removal minimally impacts the objective function, while Han et al. (2016) remove weights based on thresholding their absolute values. Knowledge distillation approaches (Bucila et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015) learn a smaller student network to mimic the original teacher network by minimizing the loss (typically L2 or cross-entropy) between the student and teacher output.
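The magnitude-based criterion attributed to Han et al. (2016) above amounts to zeroing weights whose absolute value falls below a cutoff; the short sketch below illustrates just that thresholding step on a made-up weight matrix, not the full prune-and-retrain pipeline of the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))                  # a stand-in weight matrix

threshold = 0.8                              # illustrative magnitude cutoff
mask = np.abs(W) >= threshold                # keep only large-magnitude weights
W_pruned = W * mask

sparsity = 1.0 - mask.mean()
print(f"pruned {sparsity:.0%} of the weights")
```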
In this work, we investigate knowledge distillation in the context of neural machine translation. We note that NMT differs from previous work which has mainly explored non-recurrent models in the multi-class prediction setting. For NMT, while the model is trained on multi-class prediction at the word-level, it is tasked with predicting complete sequence outputs conditioned on previous decisions. With this difference in mind, we experiment with standard knowledge distillation for NMT and also propose two new versions of the approach that attempt to approximately match the sequence-level (as opposed to word-level) distribution of the teacher network. This sequence-level approximation leads to a simple training procedure wherein the student network is trained on a newly generated dataset that is the result of running beam search with the teacher network.
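The sequence-level procedure described above can be sketched in a few lines: decode each source sentence with the teacher's beam search and use the resulting output as the student's training target. The teacher_beam_search stub and the toy corpus below are placeholders so the sketch runs; they are not the models or data used in the paper.

```python
# Sketch of sequence-level knowledge distillation data generation.
# `teacher_beam_search` stands in for running the trained teacher with beam
# search; here it just upper-cases tokens so the script is runnable.

def teacher_beam_search(source_tokens, beam_size=5):
    return [tok.upper() for tok in source_tokens]      # stub "translation"

def build_distillation_corpus(source_corpus):
    distilled = []
    for src in source_corpus:
        hyp = teacher_beam_search(src)                  # teacher's best output
        distilled.append((src, hyp))                    # replaces the gold target
    return distilled

source_corpus = [["ein", "kleines", "haus"], ["guten", "morgen"]]
for src, hyp in build_distillation_corpus(source_corpus):
    print(src, "->", hyp)
# The student is then trained with ordinary NLL on these (source, teacher output)
# pairs instead of the original references.
```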
We run experiments to compress a large state-of-the-art 4 × 1000 LSTM model, and find that with sequence-level knowledge distillation we are able to learn a 2 × 500 LSTM that roughly matches the performance of the full system. We see similar results compressing a 2 × 500 model down to 2 × 100 on a smaller data set. Furthermore, we observe that our proposed approach has other benefits, such as not requiring any beam search at test-time. As a result we are able to perform greedy decoding on the 2 × 500 model 10 times faster than beam search on the 4 × 1000 model with comparable performance. Our student models can even be run efficiently on a standard smartphone.¹ Finally, we apply weight pruning on top of the student network to obtain a model that has 13× fewer parameters than the original teacher model. We have released all the code for the models described in this paper.²
¹https://github.com/harvardnlp/nmt-android
²https://github.com/harvardnlp/seq2seq-attn
# 2 Background
# 2.1 Sequence-to-Sequence with Attention
Let s = [s_1, . . . , s_I] and t = [t_1, . . . , t_J] be (random variable sequences representing) the source/target sentence, with I and J respectively being the source/target lengths. Machine translation involves finding the most probable target sentence given the source:
$$\operatorname*{argmax}_{t \in \mathcal{T}} \; p(t \mid s)$$
where T is the set of all possible sequences. NMT models parameterize p(t | s) with an encoder neural network which reads the source sentence and a decoder neural network which produces a distribution over the target sentence (one word at a time) given the source. We employ the attentional architecture from Luong et al. (2015), which achieved state-of-the-art results on English → German translation.³
# 2.2 Knowledge Distillation
1606.07947 | 7 | # 2.2 Knowledge Distillation
Knowledge distillation describes a class of methods for training a smaller student network to perform better by learning from a larger teacher network (in addition to learning from the training data set). We generally assume that the teacher has previously been trained, and that we are estimating parameters for the student. Knowledge distillation suggests training by matching the student's predictions to the teacher's predictions. For classification this usually means matching the probabilities either via L2 on the log scale (Ba and Caruana, 2014) or by cross-entropy (Li et al., 2014; Hinton et al., 2015).
Concretely, assume we are learning a multi-class classifier over a data set of examples of the form (x, y) with possible classes V. The usual training criterion is to minimize NLL for each example from the training data,
$L_{\text{NLL}}(\theta) = -\sum_{k=1}^{|\mathcal{V}|} \mathbb{1}\{y = k\} \log p(y = k \mid x; \theta)$
where 1{·} is the indicator function and p the distribution from our model (parameterized by θ).
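For concreteness, here is a minimal sketch (with toy list-based distributions, not the paper's implementation) of the two cross-entropy terms discussed in this section: the NLL term against the one-hot data distribution and the distillation term against the teacher's soft distribution:

```python
import math

def nll_loss(student_probs, gold_class):
    # Cross-entropy with the degenerate (one-hot) data distribution.
    return -math.log(student_probs[gold_class])

def kd_loss(student_probs, teacher_probs):
    # Cross-entropy with the teacher's soft distribution over the |V| classes.
    return -sum(q * math.log(p) for p, q in zip(student_probs, teacher_probs))
```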
3Specifically, we use the global-general attention model with the input-feeding approach. We refer the reader to the original paper for further details. | 1606.07947#7 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 8 | 3Speciï¬cally, we use the global-general attention model with the input-feeding approach. We refer the reader to the orig- inal paper for further details.
[Figure 1: three diagrams contrasting Word-Level Knowledge Distillation, Sequence-Level Knowledge Distillation, and Sequence-Level Interpolation, each showing the ground truth, the teacher network, and the student network; the caption follows.]
Figure 1: Overview of the different knowledge distillation approaches. In word-level knowledge distillation (left) cross-entropy is minimized between the student/teacher distributions (yellow) for each word in the actual target sequence (ECD), as well as between the student distribution and the degenerate data distribution, which has all of its probability mass on one word (black). In sequence-level knowledge distillation (center) the student network is trained on the output from beam search of the teacher network that had the highest score (ACF). In sequence-level interpolation (right) the student is trained on the output from beam search of the teacher network that had the highest sim with the target sequence (ECE). | 1606.07947#8 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 9 | This objective can be seen as minimizing the cross-entropy between the degenerate data distribution (which has all of its probability mass on one class) and the model distribution p(y | x; θ).
Since this new objective has no direct term for the training data, it is common practice to interpolate between the two losses,
In knowledge distillation, we assume access to a learned teacher distribution q(y | x; θT), possibly trained over the same data set. Instead of minimizing cross-entropy with the observed data, we instead minimize the cross-entropy with the teacher's probability distribution,
$L(\theta; \theta_T) = (1 - \alpha) L_{\text{NLL}}(\theta) + \alpha L_{\text{KD}}(\theta; \theta_T)$
where α is a mixture parameter combining the one-hot distribution and the teacher distribution.
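A minimal sketch of this interpolated objective for a single example (toy list-based distributions; names are illustrative, not from the released code):

```python
import math

def mixed_loss(student_probs, teacher_probs, gold_class, alpha=0.5):
    # (1 - alpha) * L_NLL + alpha * L_KD for one training example.
    nll = -math.log(student_probs[gold_class])
    kd = -sum(q * math.log(p) for p, q in zip(student_probs, teacher_probs))
    return (1 - alpha) * nll + alpha * kd
```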
# 3 Knowledge Distillation for NMT
$L_{\text{KD}}(\theta; \theta_T) = -\sum_{k=1}^{|\mathcal{V}|} q(y = k \mid x; \theta_T) \log p(y = k \mid x; \theta)$
The large sizes of neural machine translation systems make them ideal candidates for knowledge distillation approaches. In this section we explore three different ways this technique can be applied to NMT. | 1606.07947#9 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 10 | The large sizes of neural machine translation sys- tems make them an ideal candidate for knowledge distillation approaches. In this section we explore three different ways this technique can be applied to NMT.
where θT parameterizes the teacher distribution and remains fixed. Note the cross-entropy setup is identical, but the target distribution is no longer a sparse distribution.4 Training on q(y | x; θT) is attractive since it gives more information about other classes for a given data point (e.g. similarity between classes) and has less variance in gradients (Hinton et al., 2015).
4 In some cases the entropy of the teacher/student distribution is increased by annealing it with a temperature term τ > 1
# 3.1 Word-Level Knowledge Distillation
NMT systems are trained directly to minimize word NLL, LWORD-NLL, at each position. Therefore if we have a teacher model, standard knowledge distillation for multi-class cross-entropy can be applied. We define this distillation for a sentence as,
$L_{\text{WORD-KD}} = -\sum_{j=1}^{J} \sum_{k=1}^{|\mathcal{V}|} q(t_j = k \mid s, t_{<j}) \log p(t_j = k \mid s, t_{<j})$
$\tilde{p}(y \mid x) \propto p(y \mid x)^{1/\tau}$ | 1606.07947#10 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 11 | # log p(tj = k | s, t<j)
$\tilde{p}(y \mid x) \propto p(y \mid x)^{1/\tau}$
After testing τ ∈ {1, 1.5, 2} we found that τ = 1 worked best.
where V is the target vocabulary set. The student can further be trained to optimize the mixture of
LWORD-KD and LWORD-NLL. In the context of NMT, we refer to this approach as word-level knowledge distillation and illustrate this in Figure 1 (left).
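As an illustration (not the released Torch code), the word-level term sums the per-position teacher/student cross-entropy under teacher forcing on the same target prefix:

```python
import math

def word_kd_sentence_loss(student_dists, teacher_dists):
    # student_dists[j][k] ~ p(t_j = k | s, t_<j); teacher_dists[j][k] ~ q(t_j = k | s, t_<j).
    loss = 0.0
    for p_j, q_j in zip(student_dists, teacher_dists):
        loss -= sum(q * math.log(p) for p, q in zip(p_j, q_j))
    return loss
```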
# 3.2 Sequence-Level Knowledge Distillation
Word-level knowledge distillation allows transfer of these local word distributions. Ideally however, we would like the student model to mimic the teacher's actions at the sequence-level. The sequence distribution is particularly important for NMT, because wrong predictions can propagate forward at test-time.
First, consider the sequence-level distribution specified by the model over all possible sequences t ∈ T,
$p(t \mid s) = \prod_{j=1}^{J} p(t_j \mid s, t_{<j})$
for any length J. The sequence-level negative log-likelihood for NMT then involves matching the one-hot distribution over all complete sequences, | 1606.07947#11 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 12 | # âequence-tevel
for any length J. The sequence-level negative log- likelihood for NMT then involves matching the one- hot distribution over all complete sequences,
$L_{\text{SEQ-NLL}} = -\sum_{t \in \mathcal{T}} \mathbb{1}\{t = y\} \log p(t \mid s) = -\sum_{j=1}^{J} \sum_{k=1}^{|\mathcal{V}|} \mathbb{1}\{y_j = k\} \log p(t_j = k \mid s, t_{<j}) = L_{\text{WORD-NLL}}$
where y = [y1, . . . , yJ] is the observed sequence. Of course, this just shows that from a negative log-likelihood perspective, minimizing word-level NLL and sequence-level NLL are equivalent in this model.
But now consider the case of sequence-level knowledge distillation. As before, we can simply replace the distribution from the data with a probability distribution derived from our teacher model. However, instead of using a single word prediction, we use q(t | s) to represent the teacher's sequence distribution over the sample space of all possible sequences,
$L_{\text{SEQ-KD}} = -\sum_{t \in \mathcal{T}} q(t \mid s) \log p(t \mid s)$
Note that LSEQ-KD is inherently different from LWORD-KD, as the sum is over an exponential number of terms. Despite its intractability, we posit | 1606.07947#12 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 13 | Note that LSEQ-KD is inherently different from LWORD-KD, as the sum is over an exponential num- ber of terms. Despite its intractability, we posit
that this sequence-level objective is worthwhile. It gives the teacher the chance to assign probabilities to complete sequences and therefore transfer a broader range of knowledge. We thus consider an approximation of this objective.
Our simplest approximation is to replace the teacher distribution q with its mode,
$q(t \mid s) \sim \mathbb{1}\{t = \arg\max_{t \in \mathcal{T}} q(t \mid s)\}$
Observing that finding the mode is itself intractable, we use beam search to find an approximation. The loss is then
$L_{\text{SEQ-KD}} \approx -\sum_{t \in \mathcal{T}} \mathbb{1}\{t = \hat{y}\} \log p(t \mid s) = -\log p(t = \hat{y} \mid s)$
where ŷ is now the output from running beam search with the teacher model. | 1606.07947#13 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 14 | where Ëy is now the output from running beam search with the teacher model.
Using the mode seems like a poor approximation for the teacher distribution q(t | s), as we are approximating an exponentially-sized distribution with a single sample. However, previous results showing the effectiveness of beam search decoding for NMT lead us to believe that a large portion of q's mass lies in a single output sequence. In fact, in experiments we find that with a beam of size 1, q(ŷ | s) (on average) accounts for 1.3% of the distribution for German → English, and 2.3% for Thai → English (Table 1: p(t = ŷ)).5
To summarize, sequence-level knowledge distillation suggests to: (1) train a teacher model, (2) run beam search over the training set with this model, (3) train the student network with cross-entropy on this new dataset. Step (3) is identical to the word-level NLL process except now on the newly-generated data set. This is shown in Figure 1 (center).
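A rough sketch of the data-generation step (2), assuming a hypothetical `teacher_beam_search` helper that returns the highest-scoring hypothesis (the approximate mode of q(t | s)):

```python
def build_seq_kd_data(teacher_beam_search, sources, beam_size=5):
    # The student is then trained with ordinary NLL on these (source, hypothesis) pairs.
    return [(src, teacher_beam_search(src, beam_size)) for src in sources]
```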
5Additionally there are simple ways to better approximate q(t | s). One way would be to consider a K-best list from beam search and renormalizing the probabilities, | 1606.07947#14 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 15 | 5Additionally there are simple ways to better approximate q(t | s). One way would be to consider a K-best list from beam search and renormalizing the probabilities,
$\tilde{q}(t \mid s) = \frac{q(t \mid s)}{\sum_{t \in \mathcal{T}_K} q(t \mid s)}$
where TK is the K-best list from beam search. This would increase the training set by a factor of K. A beam of size 5 captures 2.8% of the distribution for German → English, and 3.8% for Thai → English. Another alternative is to use a Monte Carlo estimate and sample from the teacher model (since LSEQ-KD = E_{t∼q(t | s)}[−log p(t | s)]). However in practice we found the (approximate) mode to work well.
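A minimal sketch of this renormalization over the K-best list (illustrative helper, not from the released code):

```python
def renormalize_k_best(k_best):
    # k_best: list of (hypothesis, q(t | s)) pairs returned by beam search.
    total = sum(score for _, score in k_best)
    return [(hyp, score / total) for hyp, score in k_best]
```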
# 3.3 Sequence-Level Interpolation
Next we consider integrating the training data back into the process, such that we train the student model as a mixture of our sequence-level teacher- generated data (LSEQ-KD) with the original training data (LSEQ-NLL),
$L = (1 - \alpha) L_{\text{SEQ-NLL}} + \alpha L_{\text{SEQ-KD}} = -(1 - \alpha) \log p(y \mid s) - \alpha \sum_{t \in \mathcal{T}} q(t \mid s) \log p(t \mid s)$
where y is the gold target sequence. | 1606.07947#15 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 16 | where y is the gold target sequence.
Since the second term is intractable, we could again apply the mode approximation from the pre- vious section,
$L = -(1 - \alpha) \log p(y \mid s) - \alpha \log p(\hat{y} \mid s)$
and train on both observed (y) and teacher-generated (ŷ) data. However, this process is non-ideal for two reasons: (1) unlike for standard knowledge distillation, it doubles the size of the training data, and (2) it requires training on both the teacher-generated sequence and the true sequence, conditioned on the same source input. The latter concern is particularly problematic since we observe that y and ŷ are often quite different.
As an alternative, we propose a single-sequence approximation that is more attractive in this setting. This approach is inspired by local updating (Liang et al., 2006), a method for discriminative training in statistical machine translation (although to our knowledge not for knowledge distillation). Local updating suggests selecting a training sequence which is close to y and has high probability under the teacher model,
$\tilde{y} = \arg\max_{t \in \mathcal{T}} \, \text{sim}(t, y) \, q(t \mid s)$ | 1606.07947#16 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 17 | $\tilde{y} = \arg\max_{t \in \mathcal{T}} \, \text{sim}(t, y) \, q(t \mid s)$
where sim is a function measuring closeness (e.g. Jaccard similarity or BLEU (Papineni et al., 2002)). Following local updating, we can approximate this sequence by running beam search and choosing
$\tilde{y} \approx \arg\max_{t \in \mathcal{T}_K} \, \text{sim}(t, y)$
where TK is the K-best list from beam search. We take sim to be smoothed sentence-level BLEU (Chen and Cherry, 2014).
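A small sketch of this selection step (the `sentence_bleu` argument stands in for the smoothed sentence-level BLEU of Chen and Cherry (2014); names are illustrative):

```python
def seq_inter_target(k_best, gold, sentence_bleu):
    # Pick the hypothesis on the teacher's beam that is closest to the gold target.
    return max(k_best, key=lambda hyp: sentence_bleu(hyp, gold))
```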
We justify training on ỹ from a knowledge distillation perspective with the following generative process: suppose that there is a true target sequence (which we do not observe) that is first generated from the underlying data distribution D. And further suppose that the target sequence that we observe (y) is a noisy version of the unobserved true sequence: i.e. (i) t ∼ D, (ii) y ∼ ε(t), where ε(t) is, for example, a noise function that independently replaces each element in t with a random element in V with some small probability.6 In such a case, ideally the student's distribution should match the mixture distribution,
$\mathcal{D}_{\text{SEQ-Inter}} \sim (1 - \alpha) \mathcal{D} + \alpha \, q(t \mid s)$ | 1606.07947#17 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 18 | $\mathcal{D}_{\text{SEQ-Inter}} \sim (1 - \alpha) \mathcal{D} + \alpha \, q(t \mid s)$
In this setting, due to the noise assumption, D now has significant probability mass around a neighborhood of y (not just at y), and therefore the argmax of the mixture distribution is likely something other than y (the observed sequence) or ŷ (the output from beam search). We can see that ỹ is a natural approximation to the argmax of this mixture distribution between D and q(t | s) for some α. We illustrate this framework in Figure 1 (right) and visualize the distribution over a real example in Figure 2.
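As a toy illustration of the noise model ε(t) described above (purely illustrative; a realistic noise function would also include reordering, synonym replacement, etc.):

```python
import random

def noisy_observation(t, vocab, eps=0.05):
    # Independently replace each token with a random vocabulary item with probability eps.
    return [w if random.random() > eps else random.choice(vocab) for w in t]
```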
# 4 Experimental Setup
To test out these approaches, we conduct two sets of NMT experiments: high resource (English → German) and low resource (Thai → English). | 1606.07947#18 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 19 | # 4 Experimental Setup
To test out these approaches, we conduct two sets of NMT experiments: high resource (English → German) and low resource (Thai → English).
The English-German data comes from WMT 2014.7 The training set has 4m sentences and we take newstest2012/newstest2013 as the dev set and newstest2014 as the test set. We keep the top 50k most frequent words, and replace the rest with UNK. The teacher model is a 4 × 1000 LSTM (as in Luong et al. (2015)) and we train two student models: 2 × 300 and 2 × 500. The Thai-English data comes from IWSLT 2015.8 There are 90k sentences in the
6While we employ a simple (unrealistic) noise function for illustrative purposes, the generative story is quite plausible if we consider a more elaborate noise function which includes additional sources of noise such as phrase reordering, replacement of words with synonyms, etc. One could view translation as having two sources of variance that should be modeled separately: variance due to the source sentence (t ∼ D), and variance due to the individual translator (y ∼ ε(t)).
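A small sketch of the vocabulary truncation step described above (top 50k word types, everything else mapped to UNK); the helper name is illustrative:

```python
from collections import Counter

def replace_rare_words(sentences, vocab_size=50000, unk="<unk>"):
    # Keep the vocab_size most frequent word types and map the rest to UNK.
    counts = Counter(w for sent in sentences for w in sent)
    keep = {w for w, _ in counts.most_common(vocab_size)}
    return [[w if w in keep else unk for w in sent] for sent in sentences]
```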
# 7http://statmt.org/wmt14 8https://sites.google.com/site/iwsltevaluation2015/mt-track | 1606.07947#19 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 20 | # 7http://statmt.org/wmt14 8https://sites.google.com/site/iwsltevaluation2015/mt-track
[Figure 2 shows the beam hypotheses for this example plotted in two dimensions, e.g. "Room cancellation is free up to 15 days prior to arrival", "Up to 15 days before arrival are free of charge", "Bookings are free of charge 15 days before arrival", "It is free of charge until 15 days before arrival"; the caption follows.] | 1606.07947#20 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 21 | Figure 2: Visualization of sequence-level interpolation on an example German → English sentence: Bis 15 Tage vor Anreise sind Zimmer-Annullationen kostenlos. We run beam search, plot the final hidden state of the hypotheses using t-SNE and show the corresponding (smoothed) probabilities with contours. In the above example, the sentence that is at the top of the beam after beam search (green) is quite far away from gold (red), so we train the model on a sentence that is on the beam but had the highest sim (e.g. BLEU) to gold (purple).
training set and we take 2010/2011/2012 data as the dev set and 2012/2013 as the test set, with a vocabulary size of 25k. The size of the teacher model is 2 × 500 (which performed better than 4 × 1000 and 2 × 750 models), and the student model is 2 × 100. Other training details mirror Luong et al. (2015).
We evaluate on tokenized BLEU with multi-bleu.perl, and experiment with the following variations: | 1606.07947#21 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 22 | We evaluate on tokenized BLEU with multi-bleu.perl, and experiment with the following variations:
Word-Level Knowledge Distillation (Word-KD) Student is trained on the original data and additionally trained to minimize the cross-entropy of the teacher distribution at the word-level. We tested α ∈ {0.5, 0.9} and found α = 0.5 to work better.
Sequence-Level Knowledge Distillation (Seq-KD) Student is trained on the teacher-generated data, which is the result of running beam search and tak- ing the highest-scoring sequence with the teacher model. We use beam size K = 5 (we did not see improvements with a larger beam).
Sequence-Level Interpolation (Seq-Inter) Student is trained on the sequence on the teacher's beam that had the highest BLEU (beam size K = 35). We
adopt a fine-tuning approach where we begin training from a pretrained model (either on original data or Seq-KD data) and train with a smaller learning rate (0.1). For English-German we generate Seq-Inter data on a smaller portion of the training set (∼ 50%) for efficiency. | 1606.07947#22 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 23 | The above methods are complementary and can be combined with each other. For example, we can train on teacher-generated data but still include a word-level cross-entropy term between the teacher/student (Seq-KD + Word-KD in Table 1), or fine-tune towards Seq-Inter data starting from the baseline model trained on original data (Baseline + Seq-Inter in Table 1).9
# 5 Results and Discussion
Results of our experiments are shown in Table 1. We find that while word-level knowledge distillation (Word-KD) does improve upon the baseline, sequence-level knowledge distillation (Seq-KD) does better on English → German and performs similarly on Thai → English. Combining them (Seq-KD + Word-KD) results in further gains for the 2 × 300 and 2 × 100 models (although not for the 2 × 500 model), indicating that these methods provide orthogonal means of transferring knowledge from the teacher to the student: Word-KD is transferring knowledge at the local (i.e. word) level while Seq-KD is transferring knowledge at the global (i.e. sequence) level. | 1606.07947#23 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 24 | Sequence-level interpolation (Seq-Inter), in addition to improving models trained via Word-KD and Seq-KD, also improves upon the original teacher model that was trained on the actual data but fine-tuned towards Seq-Inter data (Baseline + Seq-Inter). In fact, greedy decoding with this fine-tuned model has similar performance (19.6) as beam search with the original model (19.5), allowing for faster decoding even with an identically-sized model.
We hypothesize that sequence-level knowledge distillation is effective because it allows the student network to only model relevant parts of the teacher distribution (i.e. around the teacher's mode) instead of "wasting" parameters on trying to model the entire
9For instance, "Seq-KD + Seq-Inter + Word-KD" in Table 1 means that the model was trained on Seq-KD data and fine-tuned towards Seq-Inter data with the mixture cross-entropy loss at the word-level. | 1606.07947#24 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 25 | BLEUK=1 âK=1 BLEUK=5 âK=5 PPL p(t = Ëy) Baseline + Seq-Inter 17.7 19.6 â +1.9 19.5 19.8 â +0.3 6.7 10.4 1.3% 8.2% Word-KD Seq-KD Baseline + Seq-Inter Word-KD + Seq-Inter Seq-KD + Seq-Inter Seq-KD + Word-KD Seq-KD + Seq-Inter + Word-KD 14.7 15.4 18.9 18.5 18.3 18.9 18.7 18.8 â +0.7 +4.2 +3.6 +3.6 +4.2 +4.0 +4.1 17.6 17.7 19.0 18.7 18.5 19.3 18.9 19.2 â +0.1 +1.4 +1.1 +0.9 +1.7 +1.3 +1.6 8.2 8.0 22.7 11.3 11.8 15.8 10.9 14.8 0.9% 1.0% 16.9% 5.7% 6.3% 7.6% 4.1% 7.1% Word-KD | 1606.07947#25 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 26 | 14.8 0.9% 1.0% 16.9% 5.7% 6.3% 7.6% 4.1% 7.1% Word-KD Seq-KD Baseline + Seq-Inter Word-KD + Seq-Inter Seq-KD + Seq-Inter Seq-KD + Word-KD Seq-KD + Seq-Inter + Word-KD 14.1 14.9 18.1 17.6 17.8 18.2 17.9 18.5 â +0.8 +4.0 +3.5 +3.7 +4.1 +3.8 +4.4 16.9 17.6 18.1 17.9 18.0 18.5 18.8 18.9 â +0.7 +1.2 +1.0 +1.1 +1.6 +1.9 +2.0 10.3 10.9 64.4 13.0 14.5 40.8 44.1 97.1 0.6% 0.7% 14.8% 10.0% 4.3% 5.6% 3.1% 5.9% Baseline + Seq-Inter 14.3 15.6 â +1.3 15.7 16.0 â +0.3 22.9 55.1 2.3% | 1606.07947#26 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 27 | Baseline + Seq-Inter 14.3 15.6 â +1.3 15.7 16.0 â +0.3 22.9 55.1 2.3% 6.8% Word-KD Seq-KD Baseline + Seq-Inter Word-KD + Seq-Inter Seq-KD + Seq-Inter Seq-KD + Word-KD Seq-KD + Seq-Inter + Word-KD 10.6 11.8 12.8 12.9 13.0 13.6 13.7 14.2 â +1.2 +2.2 +2.3 +2.4 +3.0 +3.1 +3.6 12.7 13.6 13.4 13.1 13.7 14.0 14.2 14.4 â +0.9 +0.7 +0.4 +1.0 +1.3 +1.5 +1.7 37.0 35.3 125.4 52.8 58.7 106.4 67.4 117.4 1.4% 1.4% 6.9% 2.5% 3.2% 3.9% 3.1% 3.2% | 1606.07947#27 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 28 | Table 1: Results on English-German (newstest2014) and Thai-English (2012/2013) test sets. BLEUK=1: BLEU score with beam size K = 1 (i.e. greedy decoding); ΔK=1: BLEU gain over the baseline model without any knowledge distillation with greedy decoding; BLEUK=5: BLEU score with beam size K = 5; ΔK=5: BLEU gain over the baseline model without any knowledge distillation with beam size K = 5; PPL: perplexity on the test set; p(t = ŷ): probability of output sequence from greedy decoding (averaged over the test set). Params: number of parameters in the model. Best results (as measured by improvement over the
space of translations. Our results suggest that this is indeed the case: the probability mass that Seq-KD models assign to the approximate mode is much higher than is the case for baseline models trained on original data (Table 1: p(t = ŷ)). For example, on English → German the (approximate) argmax for the 2 × 500 Seq-KD model (on average) accounts for 16.9% of the total probability mass, while the corresponding number is 0.9% for the baseline. | 1606.07947#28 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 29 | This also explains the success of greedy decoding for Seq-KD models: since we are only modeling around the teacher's mode, the student's distribution is more peaked and therefore the argmax is much easier to find. Seq-Inter offers a compromise between the two, with the greedily-decoded sequence accounting for 7.6% of the distribution.
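To make the p(t = ŷ) statistic above concrete, here is a minimal illustrative sketch (not the authors' code): the probability a model assigns to its own greedy output is the product of the per-step probabilities of the argmax tokens, and a peaked (distilled-like) distribution concentrates far more mass on that sequence than a flat one. The toy Dirichlet generator is only a stand-in for a real decoder's per-step output distributions.

```python
# Sketch: probability mass a model puts on its own greedy output, p(t = y_hat).
import numpy as np

def greedy_sequence_prob(step_dists):
    """step_dists: list of 1-D arrays, each a distribution over the vocabulary
    at one decoding step (conditioned on the greedy prefix). Returns the joint
    probability of the greedy sequence, i.e. the product of per-step argmax probs."""
    log_p = 0.0
    for dist in step_dists:
        log_p += np.log(dist.max())   # greedy decoding picks the argmax token
    return np.exp(log_p)

# Toy stand-in decoders: a "peaked" student-like model vs. a "flat" baseline.
rng = np.random.default_rng(0)
def toy_dists(peakedness, steps=3, vocab=1000):
    return list(rng.dirichlet(np.full(vocab, peakedness), size=steps))

peaked = [greedy_sequence_prob(toy_dists(0.01)) for _ in range(100)]
flat   = [greedy_sequence_prob(toy_dists(1.0))  for _ in range(100)]
print("avg p(t = y_hat), peaked model:", np.mean(peaked))
print("avg p(t = y_hat), flat model:  ", np.mean(flat))
```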
Finally, although past work has shown that models with lower perplexity generally tend to have
Model Size          GPU       CPU     Android
Beam = 1 (Greedy)
4 × 1000            425.5     15.0    -
2 × 500             1051.3    63.6    8.8
2 × 300             1267.8    104.3   15.8
Beam = 5
4 × 1000            101.9     7.9     -
2 × 500             181.9     22.1    1.9
2 × 300             189.1     38.4    3.4
Table 2: Number of source words translated per second across GPU (GeForce GTX Titan X), CPU, and smartphone (Samsung Galaxy 6) for the various English → German models. We were unable to open the 4 × 1000 model on the smartphone. | 1606.07947#29 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 30 | higher BLEU, our results indicate that this is not necessarily the case. The perplexity of the baseline 2 × 500 English → German model is 8.2 while the perplexity of the corresponding Seq-KD model is 22.7, despite the fact that the Seq-KD model does significantly better for both greedy (+4.2 BLEU) and beam search (+1.4 BLEU) decoding.
# 5.1 Decoding Speed
Run-time complexity for beam search grows linearly with beam size. Therefore, the fact that sequence-level knowledge distillation allows for greedy decoding is significant, with practical implications for running NMT systems across various devices. To test the speed gains, we run the teacher/student models on GPU, CPU, and smartphone, and check the average number of source words translated per second (Table 2). We use a GeForce GTX Titan X for GPU and a Samsung Galaxy 6 smartphone. We find that we can run the student model 10 times faster with greedy decoding than the teacher model with beam search on GPU (1051.3 vs 101.9 words/sec), with similar performance.
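As a rough illustration of how the words-per-second numbers in Table 2 can be measured, here is a minimal sketch; the `fake_greedy`/`fake_beam5` stand-ins are placeholders for real student/teacher decoders, and everything in it is an assumption rather than the authors' benchmarking code.

```python
# Sketch: measure source words translated per second for a translate() callable.
import time

def words_per_second(translate_fn, sources):
    """translate_fn: callable mapping a source sentence (str) to a translation.
    sources: list of whitespace-tokenized source sentences."""
    n_words = sum(len(s.split()) for s in sources)
    start = time.perf_counter()
    for s in sources:
        translate_fn(s)                      # greedy or beam decoding happens inside
    elapsed = time.perf_counter() - start
    return n_words / elapsed

# Trivial stand-ins so the sketch runs; a real student (greedy) and teacher
# (beam-5) NMT model would be plugged in here instead.
def fake_greedy(sentence):  return sentence.upper()
def fake_beam5(sentence):   return max(sentence.upper() for _ in range(5))

test_set = ["ein kleines beispiel"] * 1000
print("greedy :", round(words_per_second(fake_greedy, test_set)), "words/sec")
print("beam=5 :", round(words_per_second(fake_beam5, test_set)), "words/sec")
```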
# 5.2 Weight Pruning | 1606.07947#30 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 31 | # 5.2 Weight Pruning
Although knowledge distillation enables training faster models, the number of parameters for the student models is still somewhat large (Table 1: Params), due to the word embeddings, which account for most of the parameters.10 For example, on the
10 Word embeddings scale linearly while RNN parameters scale quadratically with the dimension size.
Model      Prune %   Params   BLEU   Ratio
4 × 1000   0%        221 m    19.5   1×
2 × 500    0%        84 m     19.3   3×
2 × 500    50%       42 m     19.3   5×
2 × 500    80%       17 m     19.1   13×
2 × 500    85%       13 m     18.8   18×
2 × 500    90%       8 m      18.5   26×
Table 3: Performance of student models with varying % of the weights pruned. Top two rows are models without any pruning. Params: number of parameters in the model; Prune %: Percentage of weights pruned based on their absolute values; BLEU: BLEU score with beam search decoding (K = 5) after retraining the pruned model; Ratio: Ratio of the number of parameters versus the original teacher model (which has 221m parameters). | 1606.07947#31 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 32 | 2 × 500 English → German model the word embeddings account for approximately 63% (50m out of 84m) of the parameters. The size of the word embeddings has little impact on run-time, as the word embedding layer is a simple lookup table that only affects the first layer of the model.
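A back-of-the-envelope sketch of footnote 10 and the parameter split discussed above follows; the 50K vocabulary, the inclusion of the output projection, and the omission of attention and other bookkeeping are assumptions for illustration, so the totals only roughly approximate the reported 84m.

```python
# Sketch: why embedding tables dominate the parameter count of a 2 x 500 model.
def embedding_params(vocab, dim):
    return vocab * dim                       # scales linearly in dim

def lstm_layer_params(input_dim, hidden_dim):
    # 4 gates, each with input weights, recurrent weights, and a bias
    return 4 * ((input_dim + hidden_dim) * hidden_dim + hidden_dim)  # ~quadratic in dim

vocab, dim, layers = 50_000, 500, 2          # assumed vocabulary size
src_emb = embedding_params(vocab, dim)
tgt_emb = embedding_params(vocab, dim)
softmax = vocab * dim                        # output projection onto the vocabulary
lstms = sum(lstm_layer_params(dim, dim) for _ in range(2 * layers))  # encoder + decoder stacks

total = src_emb + tgt_emb + softmax + lstms
print(f"embedding tables: {src_emb + tgt_emb:,} (~{(src_emb + tgt_emb) / total:.0%} of {total:,})")
print(f"LSTM weights:     {lstms:,}")
```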
We therefore focus next on reducing the memory footprint of the student models further through weight pruning. Weight pruning for NMT was recently investigated by See et al. (2016), who found that up to 80-90% of the parameters in a large NMT model can be pruned with little loss in performance. We take our best English → German student model (2 × 500 Seq-KD + Seq-Inter) and prune x% of the parameters by removing the weights with the lowest absolute values. We then retrain the pruned model on Seq-KD data with a learning rate of 0.2 and fine-tune towards Seq-Inter data with a learning rate of 0.1. As observed by See et al. (2016), retraining proved to be crucial. The results are shown in Table 3. | 1606.07947#32 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 33 | Our findings suggest that compression benefits achieved through weight pruning and knowledge distillation are orthogonal.11 Pruning 80% of the weights in the 2 × 500 student model results in a model with 13× fewer parameters than the original teacher model with only a decrease of 0.4 BLEU. While pruning 90% of the weights results in a more appreciable decrease of 1.0 BLEU, the model is
11 To our knowledge, combining pruning and knowledge distillation has not been investigated before.
drastically smaller with 8m parameters, which is 26× fewer than the original teacher model.
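A minimal sketch of the magnitude-based pruning-and-retraining recipe described above, assuming PyTorch; the global threshold, the choice to prune only weight matrices, and the retraining hook are illustrative, not the authors' implementation.

```python
# Sketch: zero out the x% of weights with smallest |w|, then retrain with the mask applied.
import torch

def magnitude_prune(model, prune_frac=0.8):
    """Returns {param_name: 0/1 mask} keeping the largest-|w| (1 - prune_frac) weights."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(all_weights, prune_frac)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                           # prune weight matrices, keep biases
            masks[name] = (p.detach().abs() > threshold).float()
            p.data.mul_(masks[name])              # zero out the pruned weights
    return masks

def apply_masks(model, masks):
    # call after every optimizer step during retraining so pruned weights stay zero
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

# Toy usage on a stand-in network (a real NMT student model would go here).
model = torch.nn.Sequential(torch.nn.Linear(500, 500), torch.nn.Tanh(),
                            torch.nn.Linear(500, 500))
masks = magnitude_prune(model, prune_frac=0.8)
print("zeroed weights:", sum(int((p == 0).sum()) for p in model.parameters()))
```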
# 5.3 Further Observations
⢠For models trained with word-level knowledge distillation, we also tried regressing the student networkâs top-most hidden layer at each time step to the teacher networkâs top-most hidden layer as a pretraining step, noting that Romero et al. (2015) obtained improvements with a similar technique on feed-forward models. We found this to give comparable results to stan- dard knowledge distillation and hence did not pursue this further. | 1606.07947#33 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 34 | • There have been promising recent results on eliminating word embeddings completely and obtaining word representations directly from characters with character composition models, which have many fewer parameters than word embedding lookup tables (Ling et al., 2015a; Kim et al., 2016; Ling et al., 2015b; Jozefowicz et al., 2016; Costa-Jussa and Fonollosa, 2016). Combining such methods with knowledge distillation/pruning to further reduce the memory footprint of NMT systems remains an avenue for future work.
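The following is a minimal sketch (assuming PyTorch; not the authors' implementation) of one common formulation of the word-level distillation loss, together with the top-hidden-layer regression term mentioned in the first bullet above. The interpolation weight alpha, the projection layer, and the toy shapes are illustrative assumptions.

```python
# Sketch: word-level KD loss plus an optional hidden-state regression term.
import torch
import torch.nn.functional as F

def word_level_kd_loss(student_logits, teacher_logits, gold_ids, alpha=0.5):
    ce = F.cross_entropy(student_logits, gold_ids)           # data term (NLL on gold words)
    kd = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.softmax(teacher_logits, dim=-1),
                  reduction="batchmean")                      # match teacher's word distributions
    return alpha * kd + (1.0 - alpha) * ce

def hidden_regression_loss(student_hidden, teacher_hidden, proj):
    # regress the student's top hidden layer onto the teacher's (pretraining step)
    return F.mse_loss(proj(student_hidden), teacher_hidden)

# Toy shapes: batch*time = 8, vocab = 100, student dim 300 -> teacher dim 1000.
s_logits, t_logits = torch.randn(8, 100), torch.randn(8, 100)
gold = torch.randint(0, 100, (8,))
print(float(word_level_kd_loss(s_logits, t_logits, gold)))
proj = torch.nn.Linear(300, 1000)
print(float(hidden_regression_loss(torch.randn(8, 300), torch.randn(8, 1000), proj)))
```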
# 6 Related Work | 1606.07947#34 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 35 | # 6 Related Work
Compressing deep learning models is an active area of current research. Pruning methods involve pruning weights or entire neurons/nodes based on some criterion. LeCun et al. (1990) prune weights based on an approximation of the Hessian, while Han et al. (2016) show that a simple magnitude-based pruning works well. Prior work on removing neurons/nodes includes Srinivas and Babu (2015) and Mariet and Sra (2016). See et al. (2016) were the first to apply pruning to Neural Machine Translation, observing that different parts of the architecture (input word embeddings, LSTM matrices, etc.) admit different levels of pruning. Knowledge distillation approaches train a smaller student model to mimic a larger teacher model, by minimizing the loss between the teacher/student predictions (Bucila et al., 2006; Ba and Caruana, 2014; Li et al., 2014; Hinton et al., 2015). Romero et al. (2015) additionally regress on the intermediate hidden layers of the | 1606.07947#35 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 36 | student/teacher network as a pretraining step, while Mou et al. (2015) obtain smaller word embeddings from a teacher model via regression. There has also been work on transferring knowledge across different network architectures: Chan et al. (2015b) show that a deep non-recurrent neural network can learn from an RNN; Geras et al. (2016) train a CNN to mimic an LSTM for speech recognition. Kuncoro et al. (2016) recently investigated knowledge distillation for structured prediction by having a single parser learn from an ensemble of parsers. | 1606.07947#36 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 37 | Other approaches for compression involve low rank factorizations of weight matrices (Denton et al., 2014; Jaderberg et al., 2014; Lu et al., 2016; Prabhavalkar et al., 2016), sparsity-inducing regularizers (Murray and Chiang, 2015), binarization of weights (Courbariaux et al., 2016; Lin et al., 2016), and weight sharing (Chen et al., 2015; Han et al., 2016). Finally, although we have motivated sequence-level knowledge distillation in the context of training a smaller model, there are other techniques that train on a mixture of the model's predictions and the data, such as local updating (Liang et al., 2006), hope/fear training (Chiang, 2012), SEARN (Daumé III et al., 2009), DAgger (Ross et al., 2011), and minimum risk training (Och, 2003; Shen et al., 2016).
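As a brief illustration of the low-rank factorization idea mentioned above (not tied to any of the cited implementations): a weight matrix W of size m x n is replaced by two thin factors from a truncated SVD, reducing parameters from m*n to r*(m+n).

```python
# Sketch: compress a weight matrix with a rank-r truncated SVD.
import numpy as np

def low_rank_factorize(W, rank):
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]          # m x r factor (columns scaled by singular values)
    B = Vt[:rank, :]                    # r x n factor
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 1000))   # a random matrix is not actually low-rank;
A, B = low_rank_factorize(W, rank=100)  # trained weight matrices typically compress better
print("original params:", W.size, " factorized:", A.size + B.size)
print("relative reconstruction error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```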
# 7 Conclusion
In this work we have investigated existing knowledge distillation methods for NMT (which work at the word-level) and introduced two sequence-level variants of knowledge distillation, which provide improvements over standard word-level knowledge distillation. | 1606.07947#37 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 38 | We have chosen to focus on translation as this domain has generally required the largest capacity deep learning models, but the sequence-to-sequence framework has been successfully applied to a wide range of tasks including parsing (Vinyals et al., 2015a), summarization (Rush et al., 2015), dialogue (Vinyals and Le, 2015; Serban et al., 2016; Li et al., 2016), NER/POS-tagging (Gillick et al., 2016), image captioning (Vinyals et al., 2015b; Xu et al., 2015), video generation (Srivastava et al., 2015), and speech recognition (Chan et al., 2015a). We anticipate that the methods described in this paper can be used to similarly train smaller models in other domains.
# References
[Ba and Caruana2014] Lei Jimmy Ba and Rich Caruana. 2014. Do Deep Nets Really Need to be Deep? In Proceedings of NIPS.
[Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of ICLR.
[Bucila et al.2006] Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model Compression. In Proceedings of KDD. | 1606.07947#38 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 39 | [Chan et al.2015a] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2015a. Listen, Attend and Spell. arXiv:1508.01211.
[Chan et al.2015b] William Chan, Nan Rosemary Ke, and Ian Laner. 2015b. Transferring Knowledge from a RNN to a DNN. arXiv:1504.01483.
[Chen and Cherry2014] Boxing Chen and Colin Cherry. 2014. A Systematic Comparison of Smoothing Techniques for Sentence-Level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation.
[Chen et al.2015] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. 2015. Compressing Neural Networks with the Hashing Trick. In Proceedings of ICML.
2012. Hope and Fear for Discriminative Training of Statistical Translation Models. In JMLR. | 1606.07947#39 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |
1606.07947 | 40 | 2012. Hope and Fear for Discriminative Training of Statistical Translation Models. In JMLR.
[Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of EMNLP.
[Costa-Jussa and Fonollosa2016] Marta R. Costa-Jussa and Jose A.R. Fonollosa. 2016. Character-based Neural Machine Translation. arXiv:1603.00810. [Courbariaux et al.2016] Matthieu Courbariaux,
Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv:1602.02830.
[Daumé III et al.2009] Hal Daumé III, John Langford, and Daniel Marcu. 2009. Search-based Structured Prediction. Machine Learning. | 1606.07947#40 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU. | http://arxiv.org/pdf/1606.07947 | Yoon Kim, Alexander M. Rush | cs.CL, cs.LG, cs.NE | EMNLP 2016 | null | cs.CL | 20160625 | 20160922 | [
{
"id": "1506.04488"
},
{
"id": "1504.01483"
},
{
"id": "1508.01211"
},
{
"id": "1602.02410"
},
{
"id": "1602.02830"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
}
] |