arXiv:1606.09274v1 [cs.AI] 29 Jun 2016

# Compression of Neural Machine Translation Models via Pruning

Abigail See* Minh-Thang Luong* Christopher D. Manning
Computer Science Department, Stanford University, Stanford, CA 94305
{abisee,lmthang,manning}@stanford.edu

# Abstract
"1602.07360"
] |
|
Neural Machine Translation (NMT), like many other deep learning domains, typically suffers from over-parameterization, resulting in large storage sizes. This paper examines three simple magnitude-based pruning schemes to compress NMT models, namely class-blind, class-uniform, and class-distribution, which differ in terms of how pruning thresholds are computed for the different classes of weights in the NMT architecture. We demonstrate the efficacy of weight pruning as a compression technique for a state-of-the-art NMT system. We show that an NMT model with over 200 million parameters can be pruned by 40% with very little performance loss as measured on the
"1602.07360"
] |
WMT'14 English-German translation task. This sheds light on the distribution of redundancy in the NMT architecture. Our main result is that with retraining, we can recover and even surpass the original performance with an 80%-pruned model.

# 1 Introduction

Neural Machine Translation (NMT) is a simple new architecture for translating texts from one language into another (Sutskever et al., 2014; Cho et al., 2014). NMT is a single deep neural network that is trained end-to-end, holding several advantages such as the ability to capture long-range dependencies in sentences, and generalization to unseen texts. Despite being relatively new, NMT has already achieved state-of-the-art translation results for several language pairs including English-French (Luong et al., 2015b), English-German (Jean et al., 2015a; Luong et al., 2015a;
"1602.07360"
] |
Luong and Manning, 2015; Sennrich et al., 2016), English-Turkish (Sennrich et al., 2016), and English-Czech (Jean et al., 2015b; Luong and Manning, 2016). Figure 1 gives an example of an NMT system.

*Both authors contributed equally.

Figure 1: A simplified diagram of NMT. [Diagram: source language input "I am a student" and target language input "Je suis étudiant" feed the network, which produces the target language output "Je suis étudiant".]

While NMT has a significantly smaller memory footprint than traditional phrase-based approaches (which need to store gigantic phrase-tables and language models), the model size of NMT is still prohibitively large for mobile devices. For example, a recent state-of-the-art NMT system requires over 200 million parameters, resulting in a storage size of hundreds of megabytes (Luong et al., 2015a). Though the trend for bigger and deeper neural networks has brought great progress, it has also introduced over-parameterization, resulting in long running times, overfitting, and the storage size issue discussed above. A solution to the over-parameterization problem could potentially aid all three issues, though the first (long running times) is outside the scope of this paper. In this paper we investigate the efficacy of weight pruning for NMT as a means of compression. We show that despite its simplicity, magnitude-based pruning with retraining is highly effective, and we compare three magnitude-based pruning schemes: class-blind, class-uniform and class-distribution. Though recent work has chosen to use the latter two, we find the first and simplest scheme, class-blind, the most successful.
"1602.07360"
] |
We are able to prune 40% of the weights of a state-of-the-art NMT system with negligible performance loss, and by adding a retraining phase after pruning, we can prune 80% with no performance loss. Our pruning experiments also reveal some patterns in the distribution of redundancy in NMT. In particular, we find that higher layers, attention and softmax weights are the most important, while lower layers and the embedding weights hold a lot of redundancy. For the Long Short-Term Memory (LSTM) architecture, we find that at lower layers the parameters for the input are most crucial, but at higher layers the parameters for the gates also become important.

# 2 Related Work

Pruning the parameters from a neural network, referred to as weight pruning or network pruning, is a well-established idea, though it can be implemented in many ways. Among the most popular are the Optimal Brain Damage (OBD) (Le Cun et al., 1989) and Optimal Brain Surgeon (OBS) (Hassibi and Stork, 1993) techniques, which involve computing the Hessian matrix of the loss function with respect to the parameters, in order to assess the saliency of each parameter. Parameters with low saliency are then pruned from the network and the remaining sparse network is retrained. Both OBD and OBS were shown to perform better than the so-called "naive magnitude-based approach", which prunes parameters according to their magnitude (deleting parameters close to zero). However, the high computational complexity of OBD and OBS compares unfavorably to the computational simplicity of the magnitude-based approach, especially for large networks (Augasta and Kathirvalavakumar, 2013). In recent years, the deep learning renaissance has prompted a re-investigation of network pruning for modern models and tasks. Magnitude-based pruning with iterative retraining has yielded strong results for Convolutional Neural Networks (CNN) performing visual tasks. Collins and Kohli (2014) prune 75% of AlexNet parameters with small accuracy loss on the ImageNet task, while Han et al. (2015b) prune 89% of AlexNet parameters with no accuracy loss on the ImageNet task.
"1602.07360"
] |
Other approaches focus on pruning neurons rather than parameters, via sparsity-inducing regularizers (Murray and Chiang, 2015) or "wiring together" pairs of neurons with similar input weights (Srinivas and Babu, 2015). These approaches are much more constrained than weight-pruning schemes; they necessitate finding entire zero rows of weight matrices, or near-identical pairs of rows, in order to prune a single neuron. By contrast, weight-pruning approaches allow weights to be pruned freely and independently of each other. The neuron-pruning approach of Srinivas and Babu (2015) was shown to perform poorly (it suffered performance loss after removing only 35% of AlexNet parameters) compared to the weight-pruning approach of Han et al. (2015b). Though Murray and Chiang (2015) demonstrate neuron-pruning for language modeling as part of a (non-neural) Machine Translation pipeline, their approach is more geared towards architecture selection than compression. There are many other compression techniques for neural networks, including approaches based on low-rank approximations for weight matrices (Jaderberg et al., 2014; Denton et al., 2014), or weight sharing via hash functions (Chen et al., 2015). Several methods involve reducing the precision of the weights or activations (Courbariaux et al., 2015), sometimes in conjunction with specialized hardware (Gupta et al., 2015), or even using binary weights (Lin et al., 2016).
"1602.07360"
] |
The "knowledge distillation" technique of Hinton et al. (2015) involves training a small "student" network on the soft outputs of a large "teacher" network. Some approaches use a sophisticated pipeline of several techniques to achieve impressive feats of compression (Han et al., 2015a; Iandola et al., 2016). Most of the above work has focused on compressing CNNs for vision tasks. We extend the magnitude-based pruning approach of Han et al. (2015b) to recurrent neural networks (RNN), in particular LSTM architectures for NMT, and to our knowledge we are the
"1602.07360"
] |
first to do so. There has been some recent work on compression for RNNs (Lu et al., 2016; Prabhavalkar et al., 2016), but it focuses on other, non-pruning compression techniques. Nonetheless, our general observations on the distribution of redundancy in a LSTM, detailed in Section 4.5, are corroborated by Lu et al. (2016).

Figure 2: NMT architecture. This example has two layers, but our system has four. The different weight classes are indicated by arrows of different color (the black arrows in the top right represent simply choosing the highest-scoring word, and thus require no parameters). Best viewed in color. [The diagram labels each weight class and its size: softmax weights V × n; attention weights n × 2n; each layer's feed-forward and recurrent weights 4n × 2n; source and target embedding weights n × V. Word embeddings, hidden states and the context vector have length n; one-hot vectors and score vectors have length V.]
"1602.07360"
] |
# 3 Our Approach

We first give a brief overview of Neural Machine Translation before describing the model architecture of interest, the deep multi-layer recurrent model with LSTM. We then explain the different types of NMT weights together with our approaches to pruning and retraining.

# 3.1 Neural Machine Translation

Neural machine translation aims to directly model the conditional probability p(y|x) of translating a source sentence, x_1, ..., x_n, to a target sentence, y_1, ..., y_m. It accomplishes this goal through an encoder-decoder framework (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014). The encoder computes a representation s for each source sentence. Based on that source representation, the decoder generates a translation, one target word at a time, and hence decomposes the log conditional probability as:

$$\log p(y|x) = \sum_{t=1}^{m} \log p(y_t \mid y_{<t}, s) \quad (1)$$
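To make the decomposition concrete, the sketch below scores a target sentence by summing per-step log-probabilities; `encode` and `decode_step` are hypothetical stand-ins for a trained encoder and decoder, not functions from the paper's codebase.

```python
def sequence_log_prob(encode, decode_step, source_ids, target_ids):
    """Score a target sentence under an encoder-decoder model.

    `encode` maps source token ids to a source representation s;
    `decode_step` returns a vector of log-probabilities over the target
    vocabulary given s and the target prefix. Both are assumed stubs.
    """
    s = encode(source_ids)
    total = 0.0
    for t, y_t in enumerate(target_ids):
        log_probs = decode_step(s, target_ids[:t])  # log p(. | y_<t, s)
        total += log_probs[y_t]                     # add log p(y_t | y_<t, s)
    return total
```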
"1602.07360"
] |
Most NMT work uses RNNs, but approaches differ in terms of: (a) architecture, which can be unidirectional, bidirectional, or deep multi-layer RNN; and (b) RNN type, which can be Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) or the Gated Recurrent Unit (Cho et al., 2014). In this work, we specifically consider the deep multi-layer recurrent architecture with LSTM as the hidden unit type. Figure 1 illustrates an instance of that architecture during training, in which the source and target sentence pair are input for supervised learning. During testing, the target sentence is not known in advance; instead, the most probable target words predicted by the model are fed as inputs into the next timestep. The network stops when it emits the end-of-sentence symbol,
"1602.07360"
] |
a special "word" in the vocabulary, represented by a dash in Figure 1.

# 3.2 Understanding NMT Weights

Figure 2 shows the same system in more detail, highlighting the different types of parameters, or weights, in the model. We will go through the architecture from bottom to top. First, a vocabulary is chosen for each language, assuming that the top V frequent words are selected. Thus, every word in the source or target vocabulary can be represented by a one-hot vector of length V.
"1602.07360"
] |
The source input sentence and target input sentence, represented as a sequence of one-hot vectors, are transformed into a sequence of word embeddings by the embedding weights. These embedding weights, which are learned during training, are different for the source words and the target words. The word embeddings and all hidden layers are vectors of length n (a chosen hyperparameter). The word embeddings are then fed as input into the main network, which consists of two multi-layer RNNs
"1602.07360"
] |
"stuck together": an encoder for the source language and a decoder for the target language, each with their own weights. The feed-forward (vertical) weights connect the hidden unit from the layer below to the upper RNN block, and the recurrent (horizontal) weights connect the hidden unit from the previous time-step RNN block to the current time-step RNN block. The hidden state at the top layer of the decoder is fed through an attention layer, which guides the translation by "paying attention" to relevant parts of the source sentence; for more information see Bahdanau et al. (2015) or Section 3 of Luong et al. (2015a). Finally, for each target word, the top layer hidden unit is transformed by the softmax weights into a score vector of length V. The target word with the highest score is selected as the output translation.

Weight Subgroups in LSTM: For the aforementioned RNN block, we choose to use LSTM as the hidden unit type. To facilitate our later discussion on the different subgroups of weights within LSTM, we first review the details of LSTM as formulated by Zaremba et al. (2014):

$$\begin{pmatrix} i \\ f \\ o \\ \hat{h} \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} T_{4n,2n} \begin{pmatrix} h_t^{l-1} \\ h_{t-1}^l \end{pmatrix} \quad (2)$$

$$c_t^l = f \odot c_{t-1}^l + i \odot \hat{h} \quad (3)$$

$$h_t^l = o \odot \tanh(c_t^l) \quad (4)$$

Here, each LSTM block at time t and layer l computes as output a pair of hidden and memory vectors $(h_t^l, c_t^l)$ given the previous pair $(h_{t-1}^l, c_{t-1}^l)$ and an input vector $h_t^{l-1}$ (either from the LSTM block below or the embedding weights if l = 1). All of these vectors have length n. The core of a LSTM block is the weight matrix $T_{4n,2n}$ of size 4n × 2n. This matrix can be decomposed into 8 subgroups that are responsible for the interactions between {input gate i, forget gate f, output gate o, input signal ĥ} × {feed-forward input $h_t^{l-1}$, recurrent input $h_{t-1}^l$}.
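For readers who want to inspect these subgroups in an actual weight matrix, here is a small illustrative sketch (assuming the gate ordering of Equation (2); other implementations may order gates differently):

```python
import numpy as np

def lstm_weight_subgroups(T, n):
    """Split the 4n x 2n LSTM matrix T into its 8 interaction subgroups.

    Row blocks hold the pre-activations for (i, f, o, h-hat); the first n
    columns act on the feed-forward input h_t^{l-1}, the last n columns on
    the recurrent input h_{t-1}^l.
    """
    assert T.shape == (4 * n, 2 * n)
    gates = ["input_gate", "forget_gate", "output_gate", "input_signal"]
    subgroups = {}
    for g, name in enumerate(gates):
        rows = slice(g * n, (g + 1) * n)
        subgroups[(name, "feed_forward")] = T[rows, :n]  # acts on h_t^{l-1}
        subgroups[(name, "recurrent")] = T[rows, n:]     # acts on h_{t-1}^l
    return subgroups
```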
"1602.07360"
] |
# 3.3 Pruning Schemes

We follow the general magnitude-based approach of Han et al. (2015b), which consists of pruning the weights with smallest absolute value. However, we question the authors' pruning scheme with respect to the different weight classes, and experiment with three pruning schemes. Suppose we wish to prune x% of the total parameters in the model. How do we distribute the pruning over the different weight classes (illustrated in Figure 2) of our model? We propose to examine three different pruning schemes (sketched in code after this list):

1. Class-blind: Take all parameters, sort them by magnitude and prune the x% with smallest magnitude, regardless of weight class. (So some classes are pruned proportionally more than others.)

2. Class-uniform: Within each class, sort the weights by magnitude and prune the x% with smallest magnitude. (So all classes have exactly x% of their parameters pruned.)

3. Class-distribution: For each class c, weights with magnitude less than λσ_c are pruned. Here, σ_c is the standard deviation of that class and λ is a universal parameter chosen such that in total, x% of all parameters are pruned. This is used by Han et al. (2015b).

All these schemes have their seeming advantages. Class-blind pruning is the simplest and adheres to the principle that pruning weights (or equivalently, setting them to zero) is least damaging when those weights are small, regardless of their locations in the architecture. Class-uniform pruning and class-distribution pruning both seek to prune proportionally within each weight class, either absolutely, or relative to the standard deviation of that class. We find that class-blind pruning outperforms both other schemes (see Section 4.1).
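The three schemes reduce to choosing a magnitude threshold globally, per class, or per class scaled by its standard deviation. The following NumPy sketch is an illustration, not the authors' MATLAB code; the dictionary-of-arrays model representation is an assumption of this example.

```python
import numpy as np

def pruning_masks(weights, x, scheme="class-blind"):
    """Return a {class_name: boolean mask} dict; True marks kept weights.

    `weights` maps each weight class (e.g. "softmax", "attention") to its
    array; `x` is the fraction of all parameters to prune (e.g. 0.8).
    """
    if scheme == "class-blind":
        # One global magnitude threshold across every class.
        all_w = np.concatenate([np.abs(w).ravel() for w in weights.values()])
        thresh = np.quantile(all_w, x)
        return {c: np.abs(w) > thresh for c, w in weights.items()}
    if scheme == "class-uniform":
        # Prune the smallest x% within each class separately.
        return {c: np.abs(w) > np.quantile(np.abs(w), x)
                for c, w in weights.items()}
    if scheme == "class-distribution":
        # Prune weights below lambda * sigma_c; bisect for the lambda
        # that prunes x% of all parameters overall.
        sigmas = {c: w.std() for c, w in weights.items()}
        total = sum(w.size for w in weights.values())
        lo, hi = 0.0, 10.0
        for _ in range(50):
            lam = (lo + hi) / 2
            pruned = sum((np.abs(w) < lam * sigmas[c]).sum()
                         for c, w in weights.items())
            lo, hi = (lam, hi) if pruned < x * total else (lo, lam)
        return {c: np.abs(w) >= lam * sigmas[c] for c, w in weights.items()}
    raise ValueError(f"unknown scheme: {scheme}")
```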
"1602.07360"
] |
Figure 3: Effects of different pruning schemes (BLEU score vs. percentage pruned, for class-blind, class-uniform and class-distribution pruning).

# 3.4 Retraining

In order to prune NMT models aggressively without performance loss, we retrain our pruned networks. That is, we continue to train the remaining weights, but maintain the sparse structure introduced by pruning. In our implementation, pruned weights are represented by zeros in the weight matrices, and we use binary "mask" matrices, which represent the sparse structure of a network, to ignore updates to weights at pruned locations.
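A minimal sketch of this masked update, assuming pruned weights have already been zeroed (illustrative only, not the paper's implementation):

```python
def masked_sgd_step(weights, grads, masks, lr):
    """One retraining step that preserves the pruned sparsity structure.

    `masks` holds binary matrices (1 = kept weight, 0 = pruned); gradients
    at pruned locations are zeroed so those weights stay exactly zero.
    """
    for name in weights:
        weights[name] -= lr * grads[name] * masks[name]
    return weights
```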
"1602.07360"
] |
This implementation has the advantage of simplicity, as it requires minimal changes to the training and deployment code, but we note that a more complex implementation utilizing sparse matrices and sparse matrix multiplication could potentially yield speed improvements. However, such an implementation is beyond the scope of this paper.

# 4 Experiments

We evaluate the effectiveness of our pruning approaches on a state-of-the-art NMT model.¹ Specifically, an attention-based English-German NMT system from Luong et al. (2015a) is considered. Training data was obtained from WMT'14, consisting of 4.5M sentence pairs (116M English words, 110M German words). For more details on training hyperparameters, we refer readers to Section 4.1 of Luong et al. (2015a). All models are tested on newstest2014 (2737 sentences). The model achieves a perplexity of 6.1 and a BLEU score of 20.5 (after unknown word replacement).²

When retraining pruned NMT systems, we use the following settings: (a) we start with a smaller learning rate of 0.5 (the original model uses a learning rate of 1.0); (b) we train for fewer epochs, 4 instead of 12, using plain SGD; (c) a simple learning rate schedule is employed: after 2 epochs, we begin to halve the learning rate every half an epoch; and (d) all other hyperparameters are the same, such as mini-batch size 128, maximum gradient norm 5, and dropout with probability 0.2.

¹We thank the authors of Luong et al. (2015a) for providing their trained models and assistance in using the codebase at https://github.com/lmthang/nmt.matlab.
²The performance of this model is reported under row global (dot) in Table 4 of Luong et al. (2015a).

# 4.1 Comparing pruning schemes

Despite its simplicity, we observe in Figure 3 that class-blind pruning outperforms both other schemes in terms of translation quality at all pruning percentages.
"1602.07360"
] |
In order to understand this result, for each of the three pruning schemes, we pruned each class separately and recorded the effect on performance (as measured by perplexity). Figure 4 shows that with class-uniform pruning, the overall performance loss is caused disproportionately by a few classes: target layer 4, attention and softmax weights. Looking at Figure 5, we see that the most damaging classes to prune also tend to be those with weights of greater magnitude: these classes have much larger weights than others at the same percentile, so pruning them under the class-uniform pruning scheme is more damaging. The situation is similar for class-distribution pruning. By contrast, Figure 4 shows that under class-blind pruning, the damage caused by pruning softmax, attention and target layer 4 weights is greatly decreased, and the contribution of each class towards the performance loss is overall more uniform. In fact, the distribution begins to reflect the number of parameters in each class; for example, the source and target embedding classes have larger contributions because they have more weights. We use only class-blind pruning for the rest of the experiments. Figure 4 also reveals some interesting information about the distribution of redundancy in NMT architectures: namely, it seems that higher layers are more important than lower layers, and that attention and softmax weights are crucial. We will explore the distribution of redundancy further in Section 4.5.

# 4.2 Pruning and retraining

Pruning has an immediate negative impact on performance (as measured by BLEU) that is exponential in pruning percentage; this is demonstrated by the blue line in Figure 6. However, we find that up to about 40% pruning, performance is mostly unaffected, indicating a large amount of redundancy and over-parameterization in NMT. We now consider the effect of retraining pruned models.
"1602.07360"
] |
Figure 4: "Breakdown"
"1602.07360"
] |
of performance loss (i.e., perplexity increase) by weight class, when pruning 90% of weights using each of the three pruning schemes. Each of the first eight classes has 8 million weights, attention has 2 million, and the last three have 50 million weights each. [Bar chart: perplexity change per weight class (source layers 1-4, target layers 1-4, attention, softmax, source embedding, target embedding) under each scheme.]
"1602.07360"
] |
Figure 5: Magnitude of largest deleted weight vs. perplexity change, for the 12 different weight classes when pruning 90% of parameters by class-uniform pruning.

Figure 6: Performance of pruned models (BLEU score vs. percentage pruned) (a) after pruning, (b) after pruning and retraining, and (c) when trained with sparsity structure from the outset (see Section 4.3).

The orange line in Figure 6 shows that after retraining the pruned models, baseline performance (20.48 BLEU) is both recovered and improved upon, up to 80% pruning (20.91 BLEU), with only a small performance loss at 90% pruning (20.13 BLEU). This may seem surprising, as we might not expect a sparse model to significantly out-perform a model with five times as many parameters.
"1602.07360"
] |
There are several possible explanations, two of which are given below. Firstly, we found that the less-pruned models perform better on the training set than the validation set, whereas the more-pruned models have closer performance on the two sets. This indicates that pruning has a regularizing effect on the retraining phase, though clearly more is not always better, as the 50% pruned and retrained model has better validation set performance than the 90% pruned and retrained model. Nonetheless, this regularization effect may explain why the pruned and retrained models outperform the baseline. Alternatively, pruning may serve as a means to escape a local optimum. Figure 7 shows the loss function over time during the training, pruning and retraining process. During the original training process, the loss curve flattens out and seems to converge (note that we use early stopping to obtain our baseline model, so the original model was trained for longer than shown in Figure 7). Pruning causes an immediate increase in the loss function, but enables further gradient descent, allowing the retraining process to find a new, better local optimum.
"1602.07360"
] |
Figure 8: Graphical representation of the location of small weights in various parts of the model. Black pixels represent weights with absolute size in the bottom 80%; white pixels represent those with absolute size in the top 20%. Equivalently, these pictures illustrate which parameters remain after pruning 80% using our class-blind pruning scheme. [Panels: source and target embedding weights (columns ordered from most to least common word), and source and target layer 1-4 weights, with row blocks for the input gate, forget gate, output gate and input, and columns split into feed-forward and recurrent halves.]

Figure 7: The validation set loss during training, pruning and retraining (loss vs. training iterations). The vertical dotted line marks the point when 80% of the parameters are pruned. The horizontal dotted line marks the best performance of the unpruned baseline.

It seems that the disruption caused by pruning is beneficial in the long-run.

# 4.3 Starting with sparse models

The favorable performance of the pruned and retrained models raises the question: can we get a shortcut to this performance by starting with sparse models? That is, rather than train, prune, and retrain, what if we simply prune then train? To test this, we took the sparsity structure of our 50%-90% pruned models, and trained completely new models with the same sparsity structure. The purple line in Figure 6 shows that the "sparse from the beginning" models do not perform as well as the pruned and retrained models, but they do come close to the baseline performance. This shows that while the sparsity structure alone contains useful information about redundancy and can therefore produce a competitive compressed model, it is important to interleave pruning with training. Though our method involves just one pruning stage, other pruning methods interleave pruning with training more closely by including several iterations (Collins and Kohli, 2014; Han et al., 2015b). We expect that implementing this for NMT would likely result in further compression and performance improvements.

# 4.4 Storage size
"1602.07360"
] |
The original unpruned model (a MATLAB file) has size 782MB. The 80% pruned and retrained model is 272MB, which is a 65.2% reduction. In this work we focus on compression in terms of number of parameters rather than storage size, because it is invariant across implementations.

# 4.5 Distribution of redundancy in NMT

We visualize in Figure 8 the redundancy structure of our NMT baseline model. Black pixels represent weights near to zero (those that can be pruned); white pixels represent larger ones. First we consider the embedding weight matrices, whose columns correspond to words in the vocabulary. Unsurprisingly, in Figure 8, we see that the parameters corresponding to the less common words are more dispensable. In fact, at the 80% pruning rate, for 100 uncommon source words and 1194 uncommon target words, we delete all parameters corresponding to that word. This is not quite the same as removing the word from the vocabulary: true out-of-vocabulary words are mapped to the embedding for the "unknown word" symbol, whereas these "pruned-out" words are mapped to a zero embedding. However, in the original unpruned model these uncommon words already had near-zero embeddings, indicating that the model was unable to learn sufficiently distinctive representations.
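This per-word analysis is straightforward to reproduce. The sketch below, assuming the embedding mask is stored as an n × V array (one column per word) as in Figure 2, counts the vocabulary words whose entire embedding column has been pruned away:

```python
import numpy as np

def fully_pruned_words(mask):
    """Return indices of vocabulary words whose whole embedding is pruned.

    `mask` is an n x V binary array (1 = kept weight); a word is
    "pruned out" when every entry of its column is zero.
    """
    kept_per_word = mask.sum(axis=0)         # kept parameters per column
    return np.where(kept_per_word == 0)[0]   # word ids with zero embedding
```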
"1602.07360"
] |
Returning to Figure 8, now look at the eight weight matrices for the source and target connections at each of the four layers. Each matrix corresponds to the 4n × 2n matrix $T_{4n,2n}$ in Equation (2). In all eight matrices, we observe, as does Lu et al. (2016), that the weights connecting to the input ĥ are most crucial, followed by the input gate i, then the output gate o, then the forget gate f. This is particularly true of the lower layers, which focus primarily on the input ĥ. However, for higher layers, especially on the target side, weights connecting to the gates are as important as those connecting to the input ĥ. The gates represent the LSTM's ability to add to, delete from or retrieve information from the memory cell. Figure 8 therefore shows that these sophisticated memory cell abilities are most important at the end of the NMT pipeline (the top layer of the decoder). This is reasonable, as we expect higher-level features to be learned later in a deep learning pipeline. We also observe that for lower layers, the feed-forward input is much more important than the recurrent input, whereas for higher layers the recurrent input becomes more important. This makes sense: lower layers concentrate on the low-level information from the current word embedding (the feed-forward input), whereas higher layers make use of the higher-level representation of the sentence so far (the recurrent input). Lastly, on close inspection, we notice several white diagonals emerging within some subsquares of the matrices in Figure 8, indicating that even without initializing the weights to identity matrices (as is sometimes done (Le et al., 2015)), an identity-like weight matrix is learned. At higher pruning percentages, these diagonals become more pronounced.

# 5 Generalizability of our results

To test the generalizability of our results, we also test our pruning approach on a smaller, non-state-of-the-art NMT model trained on the WIT3 Vietnamese-English dataset (Cettolo et al., 2012), which consists of 133,000 sentence pairs. This model is effectively a scaled-down version of the state-of-the-art model in Luong et al. (2015a), with fewer layers, smaller vocabulary size, smaller hidden layer size, no attention mechanism, and about 11% as many parameters in total. It achieves a BLEU score of 9.61 on the validation set. Although this model and its training set are on a different scale to our main model, and the language pair is different, we found very similar results. For this model, it is possible to prune 60% of parameters with no immediate performance loss, and with retraining it is possible to prune 90% and regain original performance. Our main observations from Sections 4.1 to 4.5 are also replicated; in particular, class-blind pruning is most successful, "sparse from the beginning" models are less successful than pruned and retrained models, and we observe the same patterns as seen in Figure 8.

# 6 Future Work
"1602.07360"
] |
As noted in Section 4.3, including several iterations of pruning and retraining would likely improve the compression and performance of our pruning method. If possible, it would be highly valuable to exploit the sparsity of the pruned models to speed up training and runtime, perhaps through sparse matrix representations and multiplications (see Section 3.4). Though we have found magnitude-based pruning to perform very well, it would be instructive to revisit the original claim that other pruning methods (for example Optimal Brain Damage and Optimal Brain Surgeon) are more principled, and perform a comparative study.
"1602.07360"
] |
# 7 Conclusion

We have shown that weight pruning with retraining is a highly effective method of compression and regularization on a state-of-the-art NMT system, compressing the model to 20% of its size with no loss of performance. Though we are the first to apply compression techniques to NMT, we obtain a similar degree of compression to other current work on compressing state-of-the-art deep neural networks, with an approach that is simpler than most. We have found that the absolute size of parameters is of primary importance when choosing which to prune, leading to an approach that is extremely simple to implement, and can be applied to any neural network. Lastly, we have gained insight into the distribution of redundancy in the NMT architecture.
"1602.07360"
] |
# 8 Acknowledgment

This work was partially supported by NSF Award IIS-1514268 and partially supported by a gift from Bloomberg L.P. We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Lastly, we acknowledge NVIDIA Corporation for the donation of Tesla K40 GPUs.

# References

M. Gethsiyal Augasta and Thangairulappan Kathirvalavakumar. 2013. Pruning algorithms of neural networks: a comparative study. Central European Journal of Computer Science, 3(3):105-115.
"1602.07360"
] |
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.

Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web inventory of transcribed and translated talks. In EAMT.

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. 2015. Compressing neural networks with the hashing trick. In ICML.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP.
"1602.07360"
] |
Maxwell D. Collins and Pushmeet Kohli. 2014. Memory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. Training deep neural networks with low precision multiplications. In ICLR workshop.

Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS.
"1602.07360"
] |
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. 2015. Deep learning with limited numerical precision. In ICML.

Song Han, Huizi Mao, and William J. Dally. 2015a. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR.

Song Han, Jeff Pool, John Tran, and William Dally. 2015b. Learning both weights and connections for efficient neural network. In NIPS.
"1602.07360"
] |
Babak Hassibi and David G. Stork. 1993. Second order derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
"1602.07360"
] |
Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. 2014. Speeding up convolutional neural networks with low rank expansions. In NIPS.

Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015a. On using very large target vocabulary for neural machine translation. In ACL.
"1602.07360"
] |
Sébastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015b. Montreal neural machine translation systems for WMT'15. In WMT.

Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP.

Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton. 2015.
"1602.07360"
] |
A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941.

Yann Le Cun, John S. Denker, and Sara A. Solla. 1989. Optimal brain damage. In NIPS.

Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. 2016. Neural networks with few multiplications. In ICLR.

Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. 2016.
"1602.07360"
] |
Learning compact recurrent neural networks. In ICASSP.

Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domain. In IWSLT.

Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention-based neural machine translation. In EMNLP.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In ACL.

Kenton Murray and David Chiang. 2015.
"1602.07360"
] |
Auto-sizing neural networks: With applications to n-gram language models. In EMNLP.

Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, and Ian McGraw. 2016. On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition. In ICASSP.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data.
"1602.07360"
] |
In ACL.

Suraj Srinivas and R. Venkatesh Babu. 2015. Data-free parameter pruning for deep neural networks. In BMVC.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
"1602.07360"
] |
|
arXiv:1606.08514v4 [cs.AI] 23 Jul 2020

# Towards Verified Artificial Intelligence

Sanjit A. Seshia*, Dorsa Sadigh†, and S. Shankar Sastry*
*University of California, Berkeley  †Stanford University
[email protected]

July 21, 2020

# Abstract
"1606.06565"
] |
|
Verified artificial intelligence (AI) is the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically-specified requirements. This paper considers Verified AI from a formal methods perspective. We describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges.

# 1 Introduction

Artificial intelligence (AI) is a term used for computational systems that attempt to mimic aspects of human intelligence, including functions we intuitively associate with human minds such as "learning" and "problem solving" (e.g., see [17]). Russell and Norvig [66] describe AI as the study of general principles of rational agents and components for constructing these agents. We interpret the term AI broadly to include closely-related areas such as machine learning (ML) [53]. Systems that heavily use AI, henceforth referred to as AI-based systems, have had a significant impact in society in domains that include healthcare, transportation, finance, social networking, e-commerce, education, etc. This growing societal-scale impact has brought with it a set of risks and concerns including errors in AI software, cyber-attacks, and safety of AI-based systems [64, 21, 4].
"1606.06565"
] |
Therefore, the question of verification and validation of AI-based systems has begun to demand the attention of the research community. We define "Verified AI" as the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically-specified requirements. How can we achieve this goal? A natural starting point is to consider formal methods: a field of computer science and engineering concerned with the rigorous mathematical specification, design, and verification of systems [86, 16]. At its core, formal methods is about proof: formulating specifications that form proof obligations, designing systems to meet those obligations, and verifying, via algorithmic proof search, that the systems indeed meet their specifications. A spectrum of formal methods, from specification-driven testing and simulation [29], to model checking [14, 62, 15] and theorem proving (see, e.g., [58, 43, 37]) are used routinely in the computer-aided design of integrated circuits and have been widely applied to find bugs in software, analyze embedded systems, and find security vulnerabilities. At the heart of these advances are computational proof engines such as Boolean satisfiability (SAT) solvers [50], Boolean reasoning and manipulation routines based on Binary Decision Diagrams (BDDs) [9], and satisfiability modulo theories (SMT) solvers [6]. In this paper, we consider the challenge of Verified AI from a formal methods perspective. That is, we review the manner in which formal methods have traditionally been applied, analyze the challenges this approach may face for AI-based systems, and propose ideas to overcome these challenges. We emphasize that our discussion is focused on the role of formal methods and does not cover the broader set of techniques
"1606.06565"
] |
that could be used to improve assurance in AI-based systems. Additionally, we seek to identify challenges applicable to a broad range of AI/ML systems, and not limited to specific technologies such as deep neural networks (DNNs) or reinforcement learning (RL) systems. Our view of the challenges is largely shaped by problems arising from the use of AI and ML in autonomous and semi-autonomous systems, though we believe the ideas presented here apply more broadly. We begin in Sec. 2 with some brief background on formal verification and an illustrative example. We then outline challenges for Verified AI in Sec. 3 below, and describe ideas to address each of these challenges in Sec. 4.¹

# 2 Background and Illustrative Example

Consider the typical formal verification process as shown in Figure 1, which begins with the following three inputs:

1. A model of the system to be verified, S;
2. A model of the environment, E; and
3. The property to be verified, Φ.

The verifier generates as output a YES/NO answer, indicating whether or not S satisfies the property Φ in environment E. Typically, a NO output is accompanied by a counterexample, also called an error trace, which is an execution of the system that indicates how Φ is violated. Some formal verification tools also include a proof or certificate of correctness with a YES answer.
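Schematically, a verifier can be viewed as a function of the three inputs returning a verdict with an optional witness. The toy sketch below is our illustration, not a real tool; it uses a single bounded simulation as a crude stand-in for genuine proof search:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

Trace = list  # an execution: a sequence of states

@dataclass
class Verdict:
    holds: bool                       # YES/NO answer
    counterexample: Optional[Trace]   # error trace accompanying a NO
    certificate: Optional[object]     # optional proof object with a YES

def verify(system: Callable, environment: Iterable,
           prop: Callable[[Trace], bool],
           init_state, horizon: int) -> Verdict:
    """Bounded check of `prop` on the composed closed loop S || E.

    This simulation-based check is an under-approximation of real
    verification: it explores only one environment behavior.
    """
    trace = [init_state]
    env_inputs = iter(environment)
    for _ in range(horizon):
        trace.append(system(trace[-1], next(env_inputs)))
        if not prop(trace):
            return Verdict(False, trace, None)  # NO, with error trace
    return Verdict(True, None, None)            # YES (bounded)
```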
"1606.06565"
] |
Figure 1: Formal verification procedure. [Diagram: the system model S and environment model E are composed and, together with the property Φ, passed to the verifier, which outputs YES (optionally with a proof) or NO with a counterexample.]

In this paper, we take a broad view of formal methods: any technique that uses some aspect of formal specification, or verification, or synthesis, is included. For instance, we include simulation-based hardware verification methods or model-based testing methods for software, since they use formal specifications or models to guide the process of simulation or testing. In order to apply formal verification to AI-based systems, at a minimum, one must be able to represent the three inputs S, E and Φ in formalisms for which (ideally) there exist efficient decision procedures to answer the YES/NO question as described above. However, as we describe in Sec. 3, even constructing good representations of the three inputs is not straightforward, let alone dealing with the complexity of the underlying decision problems and associated design issues. We will illustrate the ideas in this paper with examples from the domain of (semi-)autonomous driving. Fig 2 shows an illustrative example of an AI-based system: a closed-loop cyber-physical system comprising
"1606.06565"
] |
a semi-autonomous vehicle with machine learning components along with its environment.

¹The first version of this paper was published in July 2016 in response to the call for white papers for the CMU Exploratory Workshop on Safety and Control for AI held in June 2016, and a second version in October 2017. This is the latest version, reflecting the evolution of the authors' view of the challenges and approaches for Verified AI.
"1606.06565"
] |
Specifically, assume that the semi-autonomous "ego vehicle" has an automated emergency braking system (AEBS) that attempts to detect and classify objects in front of it and actuate the brakes when needed to avert a collision. Figure 2 shows the AEBS as a system composed of a controller (automatic braking), a plant (vehicle sub-system under control including other parts of the autonomy stack), and a sensor (camera) along with a perception component implemented using a deep neural network. The AEBS, when combined with the vehicle's environment, forms a closed loop cyber-physical system. The controller regulates the acceleration and braking of the plant using the velocity of the ego vehicle and the distance between it and an obstacle. The environment of the ego vehicle comprises both agents and objects outside the vehicle (other vehicles,
"1606.06565"
] |
pedestrians, road objects, etc.) as well as inside the vehicle (e.g., a driver). A safety requirement for this closed loop system can be informally characterized as the property of maintaining a safe distance between the moving ego vehicle and any other agent or object on the road. However, as we will see in Sec. 3, there are many nuances to the specification, modeling, and verification of a system such as this one.

Figure 2: Example of closed-loop cyber-physical system with machine learning components (introduced in [22]). [Diagram: a sensor feeds learning-based perception, which feeds the controller driving the plant, all operating within the environment.]
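As an illustration only, a drastically simplified version of such a closed loop can be simulated as below; the braking law, constants, and `perceive` stub are invented for this sketch and are not from the paper:

```python
def aebs_step(state, perceived_distance, dt=0.1, max_brake=8.0):
    """One step of a toy AEBS loop: brake when the perceived distance
    falls below 1.5x the distance needed to stop at the current speed."""
    v, gap = state                          # ego velocity (m/s), true gap (m)
    stopping_dist = v * v / (2 * max_brake)
    a = -max_brake if perceived_distance < 1.5 * stopping_dist else 0.0
    v = max(0.0, v + a * dt)
    gap = gap - v * dt                      # obstacle assumed stationary
    return (v, gap)

def run_aebs(perceive, v0=20.0, gap0=60.0, steps=100):
    """Roll out the loop; `perceive` stands in for the DNN-based
    perception component, mapping the true gap to an estimate."""
    state = (v0, gap0)
    for _ in range(steps):
        state = aebs_step(state, perceive(state[1]))
        if state[1] <= 0:
            return False    # collision: the safety property is violated
    return True             # maintained a safe distance over the horizon

# Example: ideal perception keeps the loop safe in this toy setup.
# run_aebs(lambda gap: gap)  -> True
```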
"1606.06565"
] |
1606.08514#8 | Towards Verified Artificial Intelligence | pedestrians, road closed loop system the moving ego are many nuances # 3 Challenges for Veriï¬ ed AI We identify ï¬ ve major challenges to achieving formally-veriï¬ ed AI-based systems, described in more detail below. # 3.1 Environment Modeling The environments in which AI/ML-based systems operate can be very complex, with considerable uncer- tainty even about how many and which agents are in the environment (both human and robotic), let alone about their intentions and behaviors. As an example, consider the difï¬ culty in modeling urban trafï¬ c envi- ronments in which an autonomous car must operate. Indeed, AI/ML is often introduced into these systems precisely to deal with such complexity and uncertainty! From a formal methods perspective, this makes it very hard to create realistic environment models with respect to which one can perform veriï¬ cation or synthesis. We see the main challenges for environment modeling as being threefold: â ¢ Unknown Variables: In the traditional success stories for formal veriï¬ cation, such as verifying cache coherence protocols or device drivers, the interface between the system S and its environment E is well- deï¬ | 1606.08514#7 | 1606.08514#9 | 1606.08514 | [
"1606.06565"
] |
# 3 Challenges for Verified AI

We identify five major challenges to achieving formally-verified AI-based systems, described in more detail below.

# 3.1 Environment Modeling

The environments in which AI/ML-based systems operate can be very complex, with considerable uncertainty even about how many and which agents are in the environment (both human and robotic), let alone about their intentions and behaviors. As an example, consider the difficulty in modeling urban traffic environments in which an autonomous car must operate. Indeed, AI/ML is often introduced into these systems precisely to deal with such complexity and uncertainty! From a formal methods perspective, this makes it very hard to create realistic environment models with respect to which one can perform verification or synthesis. We see the main challenges for environment modeling as being threefold:

• Unknown Variables: In the traditional success stories for formal verification, such as verifying cache coherence protocols or device drivers, the interface between the system S and its environment E is well-defined. The environment can only influence the system through this interface. However, for AI-based systems, such as the autonomous vehicle example of Sec. 2, it may be impossible to precisely define all the variables (features) of the environment. Even in restricted scenarios where the environment variables
"1606.06565"
] |
(agents) are known, there is a striking lack of information, especially at design time, about their behaviors. Additionally, modeling sensors such as LiDAR that represent the interface to the environment is in itself a major technical challenge.
"1606.06565"
] |
• Modeling with the Right Fidelity: In traditional uses of formal verification, it is usually acceptable to model the environment as a non-deterministic process subject to constraints specified in a suitable logic or automata-based formalism. Typically such an environment model is termed "over-approximate", meaning that it may include (many) more environment behaviors than are possible. Over-approximate environment modeling permits one to perform sound verification without a detailed environment model, which can be
"1606.06565"
] |
inefficient to reason with and hard to obtain. However, for AI-based autonomy, purely non-deterministic modeling is likely to produce highly over-approximate models, which in turn yield too many spurious bug reports, rendering the verification process useless in practice. Moreover, many AI-based systems make distributional assumptions on the environment, thus requiring probabilistic modeling; however, it can be difficult to exactly ascertain the underlying distributions. One can address this by learning a probabilistic model from data, but in this case it is important to remember that the model parameters (e.g., transition probabilities) are only estimates, not precise representations of environment behavior. Thus, verification algorithms cannot consider the resulting probabilistic model to be "perfect"; we need to represent uncertainty in the model itself.
"1606.06565"
] |
• Modeling Human Behavior: For many AI-based systems, such as semi-autonomous vehicles, human agents are a key part of the environment and/or system. Researchers have attempted modeling humans as non-deterministic or stochastic processes with the goal of verifying the correctness of the overall system [63, 67]. However, such approaches must deal with the variability and uncertainty in human behavior. One could take a data-driven approach based on machine learning (e.g., [55]), but such an approach is sensitive to the expressivity of the features used by the ML model and the quality of data.
"1606.06565"
] |
In order to achieve Verified AI for such human-in-the-loop systems, we need to address the limitations of current human modeling techniques and provide guarantees about their prediction accuracy and convergence. When learned models are used, one must represent any uncertainty in the learned parameters as a first-class entity in the model, and take that into account in verification and control.

The first challenge, then, is to come up with a systematic method of environment modeling that allows one to provide provable guarantees on the system's behavior even when there is considerable uncertainty about the environment.

# 3.2 Formal Specification

Formal verification critically relies on having a formal specification: a precise, mathematical statement of what the system is supposed to do. However, the challenge of coming up with a high-quality formal specification is well known, even in application domains in which formal verification has found considerable success (see, e.g., [7]). This challenge is only exacerbated in AI-based systems. We identify three major problems.
"1606.06565"
] |
1606.08514#15 | Towards Verified Artificial Intelligence | is common in machine learning. Labeled "ground truth" data is often the only specification of correct behavior. On the other hand, a specification in formal methods is a mathematical property that defines the set of correct behaviors. How can we bridge this gap? Thus, the second challenge is to design effective methods to specify desired and undesired properties of systems that use AI- or ML-based components. # 3.3 Modeling Learning Systems In most traditional applications of formal verification, the system S is precisely known: it is a program or a circuit described in a programming language or hardware description language. The system modeling problem is primarily concerned with reducing the size of S to a more tractable one by abstracting away irrelevant details. AI-based systems lead to a very different challenge for system modeling, primarily stemming from the use of machine learning: • Very high-dimensional input space: ML components used for perception usually operate over very high-dimensional input spaces. For the illustrative example of Sec. 2 from [22], each input RGB image is of dimension 1000 × 600 pixels, contains 256^(1000 × 600 × | 1606.08514#14 | 1606.08514#16 | 1606.08514 | [
"1606.06565"
] |
1606.08514#16 | Towards Verified Artificial Intelligence | 3) elements, and in general the input is a stream of such high-dimensional vectors. Although formal methods has been used for high-dimensional input spaces (e.g., in digital circuits), the nature of the input spaces for ML-based perception is different – not entirely Boolean, but hybrid, including both discrete and continuous variables. • Very high-dimensional parameter/state space: ML components such as deep neural networks have anywhere from thousands to millions of model parameters and primitive components. For example, state-of-the-art DNNs used by the authors in instantiations of the example of Fig. 2 have up to 60 million parameters and tens of layers. This gives rise to a huge search space for verification that requires careful abstraction. • Online adaptation and evolution: Some learning systems, such as a robot using reinforcement learning, evolve as they encounter new data and situations. For such systems, design-time verification must either account for future changes in the behavior of the system, or else be performed incrementally and online as the learning system evolves. • Modeling systems in context: For many AI/ML components, their specification is only defined by the context. | 1606.08514#15 | 1606.08514#17 | 1606.08514 | [
"1606.06565"
] |
1606.08514#17 | Towards Verified Artificial Intelligence | For example, verifying robustness of a DNN such as the one in Fig. 2 requires us to capture a model of the surrounding system. We need techniques to model ML components along with their context so that semantically meaningful properties can be verified. # 3.4 Efficient and Scalable Design and Verification of Models and Data The effectiveness of formal methods in the domains of hardware and software has been driven by advances in underlying "computational engines" – e.g., SAT, SMT, numerical simulation, and model checking. Given the scale of AI/ML systems, the complexity of their environments, and the new types of specifications involved, several advances are needed in creating computational engines for efficient and scalable training, testing, design, and verification of AI-based systems. We identify here the key challenges that must be overcome in order to achieve these advances. | 1606.08514#16 | 1606.08514#18 | 1606.08514 | [
"1606.06565"
] |
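To make the role of these computational engines concrete, here is a minimal, hypothetical sketch that poses a tiny verification query to the Z3 SMT solver (assuming the `z3-solver` Python package); the braking invariant and every numeric bound are invented for illustration and are not drawn from the paper.

```python
# Ask an SMT solver for a counterexample to a toy one-step invariant:
# "under braking-only acceleration, speed never increases".
from z3 import Real, Solver

v, a, dt = Real("v"), Real("a"), Real("dt")
v_next = v + a * dt

s = Solver()
# Hypothetical operating assumptions: bounded speed, braking only,
# small positive time step.
s.add(v >= 0, v <= 30, a >= -5, a <= 0, dt > 0, dt <= 0.1)
s.add(v_next > v)  # negation of the invariant

print(s.check())  # "unsat": no counterexample exists under the assumptions
```

Engines for AI-based systems must answer queries of this flavor at far larger scale, over hybrid, probabilistic, and learned models.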
1606.08514#18 | Towards Verified Artificial Intelligence | Data Generation: Data is the fundamental starting point for machine learning. Any quest to improve the quality of a machine learning system must improve the quality of the data it learns from. Can formal methods help to systematically select, design and augment the data used for machine learning? We believe the answer is yes, but that more needs to be done. Formal methods has proved effective for the systematic generation of counterexamples and test data that satisfy constraints, including for simulation-based verification of circuits (e.g., [44]) and finding security exploits in commodity software (e.g., [5]). | 1606.08514#17 | 1606.08514#19 | 1606.08514 | [
"1606.06565"
] |
1606.08514#19 | Towards Verified Artificial Intelligence | However, the requirements for AI/ML systems are different. The types of constraints can be much more complex, e.g., encoding requirements on "realism" of data captured using sensors from a complex environment such as a traffic situation. We need to generate not just single data items, but an ensemble that satisfies distributional constraints. Additionally, data generation must be selective, e.g., meeting objectives on data set size and diversity for effective training and generalization. All of these additional requirements necessitate the development of a new suite of formal techniques. Quantitative Verification: | 1606.08514#18 | 1606.08514#20 | 1606.08514 | [
"1606.06565"
] |
1606.08514#20 | Towards Verified Artificial Intelligence | Several safety-critical applications of AI-based systems are in robotics and cyber-physical systems. In such systems, the scalability challenge for verification can be very high. In addition to the scale of systems as measured by traditional metrics (dimension of state space, number of components, etc.), the types of components can be much more complex. For instance, in (semi-)autonomous driving, autonomous vehicles and their controllers need to be modeled as hybrid systems combining both discrete and continuous dynamics. Moreover, agents in the environment (humans, other vehicles) may need to be modeled as probabilistic processes. Finally, the requirements may involve not only traditional Boolean specifications on safety and liveness, but also quantitative requirements on system robustness and performance. Yet, most of the existing verification methods are targeted towards answering Boolean verification questions. To address this gap, new scalable engines for quantitative verification must be developed. Compositional Reasoning: In order for formal methods to scale to large AI/ML systems, compositional (modular) reasoning is essential. In compositional verification, a large system (e.g., program) is split up into its components (e.g., procedures), each component is verified against a specification, and then the component specifications together entail the system-level specification. A common approach for compositional verification is the use of assume-guarantee contracts. For example, a procedure assumes something about its starting state (pre-condition) and in turn guarantees something about its ending state (post-condition). Similar assume-guarantee paradigms have been developed for concurrent software and hardware systems. A theory of assume-guarantee contracts does not yet exist for AI-based systems. Moreover, AI/ML systems pose a particularly vexing challenge for compositional reasoning. Compositional verification requires compositional specification – i.e., the components must be formally specifiable. However, as noted in Sec. 3.2, it may be impossible to formally specify the correct behavior of a perception component. One of the challenges, then, is to develop techniques for compositional reasoning that do not rely on having complete compositional specifications [75]. | 1606.08514#19 | 1606.08514#21 | 1606.08514 | [
"1606.06565"
] |
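To make the assume-guarantee idea concrete before turning to its extensions, here is a minimal, hypothetical Python sketch; the perception and controller contracts and every numeric bound are invented. A real compositional proof would discharge the final implication for all traces rather than evaluating it on one.

```python
# Contracts as predicates over a component's input/output, plus a
# trace-level check that the contracts entail a system-level property.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    assume: Callable     # pre-condition on the component's input
    guarantee: Callable  # relation between the input and the output

# Hypothetical perception contract: within sensor range, the distance
# estimate is accurate to 0.5 m.
perception = Contract(
    assume=lambda true_dist: 0.0 <= true_dist <= 100.0,
    guarantee=lambda true_dist, est: abs(est - true_dist) <= 0.5,
)

# Hypothetical controller contract: brake exactly when the estimate
# drops below 10 m (no assumption needed on the estimate).
controller = Contract(
    assume=lambda est: True,
    guarantee=lambda est, brake: brake == (est < 10.0),
)

def entails_system_property(true_dist: float, est: float, brake: bool) -> bool:
    """If both components meet their contracts on this trace, the
    system-level property 'brake whenever closer than 9.5 m' follows."""
    ok_perc = (not perception.assume(true_dist)) or perception.guarantee(true_dist, est)
    ok_ctrl = (not controller.assume(est)) or controller.guarantee(est, brake)
    system_prop = (true_dist >= 9.5) or brake
    return (not (ok_perc and ok_ctrl)) or system_prop
```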
1606.08514#21 | Towards Verified Artificial Intelligence | Additionally, more work needs to be done for extending the theory and application of compositional reasoning to probabilistic systems and specifications. # 3.5 Correct-by-Construction Intelligent Systems In an ideal world, verification should be integrated with the design process so that the system is "correct-by-construction." Such an approach could either interleave verification steps with compilation/synthesis steps, such as in the register-transfer-level (RTL) design flow common in integrated circuits, or devise synthesis algorithms so as to ensure that the implementation satisfies the specification, such as in reactive synthesis from temporal logic [60]. Can we devise a suitable correct-by-construction design flow for AI-based systems? | 1606.08514#20 | 1606.08514#22 | 1606.08514 | [
"1606.06565"
] |
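One way to read "correct-by-construction" operationally, sketched below under invented names, is a design loop that only ever emits an implementation once a verifier has accepted it against the specification; the spec, candidate space, and exhaustive verifier here are toy stand-ins for real synthesis and verification engines.

```python
# A hypothetical synthesize-then-verify skeleton: candidates are emitted
# only after passing the verifier, so the output is correct by construction
# with respect to the (toy) spec and input domain.
def verifier(spec, candidate):
    # Stand-in for a formal engine; here, exhaustive checking over a
    # small finite input domain.
    return all(spec(x, candidate(x)) for x in range(-50, 51))

def synthesize(spec, candidates):
    for c in candidates:
        if verifier(spec, c):
            return c  # verified at design time
    return None

# Toy spec: the output must always upper-bound the absolute input.
spec = lambda x, y: y >= abs(x)
candidates = [lambda x: x, lambda x: x * x + 1, lambda x: abs(x)]
print(synthesize(spec, candidates)(-3))  # 10: the first verified candidate
```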
1606.08514#22 | Towards Verified Artificial Intelligence | Specification-Driven Design of ML Components: Can we design, from scratch, a machine learning component (model) that provably satisfies a formal specification? (This assumes, of course, that we solve the formal specification challenge described above in Sec. 3.2.) The clean-slate design of an ML component has many aspects: (1) designing the data set, (2) synthesizing the structure of the model, (3) generating a | 1606.08514#21 | 1606.08514#23 | 1606.08514 | [
"1606.06565"
] |
1606.08514#23 | Towards Verified Artificial Intelligence | good set of features, (4) synthesizing hyper-parameters and other aspects of ML algorithm selection, and (5) automated techniques for debugging ML models or the specification when synthesis fails. More progress is needed on all these fronts. Theories of Compositional Design: Another challenge is to design the overall system comprising multiple learning and non-learning components. While theories of compositional design have been developed for digital circuits and embedded systems (e.g. [70, 80]), we do not as yet have such theories for AI-based systems. For example, if two ML models are used for perception on two different types of sensor data (e.g., LiDAR and visual images), and individually satisfy their specifications under certain assumptions, under what conditions can they be used together to improve the reliability of the overall system? And how can one design a planning component so as to overcome limitations of an ML-based perception component that it receives input from? Bridging Design Time and Run Time for Resilient AI: Due to the complexity of AI-based systems and the environments in which they operate, even if all the challenges for specification and verification are solved, it is likely that one will not be able to prove unconditional safe and correct operation. There will always be situations in which we do not have a provable guarantee of correctness. Therefore, techniques for achieving fault tolerance and error resilience at run time must play a crucial role. In particular, there is not yet a systematic understanding of what can be achieved at design time, how the design process can contribute to safe and correct operation of the AI system at run time, and how the design-time and run-time techniques can interoperate effectively. # 4 Principles for Verified AI For each of the challenges described in the preceding section, we suggest a corresponding set of principles to follow in the design/verification process to address that challenge. These five principles are: 1. Use an introspective, data-driven, and probabilistic approach to model the environment; 2. Combine formal specifications of end-to-end behavior with hybrid Boolean-quantitative formalisms for learning systems and perception components and use specification mining to bridge the data-property gap; 3. For ML components, develop new abstractions, explanations, and semantic analysis techniques; 4. | 1606.08514#22 | 1606.08514#24 | 1606.08514 | [
"1606.06565"
] |
1606.08514#24 | Towards Verified Artificial Intelligence | Create a new class of compositional, randomized, and quantitative formal methods for data generation, testing, and verification, and 5. Develop techniques for formal inductive synthesis of AI-based systems and design of safe learning systems, supported by techniques for run-time assurance. We have successfully applied these principles over the past few years, and, based on this experience, believe that they provide a good starting point for applying formal methods to AI-based systems. Our formal methods perspective on the problem complements other perspectives that have been expressed (e.g., [4]). Experience over the past few years provides evidence that the principles we suggest can point a way towards the goal of Verifi | 1606.08514#23 | 1606.08514#25 | 1606.08514 | [
"1606.06565"
] |
1606.08514#25 | Towards Verified Artificial Intelligence | ed AI. # 4.1 Environment Modeling: Introspection, Probabilities, and Data Recall from Sec. 3.1 the three challenges for modeling the environment E of an AI-based system S: unknown variables, model fidelity, and human modeling. We propose to tackle these challenges with three corresponding principles. Introspective Environment Modeling: We suggest to address the unknown variables problem by developing design and verification methods that are introspective, i.e., they algorithmically identify assumptions A that system S makes about the environment E that are sufficient to guarantee the satisfaction of the specification | 1606.08514#24 | 1606.08514#26 | 1606.08514 | [
"1606.06565"
] |
1606.08514#26 | Towards Verified Artificial Intelligence | Φ [76]. The assumptions A must be ideally the weakest such assumptions, and also must be efficient to generate at design time and monitor at run time over available sensors and other sources of information about the environment so that mitigating actions can be taken when they are violated. Moreover, if there is a human operator involved, one might want A to be translatable into an explanation that is human-understandable, so that S can "explain" to the human why it may not be able to satisfy the specification Φ. Dealing with these multiple requirements, as well as the need for good sensor models, makes introspective environment modeling a highly non-trivial task that requires substantial progress [76]. Preliminary work by the authors has shown that such extraction of monitorable assumptions is feasible in very simple cases [48], although more research is required to make this practical. Active Data-Driven Modeling: We believe human modeling requires an active data-driven approach. Relevant theories from cognitive science and psychology, such as that of bounded rationality [81, 65], must be leveraged, but it is important for those models to be expressed in formalisms compatible with formal methods. Additionally, while using a data-driven approach to infer a model, one must be careful to craft the right model structure and features. A critical aspect of human modeling is to capture human intent. We believe a three-pronged approach is required: first, define model templates/features based on expert knowledge; then, use offline learning to complete the model for design time use, and finally, learn and update environment models at run time by monitoring and interacting with the environment. | 1606.08514#25 | 1606.08514#27 | 1606.08514 | [
"1606.06565"
] |
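As a minimal sketch of the data-driven step, the following hypothetical code fits maximum-likelihood transition probabilities for a discretized driver-action model from logged traces; the action names and traces are invented, and a real pipeline would also retain uncertainty in the learned probabilities (cf. the convex MDPs discussed just below).

```python
# Estimate a Markov-chain model of discretized driver actions from
# hypothetical logged traces, for later use in verification or control.
from collections import Counter, defaultdict

traces = [
    ["cruise", "cruise", "brake", "cruise"],
    ["cruise", "lane_change", "cruise", "brake"],
]

counts = defaultdict(Counter)
for trace in traces:
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1

# Maximum-likelihood estimates; keeping confidence intervals instead of
# point estimates leads to interval/convex-MDP style models.
model = {
    cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
    for cur, c in counts.items()
}
print(model["cruise"])  # {'cruise': 0.25, 'brake': 0.5, 'lane_change': 0.25}
```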
1606.08514#27 | Towards Verified Artificial Intelligence | Initial work has shown how data gathered from driving simulators via human subject experiments can be used to generate models of human driver behavior that are useful for verification and control of autonomous vehicles [67, 69]. Probabilistic Formal Modeling: In order to tackle the model fidelity challenge, we suggest to use formalisms that combine probabilistic and non-deterministic modeling. Where probability distributions can be reliably specified or estimated, one can use probabilistic modeling. In other cases, non-deterministic modeling can be used to over-approximate environment behaviors. While formalisms such as Markov Decision Processes (MDPs) already provide a way to blend probability and non-determinism, we believe techniques that blend probability and logical or automata-theoretic formalisms, such as the paradigm of probabilistic programming [52, 32], can provide an expressive and programmatic way to model environments. We expect that in many cases, such probabilistic programs will need to be learned/synthesized (in part) from data. In this case, any uncertainty in learned parameters must be propagated to the rest of the system and represented in the probabilistic model. For example, the formalism of convex Markov decision processes (convex MDPs) [56, 61, 67] provides a way of representing uncertainty in the values of learned transition probabilities. Algorithms for verification and control may then need to be extended to handle these new abstractions (see, e.g., [61]). # 4.2 End-to-End Specifications, Hybrid Specifications, and Specification Mining Writing formal specifications for AI/ML components is hard, arguably even impossible if the component imitates a human perceptual task. Even so, we think the challenges described in Sec. 3.2 can be addressed by following three guiding principles. End-to-End/System-Level Specifi | 1606.08514#26 | 1606.08514#28 | 1606.08514 | [
"1606.06565"
] |
1606.08514#28 | Towards Verified Artificial Intelligence | cations: In order to address the specification-for-perception challenge, let us change the problem slightly. We suggest to first focus on precisely specifying the end-to-end behavior of the AI-based system. By "end-to-end" we mean the specification on the entire closed-loop system (see Fig. 2) or a precisely-specifiable sub-system containing the AI/ML component, not on the component alone. Such a specification is also referred to as a "system-level" specification. For our AEBS example, this involves specifying the property Φ corresponding to maintaining a minimum distance from any object during motion. Starting with such a system-level (end-to-end) specification, we then derive from it constraints on the input-output interface of the perception component that guarantee that the system-level specification is satisfied. Such constraints serve as a partial specification under which the perception component can be analyzed (see [22]). Note that these constraints need not be human-readable. Hybrid Quantitative-Boolean Specifications: Boolean and quantitative specifications both have their advantages. On the one hand, Boolean specifications are easier to compose. On the other hand, objective functions lend themselves to optimization based techniques for verification and synthesis, and to defining finer granularities of property satisfaction. One approach to bridge this gap is to move to quantitative specification languages, such as logics with both Boolean and quantitative semantics (e.g. STL [49]) or notions of weighted automata (e.g. [13]). Another approach is to combine both Boolean and quantitative specifications into a common specification structure, such as a rulebook [10], where specifications can be organized in a hierarchy, compared, and aggregated. Additionally, novel formalisms bridging ideas from formal methods and machine learning are being developed to model the different variants of properties such as robustness, fairness, and privacy, including notions of semantic robustness (see, e.g., [77, 24]). | 1606.08514#27 | 1606.08514#29 | 1606.08514 | [
"1606.06565"
] |
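To illustrate how one formula can carry both a Boolean and a quantitative verdict, here is a minimal sketch in the style of STL's quantitative semantics for an AEBS-like requirement "always maintain at least d_min distance"; the trace values are invented.

```python
# Quantitative robustness of G(dist >= d_min) over a finite trace:
# the worst-case margin. Positive means satisfied with slack,
# negative means violated; the sign recovers the Boolean verdict.
def always_at_least(trace, d_min):
    return min(d - d_min for d in trace)

distances = [25.0, 18.2, 12.7, 11.4, 13.0]  # hypothetical trace (meters)
rho = always_at_least(distances, d_min=10.0)
print(rho)        # 1.4: quantitative verdict (margin in meters)
print(rho >= 0)   # True: Boolean verdict
```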
1606.08514#29 | Towards Verified Artificial Intelligence | Specification Mining: In order to bridge the gap between data and formal specifications, we suggest the use of techniques for inferring specifications from behaviors and other artifacts – so-called specification mining techniques (e.g., [26, 47]). Such methods could be used for ML components in general, including for perception components, since in many cases it is not required to have an exact specification or one that is human-readable. Specification mining methods could also be used to infer human intent and other properties from demonstrations [85] or more complex forms of interaction between multiple agents, both human and robotic. # 4.3 System Modeling: Abstractions, Explanations, and Semantic Feature Spaces Let us now consider the challenges, described in Sec. 3.3, arising in modeling systems S that learn from experience. In our opinion, advances in three areas are needed in order to address these challenges: Automated Abstraction: Techniques for automatically generating abstractions of systems have been the linchpins of formal methods, playing crucial roles in extending the reach of formal methods to large hardware and software systems. In order to address the challenges of very high dimensional hybrid state spaces and input spaces for ML-based systems, we need to develop effective techniques to abstract ML models into simpler models that are more amenable to formal analysis. Some promising advances in this regard include the use of abstract interpretation to analyze deep neural networks (e.g. [35]), the use of abstractions for falsifying cyber-physical systems with ML components [22], and the development of probabilistic logics that capture guarantees provided by ML algorithms (e.g., [68]). Explanation Generation: The task of modeling a learning system can be made easier if the learner accompanies its predictions with explanations of how those predictions result from the data and background knowledge. | 1606.08514#28 | 1606.08514#30 | 1606.08514 | [
"1606.06565"
] |
1606.08514#30 | Towards Verified Artificial Intelligence | In fact, this idea is not new – it has long been investigated by the ML community under terms such as explanation-based generalization [54]. Recently, there has been a renewal of interest in using logic to explain the output of learning systems (e.g. [84, 40]). Such approaches to generating explanations that are compatible with the modeling languages used in formal methods can make the task of system modeling for verification considerably easier. ML techniques that incorporate causal and counterfactual reasoning [59] can also ease the generation of explanations for use with formal methods. | 1606.08514#29 | 1606.08514#31 | 1606.08514 | [
"1606.06565"
] |
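A minimal, hypothetical sketch of this direction: fit a small, human-readable rule to a black-box model's decisions so that the rule, rather than the opaque model, can be inspected or passed to formal-methods tooling; the black-box policy, feature names, and rule family are all invented.

```python
# Extract a logic-style explanation by searching a tiny rule family for
# the member that best reproduces the black box's decisions.
def black_box(speed, dist):
    return dist < 0.6 * speed  # stand-in for an opaque learned model

samples = [(s, d) for s in range(5, 40, 5) for d in range(2, 30, 4)]
labels = [black_box(s, d) for s, d in samples]

best_k, best_acc = None, -1.0
for k in [i / 10 for i in range(1, 15)]:  # rule: brake iff dist < k * speed
    acc = sum((d < k * s) == y for (s, d), y in zip(samples, labels)) / len(samples)
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"explanation: brake iff dist < {best_k} * speed (fidelity {best_acc:.2f})")
```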
1606.08514#31 | Towards Verified Artificial Intelligence | Semantic Feature Spaces: The verification and adversarial analysis [36] of ML models is more meaningful when the generated adversarial inputs and counterexamples have semantic meaning in the context in which the ML models are used. There is thus a need for techniques that can analyze ML models in the context of the systems within which they are used, i.e., for semantic adversarial analysis [25]. A key step is to represent the semantic feature space modeling the environment in which the ML system operates, as opposed to the concrete feature space which defines the input space for the ML model. This follows the intuition that the semantically meaningful part of the concrete feature space (e.g. images of traffic scenes) forms a much lower dimensional latent space as compared to the full concrete feature space. For our illustrative example in Fig. 2, the semantic feature space is the lower-dimensional space representing the 3D world around the autonomous vehicle, whereas the concrete feature space is the high-dimensional pixel space. Since the | 1606.08514#30 | 1606.08514#32 | 1606.08514 | [
"1606.06565"
] |
1606.08514#32 | Towards Verified Artificial Intelligence | semantic feature space is lower dimensional, it can be easier to search over (e.g. [22, 38]). However, one typically needs to have a "renderer" that maps a point in the semantic feature space to one in the concrete feature space, and certain properties of this renderer, such as differentiability [46], make it easier to apply formal methods to perform goal-directed search of the semantic feature space. # 4.4 Compositional and Quantitative Methods for Design and Verification of Models and Data Consider the challenge, described in Sec. 3.4, of devising computational engines for scalable training, testing, and verification of AI-based systems. We see three promising directions to tackle this challenge. Controlled Randomization in Formal Methods: Consider the problem of data set design – i.e., systematically generating training data for a ML component in an AI-based system. This synthetic data generation problem has many facets. | 1606.08514#31 | 1606.08514#33 | 1606.08514 | [
"1606.06565"
] |
1606.08514#33 | Towards Verified Artificial Intelligence | First, one must define the space of "legal" inputs so that the examples are well formed according to the application semantics. Secondly, one might want to impose constraints on "realism", e.g., a measure of similarity with real-world data. Third, one might need to impose constraints on the distribution of the generated examples in order to obtain guarantees about convergence of the learning algorithm to the true concept. What can formal methods offer towards solving this problem? We believe that the answer may lie in a new class of randomized formal methods – randomized algorithms for generating test inputs subject to formal constraints and distribution requirements. Specifically, a | 1606.08514#32 | 1606.08514#34 | 1606.08514 | [
"1606.06565"
] |
1606.08514#34 | Towards Verified Artificial Intelligence | recently defined class of techniques, termed control improvisation [31], holds promise. An improviser is a generator of random strings (examples) x that satisfy three constraints: (i) a hard constraint that defines the space of legal x; (ii) a soft constraint defining how the generated x must be similar to real-world examples, and (iii) a randomness requirement defining a constraint on the output distribution. The theory of control improvisation is still in its infancy, and we are just starting to understand the computational complexity and to devise efficient algorithms. Improvisation, in turn, relies on recent progress on computational problems such as constrained random sampling and model counting (e.g., [51, 11, 12]), and generative approaches based on probabilistic programming (e.g. [32]). Quantitative Verification on the Semantic Feature Space: Recall the challenge to develop techniques for verification of quantitative requirements – where the output of the verifier is not just YES/NO but a numeric value. The complexity and heterogeneity of AI-based systems mean that, in general, formal verification of specifications, Boolean or quantitative, is undecidable. (For example, even deciding whether a state of a linear hybrid system is reachable is undecidable.) To overcome this obstacle posed by computational complexity, one must augment the abstraction and modeling methods discussed earlier in this section with novel techniques for probabilistic and quantitative verification over the semantic feature space. | 1606.08514#33 | 1606.08514#35 | 1606.08514 | [
"1606.06565"
] |
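As a minimal, hypothetical illustration of quantitative analysis over a semantic feature space, the sketch below searches two invented scene parameters (initial distance and braking delay), rather than pixels, for the configuration that minimizes the robustness of the safety requirement; the closed-loop simulator and dynamics are toy stand-ins.

```python
# Randomized search over semantic parameters for a robustness minimum;
# a negative minimum corresponds to a falsifying scenario.
import random

def simulate(init_dist, braking_delay):
    # Stand-in closed-loop simulation returning a distance trace.
    dist, speed, trace = init_dist, 20.0, []
    for t in range(50):
        if t * 0.1 >= braking_delay:
            speed = max(0.0, speed - 0.5)
        dist = max(0.0, dist - speed * 0.1)
        trace.append(dist)
    return trace

def robustness(trace, d_min=1.0):
    return min(d - d_min for d in trace)  # quantitative verdict

rng = random.Random(0)
worst = min(robustness(simulate(rng.uniform(5, 60), rng.uniform(0, 2)))
            for _ in range(200))
print(worst)  # negative here means a falsifying scenario was found
```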
1606.08514#35 | Towards Verified Artificial Intelligence | For specification formalisms that have both Boolean and quantitative semantics, such as metric temporal logic, the formulation of verification as optimization is crucial to unifying computational methods from formal methods with those from the optimization literature, such as in simulation-based temporal logic falsification (e.g. [42, 27, 88]), although they must be applied to the semantic feature space for efficiency [23]. Such falsification techniques can also be used for the systematic, adversarial generation of training data for ML components [23]. Techniques for probabilistic verification, such as probabilistic model checking [45, 18], should be extended beyond traditional formalisms such as Markov chains or Markov Decision Processes to verify probabilistic programs over semantic feature spaces. Similarly, work on SMT solving must be extended to more effectively handle cost constraints – in other words, combining SMT solving with optimization methods (e.g., [79, 8]). Compositional Reasoning: As in all applications of formal methods, modularity will be crucial to scalable verification of AI-based systems. However, compositional design and analysis of AI-based systems faces some unique challenges. First, theories of probabilistic assume-guarantee design and verification need to | 1606.08514#34 | 1606.08514#36 | 1606.08514 | [
"1606.06565"
] |
1606.08514#36 | Towards Verified Artificial Intelligence | be developed for the semantic spaces for such systems, building on some promising initial work (e.g. [57]). Second, we suggest the use of inductive synthesis [74] to generate assume-guarantee contracts algorithmically, to reduce the specification burden and ease the use of compositional reasoning. Third, to handle the case of components, such as perception, that do not have precise formal specifications, we suggest techniques that infer component-level constraints from system-level analysis (e.g. [22]) and use such constraints to focus component-level analysis, including adversarial analysis. # 4.5 Formal Inductive Synthesis, Safe Learning, and Run-Time Assurance Developing a correct-by-construction design methodology for AI-based systems, with associated tools, is perhaps the toughest challenge of all. For this to be fully solved, the preceding four challenges must be successfully addressed. However, we do not need to wait until we solve those problems in order to start working on this one. Indeed, a methodology to "design for verification" | 1606.08514#35 | 1606.08514#37 | 1606.08514 | [
"1606.06565"
] |
1606.08514#37 | Towards Verified Artificial Intelligence | may well ease the task on the other four challenges. Formal Inductive Synthesis: First consider the problem of synthesizing learning components correct by construction. The emerging theory of formal inductive synthesis [39, 41] addresses this problem. Formal inductive synthesis is the synthesis from examples of programs that satisfy formal specifications. In machine learning terms, it is the synthesis of models/classifiers that additionally satisfy a formal specification. The most common approach to solving a formal inductive synthesis problem is to use an oracle-guided approach. In oracle-guided synthesis, a learner is paired with an oracle who answers queries. The set of query-response types is defined by an oracle interface. For the example of Fig. 2, the oracle can be a falsifier that can generate counterexamples showing how a failure of the learned component violates the system-level specification. This approach, also known as counterexample-guided inductive synthesis [82], has proved effective in many scenarios. In general, oracle-guided inductive synthesis techniques show much promise for the synthesis of learned components by blending expert human insight, inductive learning, and deductive reasoning [73, 74]. These methods also have a close relation to the sub-field of machine teaching [89]. Safe Learning by Design: There has been considerable recent work on using design-time methods to analyze or constrain learning components so as to ensure safe operation within specified assumptions. A prominent example is safe learning-based control (e.g., [3, 28]). In this approach, a safety envelope is pre-computed and a learning algorithm is used to tune a controller within that envelope. Techniques for efficiently computing such safety envelopes based, for example, on reachability analysis [83], are needed. Relatedly, several methods have been proposed for safe reinforcement learning (see [34]). | 1606.08514#36 | 1606.08514#38 | 1606.08514 | [
"1606.06565"
] |
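Here is a minimal sketch of the safety-envelope pattern just described, with invented bounds: actions proposed by a learned controller are clamped into a state-dependent safe interval that stands in for the result of offline reachability analysis.

```python
# Clamp learned actions into a precomputed safe envelope before applying.
def safe_action_bounds(state):
    # Stand-in for an envelope obtained from offline reachability analysis.
    lo = -5.0                                      # full braking always allowed
    hi = 2.0 if state["distance"] > 15.0 else 0.0  # no throttle when close
    return lo, hi

def apply_learned_action(state, proposed_accel):
    lo, hi = safe_action_bounds(state)
    return min(max(proposed_accel, lo), hi)

print(apply_learned_action({"distance": 8.0}, proposed_accel=1.5))  # 0.0
```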
1606.08514#38 | Towards Verified Artificial Intelligence | Another promising direction is to enforce properties on ML models through the use of semantic loss functions (e.g. [87, 25]), though this problem is largely unsolved. Finally, the use of theorem proving for ensuring correctness of algorithms used for training ML models (e.g. [72]) is also an important step towards improving the assurance in ML-based systems. Run-Time Assurance: Due to the undecidability of verifi | 1606.08514#37 | 1606.08514#39 | 1606.08514 | [
"1606.06565"
] |
1606.08514#39 | Towards Verified Artificial Intelligence | cation in most instances and the challenge of environment modeling, we believe it will be difficult, if not impossible, to synthesize correct-by-construction AI-based systems or to formally verify correct operation without making restrictive assumptions. Therefore, design-time verification must be combined with run-time assurance, i.e., run-time verification and mitigation techniques. For example, the Simplex technique [78] provides one approach to combining a complex, but error-prone module with a safe, formally-verified backup module. Recent techniques for combining design-time and run-time assurance methods (e.g., [71, 19, 20]) have shown how unverified components, including those based on AI and ML, can be wrapped within a runtime assurance framework to provide guarantees of safe operation. However, the problems of extracting environment assumptions and synthesizing them into runtime monitors (e.g., as described for introspective environment modeling [76]) and devising runtime mitigation strategies remain largely unsolved.

Challenge: Environment (incl. human) modeling – Principles: Active data-driven, introspective, probabilistic modeling
Challenge: Formal specification – Principles: Start at system level and derive component specifications; hybrid Boolean-quantitative specification; specification mining
Challenge: Modeling learning systems – Principles: Abstractions, explanations, semantic feature spaces
Challenge: Scalable design and verification engines – Principles: Compositional reasoning, controlled randomization, quantitative semantic analysis
Challenge: Correct-by-construction design – Principles: Formal inductive synthesis, safe learning by design, run-time assurance
Table 1: Summary of the five challenges for Verified AI presented in this paper, and the corresponding principles proposed to address them.

# 5 Conclusion Taking a formal methods perspective, we have analyzed the challenge of developing and applying formal methods to systems that are substantially based on artificial intelligence or machine learning. As summarized in Table 1, we have identified fi | 1606.08514#38 | 1606.08514#40 | 1606.08514 | [
"1606.06565"
] |
1606.08514#40 | Towards Verified Artificial Intelligence | ve main challenges for applying formal methods to AI-based systems. For each of these five challenges, we have identified corresponding principles for design and verification that hold promise for addressing that challenge. Since the original version of this paper was published in 2016, several researchers including the authors have been working on addressing these challenges; a few sample advances are described in this paper. In particular, we have developed open-source tools, VerifAI [2] and Scenic [1], that implement techniques based on the principles described in this paper, and which have been applied to industrial-scale systems in the autonomous driving [33] and aerospace [30] domains. These results are but a start and much more remains to be done. The topic of Verified AI promises to continue to be a fruitful area for research in the years to come. # Acknowledgments The authors' work has been supported in part by NSF grants CCF-1139138, CCF-1116993, CNS-1545126 (VeHICaL), CNS-1646208, and CCF-1837132 (FMitF), by an NDSEG Fellowship, by the TerraSwarm Research Center, one of six centers supported by the STARnet phase of the Focus Center Research Program (FCRP), a Semiconductor Research Corporation program sponsored by MARCO and DARPA, by the DARPA BRASS and Assured Autonomy programs, by Toyota under the iCyPhy center, and by Berkeley Deep Drive. We gratefully acknowledge the many colleagues with whom our conversations and collaborations have helped shape this article. # References [1] Scenic Environment Modeling and Scenario Description Language. http://github.com/BerkeleyLearnVerify/Scenic. [2] VerifAI: A toolkit for design and verification of AI-based systems. http://github.com/BerkeleyLearnVerify/VerifAI. [3] Anayo K Akametalu, Jaime F Fisac, Jeremy H Gillula, Shahab Kaynama, Melanie N Zeilinger, and Claire J Tomlin. Reachability-based safe learning with Gaussian processes. In 53rd IEEE Conference on Decision and Control, pages 1424–1431, 2014. | 1606.08514#39 | 1606.08514#41 | 1606.08514 | [
"1606.06565"
] |
1606.08514#41 | Towards Verified Artificial Intelligence | [4] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016. [5] Thanassis Avgerinos, Sang Kil Cha, Alexandre Rebert, Edward J. Schwartz, Maverick Woo, and David Brumley. Automatic exploit generation. Commun. ACM, 57(2):74–84, 2014. [6] Clark Barrett, Roberto Sebastiani, Sanjit A. Seshia, and Cesare Tinelli. Satisfi | 1606.08514#40 | 1606.08514#42 | 1606.08514 | [
"1606.06565"
] |
1606.08514#42 | Towards Verified Artificial Intelligence | ability modulo theories. In Armin Biere, Hans van Maaren, and Toby Walsh, editors, Handbook of Satisfiability, volume 4, chapter 8. IOS Press, 2009. [7] I. Beer, S. Ben-David, C. Eisner, and Y. Rodeh. Efficient detection of vacuity in ACTL formulas. Formal Methods in System Design, 18(2):141–162, 2001. [8] In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 194–199. Springer, 2015. [9] Randal E. Bryant. Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers, C-35(8):677–691, August 1986. | 1606.08514#41 | 1606.08514#43 | 1606.08514 | [
"1606.06565"
] |
1606.08514#43 | Towards Verified Artificial Intelligence | [10] Andrea Censi, Konstantin Slutsky, Tichakorn Wongpiromsarn, Dmitry Yershov, Scott Pendleton, James Fu, and Emilio Frazzoli. Liability, ethics, and culture-aware behavior specification using rulebooks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8536–8542. IEEE, 2019. [11] Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, and Moshe Y. Vardi. | 1606.08514#42 | 1606.08514#44 | 1606.08514 | [
"1606.06565"
] |
1606.08514#44 | Towards Verified Artificial Intelligence | Distribution-aware sampling and weighted model counting for SAT. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI), pages 1722–1730, July 2014. [12] Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, and Moshe Y. Vardi. On parallel scalable uniform SAT witness generation. In Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pages 304–319, April 2015. [13] Krishnendu Chatterjee, Laurent Doyen, and Thomas A Henzinger. Quantitative languages. ACM Transactions on Computational Logic (TOCL), 11(4):23, 2010. [14] Edmund M. Clarke and E. Allen Emerson. | 1606.08514#43 | 1606.08514#45 | 1606.08514 | [
"1606.06565"
] |
1606.08514#45 | Towards Verified Artificial Intelligence | Design and synthesis of synchronization skeletons using branching-time temporal logic. In Logic of Programs, pages 52–71, 1981. [15] Edmund M. Clarke, Orna Grumberg, and Doron A. Peled. Model Checking. MIT Press, 2000. [16] Edmund M Clarke and Jeannette M Wing. Formal methods: State of the art and future directions. ACM Computing Surveys (CSUR), 28(4):626–643, 1996. [17] Committee on Information Technology, Automation, and the U.S. Workforce. Information technology and the U.S. workforce: Where are we and where do we go from here? http://www.nap.edu/24649. | 1606.08514#44 | 1606.08514#46 | 1606.08514 | [
"1606.06565"
] |
1606.08514#46 | Towards Verified Artificial Intelligence | [18] Christian Dehnert, Sebastian Junges, Joost-Pieter Katoen, and Matthias Volk. A storm is coming: A modern probabilistic model checker. In International Conference on Computer Aided Verification (CAV), pages 592–600. Springer, 2017. [19] Ankush Desai, Tommaso Dreossi, and Sanjit A. Seshia. Combining model checking and runtime verification for safe robotics. In Runtime Verification - 17th International Conference, RV 2017, Seattle, WA, USA, September 13-16, 2017, Proceedings, pages 172–189, 2017. [20] Ankush Desai, Shromona Ghosh, Sanjit A. Seshia, Natarajan Shankar, and Ashish Tiwari. | 1606.08514#45 | 1606.08514#47 | 1606.08514 | [
"1606.06565"
] |
1606.08514#47 | Towards Verified Artificial Intelligence | A runtime assurance framework for programming safe robotics systems. In IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), June 2019. [21] Thomas G Dietterich and Eric J Horvitz. Rise of concerns about AI: reflections and directions. Communications of the ACM, 58(10):38–40, 2015. [22] Tommaso Dreossi, Alexandre Donzé, and Sanjit A. Seshia. | 1606.08514#46 | 1606.08514#48 | 1606.08514 | [
"1606.06565"
] |
1606.08514#48 | Towards Verified Artificial Intelligence | Compositional falsification of cyber-physical systems with machine learning components. In Proceedings of the NASA Formal Methods Conference (NFM), May 2017. [23] Tommaso Dreossi, Daniel J. Fremont, Shromona Ghosh, Edward Kim, Hadi Ravanbakhsh, Marcell Vazquez-Chanlatte, and Sanjit A. Seshia. VerifAI: A toolkit for the formal design and analysis of artificial intelligence-based systems. In 31st International Conference on Computer Aided Verification (CAV), July 2019. [24] Tommaso Dreossi, Shromona Ghosh, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. | 1606.08514#47 | 1606.08514#49 | 1606.08514 | [
"1606.06565"
] |
1606.08514#49 | Towards Verified Artificial Intelligence | A formalization of robustness for deep neural networks. In Proceedings of the AAAI Spring Symposium Workshop on Verification of Neural Networks (VNN), March 2019. [25] Tommaso Dreossi, Somesh Jha, and Sanjit A. Seshia. Semantic adversarial deep learning. In 30th International Conference on Computer Aided Verification (CAV), 2018. [26] Michael Ernst. Dynamically Discovering Likely Program Invariants. PhD thesis, University of Washington, Seattle, 2000. [27] Georgios E. Fainekos. | 1606.08514#48 | 1606.08514#50 | 1606.08514 | [
"1606.06565"
] |
1606.08514#50 | Towards Verified Artificial Intelligence | Automotive control design bug-finding with the S-TaLiRo tool. In American Control Conference (ACC), page 4096, 2015. [28] Jaime F Fisac, Anayo K Akametalu, Melanie N Zeilinger, Shahab Kaynama, Jeremy Gillula, and Claire J Tomlin. A general safety framework for learning-based control in uncertain robotic systems. IEEE Transactions on Automatic Control, 64(7):2737–2752, 2018. [29] Harry Foster. | 1606.08514#49 | 1606.08514#51 | 1606.08514 | [
"1606.06565"
] |
1606.08514#51 | Towards Verified Artificial Intelligence | Applied Assertion-Based Verification: An Industry Perspective. Now Publishers Inc., 2009. [30] Daniel J. Fremont, Johnathan Chiu, Dragos D. Margineantu, Denis Osipychev, and Sanjit A. Seshia. Formal analysis and redesign of a neural network-based aircraft taxiing system with VerifAI. In 32nd International Conference on Computer-Aided Verification (CAV), pages 122–134, 2020. [31] Daniel J. Fremont, Alexandre Donzé, Sanjit A. Seshia, and David Wessel. | 1606.08514#50 | 1606.08514#52 | 1606.08514 | [
"1606.06565"
] |
1606.08514#52 | Towards Verified Artificial Intelligence | Control improvisation. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015), pages 463–474, 2015. [32] Daniel J. Fremont, Tommaso Dreossi, Shromona Ghosh, Xiangyu Yue, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. Scenic: A language for scenario specification and scene generation. In Proceedings of the 40th annual ACM SIGPLAN conference on Programming Language Design and Implementation (PLDI), June 2019. | 1606.08514#51 | 1606.08514#53 | 1606.08514 | [
"1606.06565"
] |
1606.08514#53 | Towards Verified Artificial Intelligence | [33] Daniel J. Fremont, Edward Kim, Yash Vardhan Pant, Sanjit A. Seshia, Atul Acharya, Xantha Bruso, Paul Wells, Steve Lemke, Qiang Lu, and Shalin Mehta. Formal scenario-based testing of autonomous vehicles: From simulation to the real world. In IEEE Intelligent Transportation Systems Conference (ITSC), 2020. [34] Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437– | 1606.08514#52 | 1606.08514#54 | 1606.08514 | [
"1606.06565"
] |
1606.08514#54 | Towards Verified Artificial Intelligence | 1480, 2015. [35] Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. AI2: Safety and robustness certification of neural networks with abstract interpretation. In IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE, 2018. [36] Ian Goodfellow, Patrick McDaniel, and Nicolas Papernot. Making machine learning robust against adversarial inputs. Communications of the ACM, 61(7):56–66, 2018. [37] M. J. C. Gordon and T. F. Melham. Introduction to HOL: A Theorem Proving Environment for Higher-Order Logic. | 1606.08514#53 | 1606.08514#55 | 1606.08514 | [
"1606.06565"
] |
1606.08514#55 | Towards Verified Artificial Intelligence | Cambridge University Press, 1993. [38] Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. Safety verification of deep neural networks. In International Conference on Computer Aided Verification, pages 3–29. Springer, 2017. [39] S. Jha and S. A. Seshia. A Theory of Formal Synthesis via Inductive Learning. ArXiv e-prints, May 2015. [40] Susmit Jha, Tuhin Sahai, Vasumathi Raman, Alessandro Pinto, and Michael Francis. Explaining AI decisions using efficient methods for learning sparse Boolean formulae. | 1606.08514#54 | 1606.08514#56 | 1606.08514 | [
"1606.06565"
] |
1606.08514#56 | Towards Verified Artificial Intelligence | J. Autom. Reasoning, 63(4):1055–1075, 2019. [41] Susmit Jha and Sanjit A. Seshia. A Theory of Formal Synthesis via Inductive Learning. Acta Informatica, 2017. [42] Xiaoqing Jin, Alexandre Donzé, Jyotirmoy Deshmukh, and Sanjit A. Seshia. Mining requirements from closed-loop control models. IEEE Transactions on Computer-Aided Design of Circuits and Systems, 34(11):1704– | 1606.08514#55 | 1606.08514#57 | 1606.08514 | [
"1606.06565"
] |
1606.08514#57 | Towards Verified Artificial Intelligence | 1717, 2015. [43] Matt Kaufmann, Panagiotis Manolios, and J. Strother Moore. Computer-Aided Reasoning: An Approach. Kluwer Academic Publishers, 2000. [44] Nathan Kitchen and Andreas Kuehlmann. Stimulus generation for constrained random simulation. In Proceedings of the 2007 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 258–265. IEEE Press, 2007. [45] Marta Kwiatkowska, Gethin Norman, and David Parker. PRISM 4.0: Verification of probabilistic real-time systems. In International Conference on Computer Aided Verification (CAV), pages 585–591. Springer, 2011. [46] Tzu-Mao Li, Miika Aittala, Frédo Durand, and Jaakko Lehtinen. | 1606.08514#56 | 1606.08514#58 | 1606.08514 | [
"1606.06565"
] |
1606.08514#58 | Towards Verified Artificial Intelligence | Differentiable Monte Carlo ray tracing through edge sampling. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 37(6):222:1–222:11, 2018. [47] Wenchao Li. Specification Mining: New Formalisms, Algorithms and Applications. PhD thesis, EECS Department, University of California, Berkeley, Mar 2014. [48] Wenchao Li, Dorsa Sadigh, S. Shankar Sastry, and Sanjit A. Seshia. | 1606.08514#57 | 1606.08514#59 | 1606.08514 | [
"1606.06565"
] |
1606.08514#59 | Towards Verified Artificial Intelligence | Synthesis for human-in-the-loop control systems. In Proceedings of the 20th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pages 470–484, April 2014. [49] Oded Maler and Dejan Nickovic. Monitoring temporal properties of continuous signals. In FORMATS/FTRTFT, pages 152–166, 2004. [50] Sharad Malik and Lintao Zhang. Boolean satisfiability: From theoretical hardness to practical success. Communications of the ACM (CACM), 52(8):76–82, 2009. [51] Kuldeep S. Meel, Moshe Y. Vardi, Supratik Chakraborty, Daniel J. Fremont, Sanjit A. Seshia, Dror Fried, Alexander Ivrii, and Sharad Malik. | 1606.08514#58 | 1606.08514#60 | 1606.08514 | [
"1606.06565"
] |
1606.08514#60 | Towards Verified Artificial Intelligence | Constrained sampling and counting: Universal hashing meets SAT solving. In Beyond NP, Papers from the 2016 AAAI Workshop, Phoenix, Arizona, USA, February 12, 2016. [52] Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L Ong, and Andrey Kolobov. Blog: Probabilistic models with unknown objects. Statistical Relational Learning, page 373, 2007. | 1606.08514#59 | 1606.08514#61 | 1606.08514 | [
"1606.06565"
] |
1606.08514#61 | Towards Verified Artificial Intelligence | [53] Tom M. Mitchell. Machine Learning. McGraw-Hill, 1997. [54] Tom M Mitchell, Richard M Keller, and Smadar T Kedar-Cabelli. Explanation-based generalization: A unifying view. Machine Learning, 1(1):47–80, 1986. [55] Andrew Y. Ng and Stuart J. Russell. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML), pages 663–670, 2000. [56] A. Nilim and L. El Ghaoui. | 1606.08514#60 | 1606.08514#62 | 1606.08514 | [
"1606.06565"
] |