# Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results (arXiv:1703.01780)

Table 6: The convolutional network architecture we used in the experiments.

| Layer | Parameters |
|---|---|
| Input | |
| Translation | Randomly |
| Horizontal flip^a | p = 0.5 |
| Gaussian noise | σ = 0.15 |
| Convolutional | 128 filters, 3 × 3, same padding |
| Convolutional | 128 filters, 3 × 3, same padding |
| Convolutional | 128 filters, 3 × 3, same padding |
| Pooling | Maxpool 2 × 2 |
| Dropout | p = 0.5 |
| Convolutional | 256 filters, 3 × 3, same padding |
| Convolutional | 256 filters, 3 × 3, same padding |
| Convolutional | 256 filters, 3 × 3, same padding |
| Pooling | Maxpool 2 × 2 |
| Dropout | p = 0.5 |
| Convolutional | 512 filters, 3 × 3, valid padding |
| Convolutional | 256 filters, 1 × 1, same padding |
| Convolutional | 128 filters, 1 × 1, same padding |
| Pooling | Average pool (6 × 6 → 1 × 1 pixels) |
| Softmax | Fully connected 128 → 10 |

^a Not applied on SVHN experiments.
The key difference between our baseline Π model and our Mean Teacher model is whether the teacher weights are identical to the student weights or an EMA of the student weights. In addition, the Π models (both the original and ours) backpropagate gradients to both sides of the model, whereas Mean Teacher applies them only to the student side. Table 6 describes the architecture of the convolutional network. We applied mean-only batch normalization and weight normalization [24] on the convolutional and softmax layers. We used Leaky ReLU [15] with α = 0.1 as the nonlinearity on each of the convolutional layers. We used cross-entropy between the student softmax output and the one-hot label as the classification cost, and the mean squared error between the student and teacher softmax outputs as the consistency cost. The total cost was the weighted sum of these costs, where the weight of the classification cost was the expected number of labeled examples per minibatch, subject to the ramp-ups described below. We trained the network with minibatches of size 100. We used the Adam optimizer [12] with learning rate 0.003 and parameters β1 = 0.9, β2 = 0.999, and ε = 10⁻⁸.
In our baseline Π model we applied gradients through both the teacher and student sides of the network. In the Mean Teacher model, the teacher model parameters were updated after each training step using an EMA with α = 0.999. These hyperparameters were subject to the ramp-ups and ramp-downs described below. We applied a ramp-up period of 40000 training steps at the beginning of training: the consistency cost coefficient and the learning rate were ramped up from 0 to their maximum values using the sigmoid-shaped function e^(−5(1−x)²), where x ∈ [0, 1].
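To make these two updates concrete, here is a minimal Python sketch of the sigmoid ramp-up and the EMA teacher update. This is our own illustration written for this excerpt, not the authors' code; the function names and the PyTorch-style parameter handling are assumptions.

```python
import math
import torch

def sigmoid_rampup(step: float, rampup_length: float = 40000) -> float:
    """Ramp-up coefficient e^(-5(1-x)^2), with x advancing from 0 to 1."""
    x = min(step, rampup_length) / rampup_length
    return math.exp(-5.0 * (1.0 - x) ** 2)

@torch.no_grad()
def update_teacher(teacher: torch.nn.Module, student: torch.nn.Module,
                   alpha: float = 0.999) -> None:
    """EMA update applied to the teacher parameters after each training step."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)
```

The same ramp-up value scales both the learning rate and the consistency cost coefficient during the first 40000 steps.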
We used different training settings in different experiments. In the CIFAR-10 experiment, we matched the settings of Laine & Aila [13] as closely as possible. In the SVHN experiments, we diverged from Laine & Aila [13] to accommodate the sparsity of labeled data. Table 7 summarizes the differences between our experiments.

# B.1.1 ConvNet on CIFAR-10

We normalized the input images with ZCA based on training set statistics.
For sampling minibatches, the labeled and unlabeled examples were treated equally, so the number of labeled examples varied from minibatch to minibatch. We applied a ramp-down for the last 25000 training steps: the learning rate was ramped down from its maximum value to 0, and Adam β1 was ramped down from its maximum value to 0.5. The ramp-downs were performed using the sigmoid-shaped function 1 − e^(−12.5x²), where x ∈ [0, 1]. These ramp-downs did not improve the results, but were used to stay as close as possible to the settings of Laine & Aila [13].
# B.1.2 ConvNet on SVHN

We normalized the input images to have zero mean and unit variance. When doing semi-supervised training, we used 1 labeled example and 99 unlabeled examples in each minibatch. This was important for speeding up training when using extra unlabeled data. After all labeled examples had been used, they were shuffled and reused; similarly, after all unlabeled examples had been used, they were shuffled and reused.

We applied different values for Adam β2 and the EMA decay rate during the ramp-up period and the rest of the training. Both values were 0.99 during the first 40000 steps and 0.999 afterwards. This helped the 250-label case converge reliably. We trained the network for 180000 steps when not using extra unlabeled examples, for 400000 steps when using 100k extra unlabeled examples, and for 600000 steps when using 500k extra unlabeled examples.
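The 1-labeled/99-unlabeled minibatch composition with independent reshuffling of the two pools can be sketched as follows; this is an illustrative reconstruction (the function names and structure are ours, not the paper's code):

```python
import random
from itertools import islice

def cycle_shuffled(examples):
    """Yield examples forever, reshuffling after each full pass through the pool."""
    pool = list(examples)
    while True:
        random.shuffle(pool)
        yield from pool

def minibatches(labeled, unlabeled, n_labeled=1, batch_size=100):
    """Compose minibatches of n_labeled labeled + (batch_size - n_labeled) unlabeled examples."""
    lab, unlab = cycle_shuffled(labeled), cycle_shuffled(unlabeled)
    while True:
        yield list(islice(lab, n_labeled)) + list(islice(unlab, batch_size - n_labeled))
```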
# B.1.3 The baseline ConvNet models

For training the supervised-only and Π model baselines we used the same hyperparameters as for training the Mean Teacher, except that we stopped training earlier to prevent over-fitting. For supervised-only runs we did not include any unlabeled examples and did not apply the consistency cost.

We trained the supervised-only model on CIFAR-10 for 7500 steps when using 1000 images, for 15000 steps when using 2000 images, for 30000 steps when using 4000 images, and for 150000 steps when using all images. We trained it on SVHN for 40000 steps when using 250, 500 or 1000 labels, and for 180000 steps when using all labels.

We trained the Π model on CIFAR-10 for 60000 steps when using 1000 labels, for 100000 steps when using 2000 labels, and for 180000 steps when using 4000 labels or all labels. We trained it on SVHN for 100000 steps when using 250 labels, and for 180000 steps when using 500, 1000, or all labels.

# B.2 Residual network models

We implemented our residual network experiments in PyTorch (https://github.com/pytorch/pytorch). We used different architectures for our CIFAR-10 and ImageNet experiments.

# B.2.1 ResNet on CIFAR-10

For CIFAR-10, we replicated the 26-2x96d Shake-Shake regularized architecture described in [5], consisting of 4+4+4 residual blocks. We trained the network on 4 GPUs using minibatches of 512 images, 124 of which were labeled. We sampled the images in the same way as described in the SVHN experiments above. We augmented the input images with 4x4 random translations (reflecting the pixels at borders when necessary) and random horizontal flips. (Note that, following [5], we used a larger translation size than in our earlier experiments.) We normalized the images to have channel-wise zero mean and unit variance over the training data.

We trained the network using stochastic gradient descent with initial learning rate 0.2 and Nesterov momentum 0.9. We trained for 180 epochs (when training with 1000 labels) or 300 epochs (when training with 4000 labels), decaying the learning rate with cosine annealing [14] so that it would have reached zero after 210 epochs (when training with 1000 labels) or 350 epochs (when training with 4000 labels).
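Written out, the schedule above is a truncated cosine. The following is a small sketch under the stated settings (base learning rate 0.2, annealing horizon 210 or 350 epochs); the helper name is ours:

```python
import math

def cosine_lr(epoch: float, base_lr: float = 0.2, anneal_epochs: int = 210) -> float:
    """Cosine-annealed learning rate [14]; training stops (at epoch 180 or 300)
    before the schedule reaches zero."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / anneal_epochs))
```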
Table 7: Differences in training settings between the ConvNet experiments.

| Aspect | semi-supervised SVHN | all-label SVHN | semi-supervised CIFAR-10 |
|---|---|---|---|
| Image pre-processing | zero mean, unit variance | zero mean, unit variance | ZCA |
| Image augmentation | translation | translation | translation + horizontal flip |
| Labeled examples per minibatch | 1 | 100 | varying |
| Training steps | 180000–600000 | 180000 | 150000 |
| Adam β2 during and after ramp-up | 0.99, 0.999 | 0.99, 0.999 | 0.999, 0.999 |
| EMA decay rate during and after ramp-up | 0.99, 0.999 | 0.99, 0.999 | 0.999, 0.999 |
| Ramp-downs | No | No | Yes |
We define an epoch as one pass through all the unlabeled examples; each labeled example was included many times in one such epoch.

We used a total cost function consisting of the classification cost and three other costs:

- the dual output trick described in subsection 3.4 and Figure 4(e), with an MSE cost between the logits with coefficient 0.01 (this simplified other hyperparameter choices and improved the results);
- an MSE consistency cost with coefficient ramping up from 0 to 100.0 during the first 5 epochs, using the same sigmoid ramp-up shape as in the experiments above;
- an L2 weight decay with coefficient 2e-4.

We used an EMA decay value of 0.97 (when training with 1000 labels) or 0.99 (when training with 4000 labels).

# B.2.2 ResNet on ImageNet

On our ImageNet evaluation runs, we used a 152-layer ResNeXt architecture [33] consisting of 3+8+36+3 residual blocks, with 32 groups of 4 channels on the first block.
We trained the network on 10 GPUs using minibatches of 400 images, 200 of which were labeled. We sampled the images in the same way as described in the SVHN experiments above. Following [10], we randomly augmented the images using a 10-degree rotation, a crop with aspect ratio between 3/4 and 4/3 resized to 224x224 pixels, a random horizontal flip, and a color jitter. We then normalized the images to have channel-wise zero mean and unit variance over the training data.

We trained the network using stochastic gradient descent with maximum learning rate 0.25 and Nesterov momentum 0.9. We ramped up the learning rate linearly from 0.1 to 0.25 during the first two epochs. We trained for 60 epochs, decaying the learning rate with cosine annealing so that it would have reached zero after 75 epochs.

We used a total cost function consisting of the classification cost and three other costs:

- the dual output trick described in subsection 3.4 and Figure 4(e), with an MSE cost between the logits with coefficient 0.01;
- a KL-divergence consistency cost with coefficient ramping up from 0 to 10.0 during the first 5 epochs, using the same sigmoid ramp-up shape as in the experiments above;
- an L2 weight decay with coefficient 5e-5.

We used an EMA decay value of 0.9997.
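Both ResNet experiments therefore share the same total cost shape. The sketch below is a hedged reconstruction with the CIFAR-10 coefficients; the dual outputs (`class_logits`, `cons_logits`), the helper names, and the omission of labeled-example masking are our simplifications, and `sigmoid_rampup` is reused from the earlier sketch (here over epochs rather than steps).

```python
import torch.nn.functional as F

def total_cost(class_logits, cons_logits, teacher_logits, labels, epoch,
               logit_mse_coeff=0.01, max_cons_coeff=100.0, rampup_epochs=5):
    """Classification cost + dual-output logit MSE + ramped-up consistency cost.
    The L2 weight decay (coefficient 2e-4) is applied through the optimizer."""
    classification = F.cross_entropy(class_logits, labels)  # labeled examples only
    dual_output = logit_mse_coeff * F.mse_loss(class_logits, cons_logits)
    cons_coeff = max_cons_coeff * sigmoid_rampup(epoch, rampup_epochs)
    consistency = cons_coeff * F.mse_loss(F.softmax(cons_logits, dim=1),
                                          F.softmax(teacher_logits, dim=1))
    return classification + dual_output + consistency
```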
Figure 5: Copy of Figure 4(f) in the main text. Validation error on 250-label SVHN over four runs and their mean, when varying the consistency cost shape hyperparameter τ between mean squared error (τ = 0) and KL-divergence (τ = 1).

# B.3 Use of training, validation and test data

In the development phase of our work with the CIFAR-10 and SVHN datasets, we separated 10% of the training data into a validation set. We randomly removed most of the labels from the remaining training data, retaining an equal number of labels from each class. We used a different set of labels for each of the evaluation runs. We retained labels in the validation set to enable exploration of the results. In the final evaluation phase we used the entire training set, including the validation set but with labels removed.

In a real-world use case we would not possess a large fully-labeled validation set. However, this setup is useful in a research setting, since it enables a more thorough analysis of the results. To the best of our knowledge, this is the common practice when carrying out research on semi-supervised learning. By retaining the hyperparameters from previous work where possible, we decreased the chance of over-fitting our results to the validation labels.

In the ImageNet experiments we randomly removed most of the labels from the training set, retaining an equal number of labels from each class. For validation we used the given validation set without modifications.
We used a different set of training labels for each of the evaluation runs and evaluated the results against the validation set.

# C Varying between mean squared error and KL-divergence

As mentioned in subsection 3.4, we ran an experiment varying the consistency cost function between MSE and KL-divergence (reproduced in Figure 5). The exact consistency function we used was

$$C_\tau(p, q) = Z_\tau D_{\mathrm{KL}}(p_\tau \,\|\, q_\tau), \qquad Z_\tau = \frac{2}{\tau^2 N}, \qquad p_\tau = \tau p + \frac{1-\tau}{N}, \qquad q_\tau = \tau q + \frac{1-\tau}{N},$$

where τ ∈ (0, 1] and N is the number of classes.
Taking the Taylor expansion, we get

$$D_{\mathrm{KL}}(p_\tau \,\|\, q_\tau) = \sum_i \frac{\tau^2 N}{2} (p_i - q_i)^2 + O(\tau^3 N^2),$$

where the zeroth- and first-order terms vanish. Consequently,

$$C_\tau(p, q) \to \sum_i (p_i - q_i)^2 \ \text{ when } \tau \to 0, \qquad C_\tau(p, q) = \frac{2}{N} D_{\mathrm{KL}}(p \,\|\, q) \ \text{ when } \tau = 1.$$

The results in Figure 5 show that MSE performs better than KL-divergence or C_τ with any τ. We also tried other consistency cost weights with KL-divergence and did not reach the accuracy of MSE.
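For reference, C_τ is only a few lines of code. This is our own sketch of the formula above (not the authors' implementation), operating on probability vectors `p` and `q`:

```python
import torch
import torch.nn.functional as F

def c_tau(p: torch.Tensor, q: torch.Tensor, tau: float) -> torch.Tensor:
    """C_tau(p, q) = (2 / (tau^2 N)) * KL(p_tau || q_tau), for tau in (0, 1]."""
    n = p.shape[-1]
    p_tau = tau * p + (1.0 - tau) / n
    q_tau = tau * q + (1.0 - tau) / n
    # F.kl_div(input, target) with log-space input computes KL(target || exp(input))
    kl = F.kl_div(q_tau.log(), p_tau, reduction="sum")
    return 2.0 / (tau ** 2 * n) * kl
```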
The exact reason why MSE performs better than KL-divergence remains unclear, but the form of C_τ may help explain it. Modern neural network architectures tend to produce accurate but overly confident predictions [7]. We can assume that the true labels are accurate, but we should discount the confidence of the teacher predictions. We can do that by having τ = 1 for the classification cost and τ < 1 for the consistency cost. Then p_τ and q_τ discount the confidence of the approximations, while Z_τ keeps the gradients large enough to provide a useful training signal. However, we did not perform experiments to validate this explanation.
# Large-Scale Evolution of Image Classifiers (arXiv:1703.01041v2 [cs.NE], 11 Jun 2017)

Esteban Real¹, Sherry Moore¹, Andrew Selle¹, Saurabh Saxena¹, Yutaka Leon Suematsu², Jie Tan¹, Quoc V. Le¹, Alexey Kurakin¹
# Abstract

Neural networks have proven effective at solving difficult problems, but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6% (95.6% for ensemble) and 77.0%, respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.
# 1. Introduction

Neural networks can successfully perform difficult tasks where large amounts of training data are available (He et al., 2015; Weyand et al., 2016; Silver et al., 2016; Wu et al., 2016). Discovering neural network architectures, however, remains a laborious task. Even within the specific problem of image classification, the state of the art was attained through many years of focused investigation by hundreds of researchers (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); Szegedy et al. (2015); He et al. (2016); Huang et al. (2016a), among many others). It is therefore not surprising that in recent years, techniques to automatically discover these architectures have been gaining popularity (Bergstra & Bengio, 2012; Snoek et al., 2012; Han et al., 2015; Baker et al., 2016; Zoph & Le, 2016). One of the earliest such "neuro-discovery" methods was neuro-evolution (Miller et al., 1989; Stanley & Miikkulainen, 2002; Stanley, 2007; Bayer et al., 2009; Stanley et al., 2009; Breuel & Shafait, 2010; Pugh & Stanley, 2013; Kim & Rigazio, 2015; Zaremba, 2015; Fernando et al., 2016; Morse & Stanley, 2016). Despite the promising results, the deep learning community generally perceives evolutionary algorithms to be incapable of matching the accuracies of hand-designed models (Verbancsics & Harguess, 2013; Baker et al., 2016; Zoph & Le, 2016). In this paper, we show that it is possible to evolve such competitive models today, given enough computational power.

We used slightly-modified known evolutionary algorithms and scaled up the computation to unprecedented levels, as far as we know. This, together with a set of novel and intuitive mutation operators, allowed us to reach competitive accuracies on the CIFAR-10 dataset.
This dataset was chosen because it requires large networks to reach high accuracies, thus presenting a computational challenge. We also took a small first step toward generalization and evolved networks on the CIFAR-100 dataset. In transitioning from CIFAR-10 to CIFAR-100, we did not modify any aspect or parameter of our algorithm. Our typical neuro-evolution outcome on CIFAR-10 had a test accuracy with µ = 94.1%, σ = 0.4% @ 9×10¹⁹ FLOPs, and our top model (by validation accuracy) had a test accuracy of 94.6% @ 4×10²⁰ FLOPs. Ensembling the validation-top 2 models from each population reaches a test accuracy of 95.6%, at no additional training cost. On CIFAR-100, our single experiment resulted in a test accuracy of 77.0% @ 2×10²⁰ FLOPs. As far as we know, these are the most accurate results obtained on these datasets by automated discovery methods that start from trivial initial conditions.

Throughout this study, we placed special emphasis on the simplicity of the algorithm. In particular, it is a "one-shot" technique, producing a fully trained neural network requiring no post-processing. It also has few impactful meta-parameters (i.e., parameters not optimized by the algorithm).

¹Google Brain, Mountain View, California, USA. ²Google Research, Mountain View, California, USA. Correspondence to: Esteban Real <ereal@google.com>. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
Table 1: Comparison with single-model hand-designed architectures. The "C10+" and "C100+" columns indicate the test accuracy on the data-augmented CIFAR-10 and CIFAR-100 datasets, respectively. The "Reachable?" column denotes whether the given hand-designed model lies within our search space. An entry of "–" indicates that no value was reported. The † indicates a result reported by Huang et al. (2016b) instead of the original author. Much of this table was based on that presented in Huang et al. (2016a).

| Study | Params. | C10+ | C100+ | Reachable? |
|---|---|---|---|---|
| Maxout (Goodfellow et al., 2013) | – | 90.7% | 61.4% | No |
| Network in Network (Lin et al., 2013) | – | 91.2% | – | No |
| All-CNN (Springenberg et al., 2014) | 1.3 M | 92.8% | 66.3% | Yes |
| Deeply Supervised (Lee et al., 2015) | – | 92.0% | 65.4% | No |
| Highway (Srivastava et al., 2015) | 2.3 M | 92.3% | 67.6% | No |
| ResNet (He et al., 2016) | 1.7 M | 93.4% | 72.8%† | Yes |
| Evolution (ours) | 5.4 M (C10+) / 40.4 M (C100+) | 94.6% | 77.0% | N/A |
| Wide ResNet 28-10 (Zagoruyko & Komodakis, 2016) | 36.5 M | 96.0% | 80.0% | Yes |
| Wide ResNet 40-10+d/o (Zagoruyko & Komodakis, 2016) | 50.7 M | 96.2% | 81.7% | No |
| DenseNet (Huang et al., 2016a) | 25.6 M | 96.7% | 82.8% | No |
Starting out with poor-performing models with no convolutions, the algorithm must evolve complex convolutional neural networks while navigating a fairly unrestricted search space: no fixed depth, arbitrary skip connections, and numerical parameters that have few restrictions on the values they can take. We also paid close attention to result reporting. Namely, we present the variability in our results in addition to the top value, we account for researcher degrees of freedom (Simmons et al., 2011), we study the dependence on the meta-parameters, and we disclose the amount of computation necessary to reach the main results. We are hopeful that our explicit discussion of computation cost could spark more study of efficient model search and training. Studying model performance normalized by computational investment allows consideration of economic concepts like opportunity cost.

# 2. Related Work
Neuro-evolution dates back many years (Miller et al., 1989), originally being used only to evolve the weights of a fixed architecture. Stanley & Miikkulainen (2002) showed that it was advantageous to simultaneously evolve the architecture using the NEAT algorithm. NEAT has three kinds of mutations: (i) modify a weight, (ii) add a connection between existing nodes, or (iii) insert a node while splitting an existing connection. It also has a mechanism for recombining two models into one and a strategy to promote diversity known as fitness sharing (Goldberg et al., 1987). Evolutionary algorithms represent the models using an encoding that is convenient for their purpose, analogous to nature's DNA. NEAT uses a direct encoding: every node and every connection is stored in the DNA. The alternative paradigm, indirect encoding, has been the subject of much neuro-evolution research (Gruau, 1993; Stanley et al., 2009; Pugh & Stanley, 2013; Kim & Rigazio, 2015; Fernando et al., 2016). For example, the CPPN (Stanley, 2007; Stanley et al., 2009) allows for the evolution of repeating features at different scales. Also, Kim & Rigazio (2015) use an indirect encoding to improve the convolution filters in an initially highly-optimized fixed architecture.

Research on weight evolution is still ongoing (Morse & Stanley, 2016), but the broader machine learning community defaults to back-propagation for optimizing neural network weights (Rumelhart et al., 1988). Back-propagation and evolution can be combined, as in Stanley et al. (2009), where only the structure is evolved. Their algorithm follows an alternation of architectural mutations and weight back-propagation. Similarly, Breuel & Shafait (2010) use this approach for hyper-parameter search. Fernando et al. (2016) also use back-propagation, allowing the trained weights to be inherited through the structural modifications.
The above studies create neural networks that are small in comparison to the typical modern architectures used for image classification (He et al., 2016; Huang et al., 2016a). Their focus is on the encoding or the efficiency of the evolutionary process, but not on the scale. When it comes to images, some neuro-evolution results reach the computational scale required to succeed on the MNIST dataset (LeCun et al., 1998). Yet, modern classifiers are often tested on realistic images, such as those in the CIFAR datasets (Krizhevsky & Hinton, 2009), which are much more challenging. These datasets require large models to achieve high accuracy.

Non-evolutionary neuro-discovery methods have been more successful at tackling realistic image data. Snoek et al. (2012) used Bayesian optimization to tune 9 hyper-parameters for a fixed-depth architecture, reaching a new state of the art at the time.

Table 2: Comparison with automatically discovered architectures. The "C10+" and "C100+" columns contain the test accuracy on the data-augmented CIFAR-10 and CIFAR-100 datasets, respectively. An entry of "–" indicates that the information was not reported or is not known to us. For Zoph & Le (2016), we quote the result with the most similar search space to ours, as well as their best result. Please refer to Table 1 for hand-designed results, including the state of the art. "Discrete params." means that the parameters can be picked from a handful of values only (e.g. strides ∈ {1, 2, 4}).

| Study | Starting point | Constraints | Post-processing | Params. | C10+ | C100+ |
|---|---|---|---|---|---|---|
| Bayesian (Snoek et al., 2012) | 3 layers | fixed architecture, no skips | none | – | 90.5% | – |
| Q-learning (Baker et al., 2016) | – | discrete params., max. num. layers, no skips | tune, retrain | 11.2 M | 93.1% | 72.9% |
| RL (Zoph & Le, 2016) | 20 layers, 50% skips | discrete params., exactly 20 layers | small grid search, retrain | 2.5 M | 94.0% | – |
| RL (Zoph & Le, 2016) | 39 layers, 2 pool layers at 13 and 26, 50% skips | discrete params., exactly 39 layers, 2 pool layers at 13 and 26 | add more filters, small grid search, retrain | 37.0 M | 96.4% | – |
| Evolution (ours) | single layer, zero convs. | power-of-2 strides | none | 5.4 M (ensemb. 40.4 M) | 94.6% (ensemb. 95.6%) | 77.0% |
Zoph & Le (2016) used reinforcement learning on a deeper fixed-length architecture. In their approach, a neural network (the "discoverer") constructs a convolutional neural network (the "discovered") one layer at a time. In addition to tuning layer parameters, they add and remove skip connections. This, together with some manual post-processing, gets them very close to the (current) state of the art. (Additionally, they surpassed the state of the art on a sequence-to-sequence problem.) Baker et al. (2016) use Q-learning to also discover a network one layer at a time, but in their approach, the number of layers is decided by the discoverer. This is a desirable feature, as it would allow a system to construct shallow or deep solutions, as may be the requirements of the dataset at hand. Different datasets would not require specially tuning the algorithm. Comparisons among these methods are difficult because they explore very different search spaces and have very different initial conditions (Table 2).

Tangentially, there has also been neuro-evolution work on LSTM structure (Bayer et al., 2009; Zaremba, 2015), but this is beyond the scope of this paper. Also related to this work is that of Saxena & Verbeek (2016), who embed convolutions with different parameters into a species of "super-network" with many parallel paths. Their algorithm then selects and ensembles paths in the super-network. Finally, canonical approaches to hyper-parameter search are grid search (used in Zagoruyko & Komodakis (2016), for example) and random search, the latter being the better of the two (Bergstra & Bengio, 2012).
Our approach builds on previous work, with some important differences. We explore large model-architecture search spaces starting with basic initial conditions to avoid priming the system with information about known good strategies for the specific dataset at hand. Our encoding is different from the neuro-evolution methods mentioned above: we use a simplified graph as our DNA, which is transformed to a full neural network graph for training and evaluation (Section 3). Some of the mutations acting on this DNA are reminiscent of NEAT. However, instead of single nodes, one mutation can insert whole layers, i.e. tens to hundreds of nodes at a time. We also allow for these layers to be removed, so that the evolutionary process can simplify an architecture in addition to complexifying it. Layer parameters are also mutable, but we do not prescribe a small set of possible values to choose from, to allow for a larger search space. We do not use fitness sharing. We report additional results using recombination, but for the most part, we used mutation only. On the other hand, we do use back-propagation to optimize the weights, which can be inherited across mutations. Together with a learning rate mutation, this allows the exploration of the space of learning rate schedules, yielding fully trained models at the end of the evolutionary process (Section 3). Tables 1 and 2 compare our approach with hand-designed architectures and with other neuro-discovery techniques, respectively.
# 3. Methods

# 3.1. Evolutionary Algorithm

To automatically search for high-performing neural network architectures, we evolve a population of models. Each model, or individual, is a trained architecture. The model's accuracy on a separate validation dataset is a measure of the individual's quality or fitness. During each evolutionary step, a computer (a worker) chooses two individuals at random from this population and compares their fitnesses. The worst of the pair is immediately removed from the population: it is killed. The best of the pair is selected to be a parent, that is, to undergo reproduction. By this we mean that the worker creates a copy of the parent and modifies this copy by applying a mutation, as described below. We will refer to this modified copy as the child. After the worker creates the child, it trains this child, evaluates it on the validation set, and puts it back into the population. The child then becomes alive, i.e. free to act as a parent. Our scheme, therefore, uses repeated pairwise competitions of random individuals, which makes it an example of tournament selection (Goldberg & Deb, 1991). Using pairwise comparisons instead of whole-population operations prevents workers from idling when they finish early. Code and more detail about the methods described below can be found in Supplementary Section S1.
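One evolutionary step can be sketched as follows; this is our own illustrative Python (the names are ours, and the real system is the distributed implementation described next):

```python
import copy
import random

def evolution_step(population, train_and_eval, mutate):
    """One tournament-selection step: compare two random individuals,
    kill the worse one, and reproduce the better one."""
    a, b = random.sample(population, 2)
    worse, better = (a, b) if a.fitness < b.fitness else (b, a)
    population.remove(worse)               # the worst of the pair is killed
    child = mutate(copy.deepcopy(better))  # copy the parent, apply a mutation
    child.fitness = train_and_eval(child)  # train the child and validate it
    population.append(child)               # the child becomes alive
```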
Using this strategy to search large spaces of complex image models requires considerable computation. To achieve scale, we developed a massively-parallel, lock-free infrastructure. Many workers operate asynchronously on different computers. They do not communicate directly with each other. Instead, they use a shared file-system, where the population is stored. The file-system contains directories that represent the individuals. Operations on these individuals, such as the killing of one, are represented as atomic renames on the directory². Occasionally, a worker may concurrently modify the individual another worker is operating on. In this case, the affected worker simply gives up and tries again. The population size is 1000 individuals, unless otherwise stated. The number of workers is always 1/4 of the population size. To allow for long run-times with a limited amount of space, dead individuals' directories are frequently garbage-collected.

²The use of the file-name string to contain key information about the individual was inspired by Breuel & Shafait (2010), and it speeds up disk access enormously. In our case, the file name contains the state of the individual (alive, dead, training, etc.).

# 3.2. Encoding and Mutations

Individual architectures are encoded as a graph that we refer to as the DNA. In this graph, the vertices represent rank-3 tensors or activations. As is standard for a convolutional network, two of the dimensions of the tensor represent the spatial coordinates of the image and the third is the number of channels. Activation functions are applied at the vertices and can be either (i) batch-normalization (Ioffe & Szegedy, 2015) with rectified linear units (ReLUs) or (ii) plain linear units. The graph's edges represent identity connections or convolutions and contain the mutable numerical parameters defining the convolution's properties. When multiple edges are incident on a vertex, their spatial scales or numbers of channels may not coincide. However, the vertex must have a single size and number of channels for its activations, so the inconsistent inputs must be resolved. Resolution is done by choosing one of the incoming edges as the primary one. We pick this primary edge to be the one that is not a skip connection. The activations coming from the non-primary edges are reshaped through zeroth-order interpolation in the case of the size and through truncation/padding in the case of the number of channels, as in He et al. (2016).

In addition to the graph, the learning-rate value is also stored in the DNA. A child is similar but not identical to the parent because of the action of a mutation. In each reproduction event, the worker picks a mutation at random from a predetermined set. The set contains the following mutations:
- ALTER-LEARNING-RATE (sampling details below).
- IDENTITY (effectively means "keep training").
- RESET-WEIGHTS (sampled as in He et al. (2015), for example).
- INSERT-CONVOLUTION (inserts a convolution at a random location in the "convolutional backbone", as in Figure 1. The inserted convolution has 3 × 3 filters, strides of 1 or 2 at random, and the same number of channels as its input. May apply batch-normalization and ReLU activation, or none, at random).
- REMOVE-CONVOLUTION.
- ALTER-STRIDE (only powers of 2 are allowed).
- ALTER-NUMBER-OF-CHANNELS (of a random convolution).
- FILTER-SIZE (horizontal or vertical at random, on a random convolution, odd values only).
- INSERT-ONE-TO-ONE (inserts a one-to-one/identity connection, analogous to the insert-convolution mutation).
- ADD-SKIP (identity between random layers).
- REMOVE-SKIP (removes a random skip).
These specific mutations were chosen for their similarity to the actions that a human designer may take when improving an architecture. This may clear the way for hybrid evolutionary and hand-design methods in the future. The probabilities for the mutations were not tuned in any way.

A mutation that acts on a numerical parameter chooses the new value at random around the existing value. All sampling is from uniform distributions. For example, a mutation acting on a convolution with 10 output channels will result in a convolution having between 5 and 20 output channels (that is, half to twice the original value).
All values within the range are possible. As a result, the models are not constrained to a number of filters that is known to work well. The same is true for all other parameters, yielding a "dense" search space. In the case of the strides, this applies to the log-base-2 of the value, to allow activation shapes to match more easily³. In principle, there is also no upper limit to any of the parameters; all model depths are attainable, for example. Up to hardware constraints, the search space is unbounded. The dense and unbounded nature of the parameters results in the exploration of a truly large set of possible architectures.

³For integer DNA parameters, we actually store and mutate a floating-point value. This allows multiple small mutations to have a cumulative effect in spite of integer round-off.
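As an illustration (our own code, not the authors'), the dense sampling rule and the log-base-2 stride handling could look like this:

```python
import random

def mutate_numerical(value: float) -> float:
    """New value drawn uniformly between half and twice the current value."""
    return random.uniform(0.5 * value, 2.0 * value)

def mutate_log2_stride(log2_stride: float) -> float:
    """Strides mutate in log-base-2 space so activation shapes match easily."""
    return mutate_numerical(log2_stride)

def realized_stride(log2_stride: float) -> int:
    """Per footnote 3, the DNA stores a float; the realized stride is a power of 2."""
    return 2 ** round(log2_stride)
```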
# 3.3. Initial Conditions

Every evolution experiment begins with a population of simple individuals, all with a learning rate of 0.1. They are all very bad performers. Each initial individual constitutes just a single-layer model with no convolutions. This conscious choice of poor initial conditions forces evolution to make the discoveries by itself. The experimenter contributes mostly through the choice of mutations that demarcate a search space. Altogether, the use of poor initial conditions and a large search space limits the experimenter's impact. In other words, it prevents the experimenter from "rigging" the experiment to succeed.

# 3.4. Training and Validation

Training and validation are done on the CIFAR-10 dataset. This dataset consists of 50,000 training examples and 10,000 test examples, all of which are 32 × 32 color images labeled with 1 of 10 common object classes (Krizhevsky & Hinton, 2009). 5,000 of the training examples are held out in a validation set; the remaining 45,000 examples constitute our actual training set. The training set is augmented as in He et al. (2016). The CIFAR-100 dataset has the same number of dimensions, colors and examples as CIFAR-10, but uses 100 classes, making it much more challenging.

Training is done with TensorFlow (Abadi et al., 2016), using SGD with a momentum of 0.9 (Sutskever et al., 2013), a batch size of 50, and a weight decay of 0.0001. Each training runs for 25,600 steps, a value chosen to be brief enough so that each individual could be trained in a few seconds to a few hours, depending on model size. The loss function is the cross-entropy. Once training is complete, a single evaluation on the validation set provides the accuracy to use as the individual's fitness. Ensembling was done by majority voting during the testing evaluation; the models used in the ensemble were selected by validation accuracy.
# 3.5. Computation cost

To estimate computation costs, we identified the basic TensorFlow (TF) operations used by our model training and validation, like convolutions, generic matrix multiplications, etc. For each of these TF operations, we estimated the theoretical number of floating-point operations (FLOPs) required. This resulted in a map from TF operation to FLOPs, which is valid for all our experiments.

For each individual within an evolution experiment, we compute the total FLOPs incurred by the TF operations in its architecture over one batch of examples, both during its training (Ft FLOPs) and during its validation (Fv FLOPs). Then we assign to the individual the cost Ft·Nt + Fv·Nv, where Nt and Nv are the numbers of training and validation batches, respectively. The cost of the experiment is then the sum of the costs of all its individuals.

We intend our FLOPs measurement as a coarse estimate only. We do not take into account input/output, data preprocessing, TF graph building or memory-copying operations. Some of these unaccounted operations take place once per training run or once per step, and some have a component that is constant in the model size (such as disk-access latency or input data cropping). We therefore expect the estimate to be more useful for large architectures (for example, those with many convolutions).
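In code form, the cost accounting above is simply the following (a trivial but clarifying sketch; the variable names are ours):

```python
def individual_cost(ft: float, fv: float, nt: int, nv: int) -> float:
    """Cost assigned to one individual: Ft*Nt + Fv*Nv, where Ft/Fv are FLOPs per
    training/validation batch and Nt/Nv are the numbers of batches."""
    return ft * nt + fv * nv

def experiment_cost(individuals) -> float:
    """The cost of an experiment is the sum of the costs of all its individuals."""
    return sum(individual_cost(i.ft, i.fv, i.nt, i.nv) for i in individuals)
```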
# 3.6. Weight Inheritance

We need architectures that are trained to completion within an evolution experiment. If this does not happen, we are forced to retrain the best model at the end, possibly having to explore its hyper-parameters. Such extra exploration tends to depend on the details of the model being retrained. On the other hand, 25,600 steps are not enough to fully train each individual, and training a large model to completion is prohibitively slow for evolution. To resolve this dilemma, we allow the children to inherit the parents' weights whenever possible. Namely, if a layer has matching shapes, the weights are preserved. Consequently, some mutations preserve all the weights (like the identity or learning-rate mutations), some preserve none (the weight-resetting mutation), and most preserve some but not all. An example of the latter is the filter-size mutation: only the filters of the convolution being mutated will be discarded.
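A minimal sketch of the inheritance rule (ours, assuming parameters are kept in a name-to-array mapping):

```python
import numpy as np

def inherit_weights(child_weights: dict, parent_weights: dict) -> None:
    """Copy parent weights into the child wherever names and shapes match;
    everything else keeps its freshly initialized values."""
    for name, w in parent_weights.items():
        if name in child_weights and child_weights[name].shape == w.shape:
            child_weights[name] = np.copy(w)
```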
# 3.7. Reporting Methodology

To avoid over-fitting, neither the evolutionary algorithm nor the neural network training ever sees the testing set. Each time we refer to "the best model", we mean the model with the highest validation accuracy. However, we always report the test accuracy. This applies not only to the choice of the best individual within an experiment, but also to the choice of the best experiment. Moreover, we only include experiments that we managed to reproduce, unless explicitly noted. Any statistical analysis was fully decided upon before seeing the results of the experiment reported, to avoid tailoring our analysis to our experimental data (Simmons et al., 2011).

# 4. Experiments and Results

We want to answer the following questions:

- Can a simple one-shot evolutionary process start from trivial initial conditions and yield fully trained models that rival hand-designed architectures?
- What are the variability in outcomes, the parallelizability, and the computation cost of the method?
- Can an algorithm designed iterating on CIFAR-10 be applied, without any changes at all, to CIFAR-100 and still produce competitive models?
We used the algorithm in Section 3 to perform several experiments. Each experiment evolves a population in a few days, typified by the example in Figure 1. The figure also contains examples of the architectures discovered, which turn out to be surprisingly simple. Evolution attempts skip connections but frequently rejects them.

To get a sense of the variability in outcomes, we repeated the experiment 5 times. Across all 5 experiment runs, the best model by validation accuracy has a testing accuracy of 94.6%. Not all experiments reach the same accuracy, but they get close (µ = 94.1%, σ = 0.4). Fine differences in the experiment outcome may be somewhat distinguishable by validation accuracy (correlation coefficient = 0.894). The total amount of computation across all 5 experiments was 4×10²⁰ FLOPs (or 9×10¹⁹ FLOPs on average per experiment). Each experiment was distributed over 250 parallel workers (Section 3.1). Figure 2 shows the progress of the experiments in detail.

As a control, we disabled the selection mechanism, thereby reproducing and killing random individuals. This is the form of random search that is most compatible with our infrastructure. The probability distributions for the parameters are implicitly determined by the mutations. This control only achieves an accuracy of 87.3% in the same amount of run time on the same hardware (Figure 2). The total amount of computation was 2×10¹⁷ FLOPs. The low FLOP count is a consequence of random search generating many small, inadequate models that train quickly but consume roughly constant amounts of setup time (not included in the FLOP count). We attempted to minimize this overhead by avoiding unnecessary disk access operations, to no avail: too much overhead remains spent on a combination of neural network setup, data augmentation, and training step initialization.
We also ran a partial control where the weight-inheritance mechanism is disabled. This run also results in a lower accuracy (92.2%) in the same amount of time (Figure 2), using 9×10¹⁹ FLOPs. This shows that weight inheritance is important in the process.

Finally, we applied our neuro-evolution algorithm, without any changes and with the same meta-parameters, to CIFAR-100. Our only experiment reached an accuracy of 77.0%, using 2×10²⁰ FLOPs. We did not attempt other datasets. Table 1 shows that both the CIFAR-10 and CIFAR-100 results are competitive with modern hand-designed networks.
Figure 1: Progress of an evolution experiment. Each dot represents an individual in the population. Blue dots (darker, top-right) are alive. The rest have been killed. The four diagrams show examples of discovered architectures. These correspond to the best individual (rightmost) and three of its ancestors. The best individual was selected by its validation accuracy. Evolution sometimes stacks convolutions without any nonlinearity in between ("C", white background), which are mathematically equivalent to a single linear operation. Unlike typical hand-designed architectures, some convolutions are followed by more than one nonlinear function ("C+BN+R+BN+R+...", orange background).
Figure 2: Repeatability of results and controls. In this plot, the vertical axis at wall-time t is defined as the test accuracy of the individual with the highest validation accuracy that became alive at or before t. The inset magnifies a portion of the main graph. The curves show the progress of various experiments, as follows. The top line (solid, blue) shows the mean test accuracy across 5 large-scale evolution experiments. The shaded area around this top line has a width of ±2σ (clearer in inset). The next line down (dashed, orange, main graph and inset) represents a single experiment in which weight-inheritance was disabled, so every individual has to train from random weights. The lowest curve (dotted-dashed) is a random-search control. All experiments occupied the same amount and type of hardware. A small amount of noise in the generalization from the validation to the test set explains why the lines are not monotonically increasing. Note the narrow width of the ±2σ area (main graph and inset), which shows that the high accuracies obtained in evolution experiments are repeatable.
# 5. Analysis

Meta-parameters. We observe that populations evolve until they plateau at some local optimum (Figure 2). The fitness (i.e. validation accuracy) value at this optimum varies between experiments (Figure 2, inset). Since not all experiments reach the highest possible value, some populations are getting "trapped" at inferior local optima. This entrapment is affected by two important meta-parameters (i.e. parameters that are not optimized by the algorithm): the population size and the number of training steps per individual. Below we discuss them and consider their relationship to local optima.

Effect of population size. Larger populations explore the space of models more thoroughly, and this helps reach better optima (Figure 3, left). Note, in particular, that a population of size 2 can get trapped at very low fitness values. Some intuition about this can be gained by considering the fate of a super-fit individual, i.e. an individual such that any one architectural mutation reduces its fitness (even though a sequence of many mutations may improve it). In the case of a population of size 2, if the super-fit individual wins once, it will win every time. After the first win, it will produce a child that is one mutation away. By definition of super-fit, therefore, this child is inferior⁴. Consequently, in the next round of tournament selection, the super-fit individual competes against its child and wins again. This cycle repeats forever and the population is trapped. Even if a sequence of two mutations would allow for an "escape" from the local optimum, such a sequence can never take place. This is only a rough argument to heuristically suggest why a population of size 2 is easily trapped. More generally, Figure 3 (left) empirically demonstrates a benefit from an increase in population size. Theoretical analyses of this dependence are quite complex and assume very specific models of population dynamics; often larger populations are better at handling local optima, at least beyond a size threshold (Weinreich & Chao (2005) and references therein).

⁴Except after identity or learning rate mutations, but these produce a child with the same architecture as the parent.
Figure 3: Dependence on meta-parameters. In both graphs, each circle represents the result of a full evolution experiment. Both vertical axes show the test accuracy for the individual with the highest validation accuracy at the end of the experiment. All populations evolved for the same total wall-clock time. There are 5 data points at each horizontal axis value. LEFT: effect of population size. To economize resources, in these experiments the number of individual training steps is only 2560. Note how the accuracy increases with population size. RIGHT: effect of number of training steps per individual. Note how the accuracy increases with more steps.
Effect of number of training steps. The other meta-parameter is the number T of training steps for each individual. Accuracy increases with T (Figure 3, right). Larger T means an individual needs to undergo fewer identity mutations to reach a given level of training.

Escaping local optima. While we might increase population size or number of steps to prevent a trapped population from forming, we can also free an already trapped population. For example, increasing the mutation rate or resetting all the weights of a population (Figure 4) work well but are quite costly (more details in Supplementary Section S3).
Figure 4: Escaping local optima in two experiments. We used smaller populations and fewer training steps per individual (2560) to make it more likely for a population to get trapped and to reduce resource usage. Each dot represents an individual. The vertical axis is the accuracy. TOP: example of a population of size 100 escaping a local optimum by using a period of increased mutation rate in the middle (Section 5). BOTTOM: example of a population of size 50 escaping a local optimum by means of three consecutive weight-resetting events (Section 5). Details in Supplementary Section S3.
Recombination. None of the results presented so far used recombination. However, we explored three forms of recombination in additional experiments. Following Tuson & Ross (1998), we attempted to evolve the mutation probability distribution too. On top of this, we employed a recombination strategy by which a child could inherit structure from one parent and mutation probabilities from another. The goal was to allow individuals that progressed well due to good mutation choices to quickly propagate such choices to others. In a separate experiment, we attempted recombining the trained weights from two parents in the hope that each parent may have learned different concepts from the training data. In a third experiment, we recombined structures so that the child fused the architectures of both parents side-by-side, generating wide models fast. While none of these approaches improved our recombination-free results, further study seems warranted.
# 6. Conclusion

In this paper we have shown that (i) neuro-evolution is capable of constructing large, accurate networks for two challenging and popular image classification benchmarks; (ii) neuro-evolution can do this starting from trivial initial conditions while searching a very large space; (iii) the process, once started, needs no experimenter participation; and (iv) the process yields fully trained models. Completely training models required weight inheritance (Section 3.6). In contrast to reinforcement learning, evolution provides a natural framework for weight inheritance: mutations can be constructed to guarantee a large degree of similarity between the original and mutated models, as we did. Evolution also has fewer tunable meta-parameters with a fairly predictable effect on the variance of the results, which can be made small.
While we did not focus on reducing computation costs, we hope that future algorithmic and hardware improvements will allow a more economical implementation. In that case, evolution would become an appealing approach to neuro-discovery for reasons beyond the scope of this paper. For example, it "hits the ground running", improving on arbitrary initial models as soon as the experiment begins. The mutations used can implement recent advances in the field and can be introduced without having to restart an experiment. Furthermore, recombination can merge improvements developed by different individuals, even if they come from other populations. Moreover, it may be possible to combine neuro-evolution with other automatic architecture discovery methods.
# Acknowledgements

We wish to thank Vincent Vanhoucke, Megan Kacholia, Rajat Monga, and especially Jeff Dean for their support and valuable input; Geoffrey Hinton, Samy Bengio, Thomas Breuel, Mark DePristo, Vishy Tirumalashetty, Martin Abadi, Noam Shazeer, Yoram Singer, Dumitru Erhan, Pierre Sermanet, Xiaoqiang Zheng, Shan Carter and Vijay Vasudevan for helpful discussions; Thomas Breuel, Xin Pan and Andy Davis for coding contributions; and the larger Google Brain team for help with TensorFlow and training vision models.
# References

Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Baker, Bowen, Gupta, Otkrist, Naik, Nikhil, and Raskar, Ramesh. Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167, 2016.

Bayer, Justin, Wierstra, Daan, Togelius, Julian, and Schmidhuber, Jürgen. Evolving memory cell structures for sequence learning. In International Conference on Artificial Neural Networks, pp. 755–764. Springer, 2009.
1703.01041#33 | Large-Scale Evolution of Image Classifiers | Escaping local optima in two experiments. We used smaller populations and fewer training steps per individual (2560) to make it more likely for a population to get trapped and to reduce resource usage. Each dot represents an individual. The vertical axis is the accuracy. TOP: example of a population of size 100 escaping a local optimum by using a period of increased mutation rate in the middle (Section 5). BOTTOM: example of a population of size 50 escaping a local optimum by means of three consecutive weight resetting events (Section 5). Details in Supplementary Section S3. # Acknowledgements We wish to thank Vincent Vanhoucke, Megan Kacholia, Rajat Monga, and especially Jeff Dean for their support and valuable input; Geoffrey Hinton, Samy Bengio, Thomas Breuel, Mark DePristo, Vishy Tirumalashetty, Martin Abadi, Noam Shazeer, Yoram Singer, Dumitru Erhan, Pierre Sermanet, Xiaoqiang Zheng, Shan Carter and Vijay Vasudevan for helpful discussions; Thomas Breuel, Xin Pan and Andy Davis for coding contributions; and the larger Google Brain team for help with TensorFlow and training vision models. # References Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron C, and Bengio, Yoshua. Maxout networks. International Conference on Machine Learning, 28:1319–1327, 2013. | 1703.01041#32 | 1703.01041#34 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#34 | Large-Scale Evolution of Image Classifiers | Gruau, Frederic. Genetic synthesis of modular neural networks. In Proceedings of the 5th International Conference on Genetic Algorithms, pp. 318–325. Morgan Kaufmann Publishers Inc., 1993. Han, Song, Pool, Jeff, Tran, John, and Dally, William. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015. Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. Tensorflow: | 1703.01041#33 | 1703.01041#35 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#35 | Large-Scale Evolution of Image Classifiers | Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015. Baker, Bowen, Gupta, Otkrist, Naik, Nikhil, and Raskar, Ramesh. | 1703.01041#34 | 1703.01041#36 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#36 | Large-Scale Evolution of Image Classifiers | Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167, 2016. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016. Bayer, Justin, Wierstra, Daan, Togelius, Julian, and Schmidhuber, Jürgen. Evolving memory cell structures for sequence learning. In International Conference on | 1703.01041#35 | 1703.01041#37 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#37 | Large-Scale Evolution of Image Classifiers | Artificial Neural Networks, pp. 755–764. Springer, 2009. Huang, Gao, Liu, Zhuang, Weinberger, Kilian Q, and van der Maaten, Laurens. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a. Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012. Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian Q. | 1703.01041#36 | 1703.01041#38 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#38 | Large-Scale Evolution of Image Classifiers | Deep networks with stochastic depth. In European Conference on Computer Vision, pp. 646–661. Springer, 2016b. Breuel, Thomas and Shafait, Faisal. Automlp: Simple, effective, fully automated learning rate and size adjustment. In The Learning Workshop. Utah, 2010. Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. | 1703.01041#37 | 1703.01041#39 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#39 | Large-Scale Evolution of Image Classifiers | Fernando, Chrisantha, Banarse, Dylan, Reynolds, Malcolm, Besse, Frederic, Pfau, David, Jaderberg, Max, Lanctot, Marc, and Wierstra, Daan. Convolution by evolution: Differentiable pattern producing networks. In Proceedings of the 2016 Genetic and Evolutionary Computation Conference, pp. 109–116. ACM, 2016. Kim, Minyoung and Rigazio, Luca. Deep clustered convolutional kernels. arXiv preprint arXiv:1503.01824, 2015. Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. 2009. Goldberg, David E and Deb, Kalyanmoy. A comparative analysis of selection schemes used in genetic algorithms. Foundations of Genetic Algorithms, 1:69– | 1703.01041#38 | 1703.01041#40 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#40 | Large-Scale Evolution of Image Classifiers | 93, 1991. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012. Goldberg, David E, Richardson, Jon, et al. Genetic algorithms with sharing for multimodal function optimization. In Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms, pp. 41– | 1703.01041#39 | 1703.01041#41 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#41 | Large-Scale Evolution of Image Classifiers | 49. Hillsdale, NJ: Lawrence Erlbaum, 1987. LeCun, Yann, Cortes, Corinna, and Burges, Christopher JC. The MNIST database of handwritten digits, 1998. Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick W, Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. In AISTATS, volume 2, pp. 5, 2015. | 1703.01041#40 | 1703.01041#42 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#42 | Large-Scale Evolution of Image Classifiers | Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400, 2013. Stanley, Kenneth O. Compositional pattern producing networks: A novel abstraction of development. Genetic Programming and Evolvable Machines, 8(2):131–162, 2007. Miller, Geoffrey F, Todd, Peter M, and Hegde, Shailesh U. Designing neural networks using genetic algorithms. In Proceedings of the Third International Conference on Genetic Algorithms, pp. 379– | 1703.01041#41 | 1703.01041#43 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#43 | Large-Scale Evolution of Image Classifiers | 384. Morgan Kaufmann Publishers Inc., 1989. Stanley, Kenneth O and Miikkulainen, Risto. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99–127, 2002. Morse, Gregory and Stanley, Kenneth O. Simple evolutionary optimization can rival stochastic gradient descent in neural networks. In Proceedings of the 2016 Genetic and Evolutionary Computation Conference, pp. 477– | 1703.01041#42 | 1703.01041#44 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#44 | Large-Scale Evolution of Image Classifiers | 484. ACM, 2016. Pugh, Justin K and Stanley, Kenneth O. Evolving multimodal controllers with HyperNEAT. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pp. 735–742. ACM, 2013. Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988. | 1703.01041#43 | 1703.01041#45 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#45 | Large-Scale Evolution of Image Classifiers | Stanley, Kenneth O, D'Ambrosio, David B, and Gauci, Jason. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185–212, 2009. Sutskever, Ilya, Martens, James, Dahl, George E, and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. ICML (3), 28:1139–1147, 2013. Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1– | 1703.01041#44 | 1703.01041#46 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#46 | Large-Scale Evolution of Image Classifiers | 9, 2015. Saxena, Shreyas and Verbeek, Jakob. Convolutional neural fabrics. In Advances in Neural Information Processing Systems, pp. 4053–4061, 2016. Tuson, Andrew and Ross, Peter. Adapting operator settings in genetic algorithms. Evolutionary Computation, 6(2):161–184, 1998. Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. | 1703.01041#45 | 1703.01041#47 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#47 | Large-Scale Evolution of Image Classifiers | Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. Simmons, Joseph P, Nelson, Leif D, and Simonsohn, Uri. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11):1359–1366, 2011. Verbancsics, Phillip and Harguess, Josh. Generative neuroevolution for deep learning. arXiv preprint arXiv:1312.5355, 2013. Weinreich, Daniel M and Chao, Lin. Rapid evolutionary escape by large populations from local fitness peaks is likely in nature. Evolution, 59(6):1175–1182, 2005. | 1703.01041#46 | 1703.01041#48 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#48 | Large-Scale Evolution of Image Classifiers | Weyand, Tobias, Kostrikov, Ilya, and Philbin, James. Planet-photo geolocation with convolutional neural networks. In European Conference on Computer Vision, pp. 37–55. Springer, 2016. Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951–2959, 2012. Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V., Norouzi, Mohammad, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. Zagoruyko, Sergey and Komodakis, Nikos. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. Springenberg, Jost Tobias, Dosovitskiy, Alexey, Brox, Thomas, and Riedmiller, Martin. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. arXiv preprint arXiv:1505.00387, 2015. Zaremba, Wojciech. An empirical exploration of recurrent network architectures. 2015. Zoph, Barret and Le, Quoc V. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016. # Large-Scale Evolution of Image Classifiers | 1703.01041#47 | 1703.01041#49 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#49 | Large-Scale Evolution of Image Classifiers | # Supplementary Material # S1. Methods Details This section contains additional implementation details, roughly following the order in Section 3. Short code snippets illustrate the ideas. The code is not intended to run on its own and it has been highly edited for clarity. In our implementation, each worker runs an outer loop that is responsible for selecting a pair of random individuals from the population. The individual with the highest fitness usually becomes a parent and the one with the lowest fitness is usually killed (Section 3.1). Occasionally, either of these two actions is not carried out in order to keep the population size close to a set-point:

def evolve_population(self):
  # Iterate indefinitely.
  while True:
    # Select two random individuals from the population.
    valid_individuals = []
    for individual in self.load_individuals():  # Only loads the IDs and states.
      if individual.state in [TRAINING, ALIVE]:
        valid_individuals.append(individual)
    individual_pair = random.sample(valid_individuals, 2)

    for individual in individual_pair:
      # Sync changes from other workers from file-system. Loads everything else.
      individual.update_if_necessary()

      # Ensure the individual is fully trained.
      if individual.state == TRAINING:
        self._train(individual)

    # Select by fitness (accuracy).
    individual_pair.sort(key=lambda i: i.fitness, reverse=True)
    better_individual = individual_pair[0]
    worse_individual = individual_pair[1]

    # If the population is not too small, kill the worst of the pair.
    if self._population_size() >= self._population_size_setpoint:
      self._kill_individual(worse_individual)

    # If the population is not too large, reproduce the best of the pair.
    if self._population_size() < self._population_size_setpoint:
      self._reproduce_and_train_individual(better_individual)

Much of the code is wrapped in try-except blocks to handle various kinds of errors. | 1703.01041#48 | 1703.01041#50 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#50 | Large-Scale Evolution of Image Classifiers | These have been removed from the code snippets for clarity. For example, the method above would be wrapped like this:

def evolve_population(self):
  while True:
    try:
      # Select two random individuals from the population.
      ...
    except exceptions.PopulationTooSmallException:
      self._create_new_individual()
      continue
    except exceptions.ConcurrencyException:
      # Another worker did something that interfered with the action of this worker.
      # Abandon the current task and keep going.
      continue

The encoding for an individual is represented by a serializable DNA class instance containing all information except for the trained weights (Section 3.2). For all results in this paper, this encoding is a directed, acyclic graph where edges represent convolutions and vertices represent nonlinearities. This is a sketch of the DNA class:

class DNA(object):

  def __init__(self, dna_proto):
    """Initializes the 'DNA' instance from a protocol buffer.

    The 'dna_proto' is a protocol buffer used to restore the DNA state from
    disk. Together with the corresponding 'to_proto' method, they allow for a
    serialization-deserialization mechanism.
    """
    # Allows evolving the learning rate, i.e. exploring the space of
    # learning rate schedules.
    self.learning_rate = dna_proto.learning_rate

    self._vertices = {}  # String vertex ID to 'Vertex' instance.
    for vertex_id in dna_proto.vertices:
      self._vertices[vertex_id] = Vertex(
          vertex_proto=dna_proto.vertices[vertex_id])

    self._edges = {}  # String edge ID to 'Edge' instance.
    for edge_id in dna_proto.edges:
      self._edges[edge_id] = Edge(edge_proto=dna_proto.edges[edge_id])

    ... | 1703.01041#49 | 1703.01041#51 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#51 | Large-Scale Evolution of Image Classifiers | def to_proto(self):
    """Returns this instance in protocol buffer form."""
    dna_proto = dna_pb2.DnaProto(learning_rate=self.learning_rate)

    for vertex_id, vertex in self._vertices.iteritems():
      dna_proto.vertices[vertex_id].CopyFrom(vertex.to_proto())

    for edge_id, edge in self._edges.iteritems():
      dna_proto.edges[edge_id].CopyFrom(edge.to_proto())

    ...
    return dna_proto

  def add_edge(self, from_vertex_id, to_vertex_id, edge_type, edge_id):
    """Adds an edge to the DNA graph, ensuring internal consistency."""
    # 'EdgeProto' defines defaults for other attributes.
    edge = Edge(EdgeProto(
        from_vertex=from_vertex_id, to_vertex=to_vertex_id, type=edge_type))
    self._edges[edge_id] = edge
    self._vertices[from_vertex_id].edges_out.add(edge_id)
    self._vertices[to_vertex_id].edges_in.add(edge_id)
    return edge

  # Other methods like 'add_edge' to manipulate the graph structure.
  ...

The DNA holds Vertex and Edge instances. The Vertex class looks like this:

class Vertex(object):

  def __init__(self, vertex_proto):
    # Relationship to the rest of the graph. | 1703.01041#50 | 1703.01041#52 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#52 | Large-Scale Evolution of Image Classifiers | self.edges_in = set(vertex_proto.edges_in)    # Incoming edge IDs.
    self.edges_out = set(vertex_proto.edges_out)  # Outgoing edge IDs.

    # The type of activations.
    if vertex_proto.HasField('linear'):
      self.type = LINEAR  # Linear activations.
    elif vertex_proto.HasField('bn_relu'):
      self.type = BN_RELU  # ReLU activations with batch-normalization.
    else:
      raise NotImplementedError()

    # Some parts of the graph can be prevented from being acted upon by mutations.
    # The following boolean flags control this.
    self.inputs_mutable = vertex_proto.inputs_mutable
    self.outputs_mutable = vertex_proto.outputs_mutable
    self.properties_mutable = vertex_proto.properties_mutable

    # Each vertex represents a 2^s x 2^s x d block of nodes. s and d are positive
    # integers computed dynamically from the in-edges. s stands for "scale" so
    # that 2^s x 2^s is the spatial size of the activations. d stands for "depth",
    # the number of channels.

  def to_proto(self):
    ...

The Edge class looks like this:

class Edge(object):

  def __init__(self, edge_proto):
    # Relationship to the rest of the graph.
    self.from_vertex = edge_proto.from_vertex  # Source vertex ID.
    self.to_vertex = edge_proto.to_vertex      # Destination vertex ID.

    if edge_proto.HasField('conv'): | 1703.01041#51 | 1703.01041#53 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#53 | Large-Scale Evolution of Image Classifiers | # In this case, the edge represents a convolution.
      self.type = CONV

      # Controls the depth (i.e. number of channels) in the output, relative to the
      # input. For example if there is only one input edge with a depth of 16 channels
      # and 'self._depth_factor' is 2, then this convolution will result in an output
      # depth of 32 channels. Multiple inputs with conflicting depth must undergo
      # depth resolution first.
      self.depth_factor = edge_proto.conv.depth_factor

      # Controls the shape of the convolution filters (i.e. transfer function).
      # This parameterization ensures that the filter width and height are odd
      # numbers: filter_width = 2 * filter_half_width + 1.
      self.filter_half_width = edge_proto.conv.filter_half_width
      self.filter_half_height = edge_proto.conv.filter_half_height

      # Controls the strides of the convolution. | 1703.01041#52 | 1703.01041#54 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#54 | Large-Scale Evolution of Image Classifiers | It will be 2^stride_scale.
      # Note that conflicting input scales must undergo scale resolution. This
      # controls the spatial scale of the output activations relative to the
      # spatial scale of the input activations.
      self.stride_scale = edge_proto.conv.stride_scale

    elif edge_proto.HasField('identity'):
      self.type = IDENTITY
    else:
      raise NotImplementedError()

    # In case depth or scale resolution is necessary due to conflicts in inputs,
    # these integer parameters determine which of the inputs takes precedence in
    # deciding the resolved depth or scale.
    self.depth_precedence = edge_proto.depth_precedence | 1703.01041#53 | 1703.01041#55 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#55 | Large-Scale Evolution of Image Classifiers | self.scale_precedence = edge_proto.scale_precedence

  def to_proto(self):
    ...

Mutations act on DNA instances. The set of mutations restricts the space explored somewhat (Section 3.2). The following are some example mutations. The AlterLearningRateMutation simply randomly modifies the attribute in the DNA:

class AlterLearningRateMutation(Mutation):
  """Mutation that modifies the learning rate."""

  def mutate(self, dna):
    mutated_dna = copy.deepcopy(dna)

    # Mutate the learning rate by a random factor between 0.5 and 2.0,
    # uniformly distributed in log scale.
    factor = 2**random.uniform(-1.0, 1.0)
    mutated_dna.learning_rate = dna.learning_rate * factor

    return mutated_dna

Many mutations modify the structure. Mutations to insert and excise vertex-edge pairs build up a main convolutional column, while mutations to add and remove edges can handle the skip connections. For example, the AddEdgeMutation can add a skip connection between random vertices. | 1703.01041#54 | 1703.01041#56 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#56 | Large-Scale Evolution of Image Classifiers | class AddEdgeMutation(Mutation):
  """Adds a single edge to the graph."""

  def mutate(self, dna):
    # Try the candidates in random order until one has the right connectivity.
    for from_vertex_id, to_vertex_id in self._vertex_pair_candidates(dna):
      mutated_dna = copy.deepcopy(dna)
      if self._mutate_structure(mutated_dna, from_vertex_id, to_vertex_id):
        return mutated_dna
    raise exceptions.MutationException()  # Try another mutation.

  def _vertex_pair_candidates(self, dna):
    """Yields connectable vertex pairs."""
    from_vertex_ids = _find_allowed_vertices(dna, self._to_regex, ...)
    if not from_vertex_ids:
      raise exceptions.MutationException()  # Try another mutation.
    random.shuffle(from_vertex_ids)

    to_vertex_ids = _find_allowed_vertices(dna, self._from_regex, ...)
    if not to_vertex_ids:
      raise exceptions.MutationException()  # Try another mutation.
    random.shuffle(to_vertex_ids)

    for to_vertex_id in to_vertex_ids:
      # Avoid back-connections.
      disallowed_from_vertex_ids, _ = topology.propagated_set(to_vertex_id)
      for from_vertex_id in from_vertex_ids:
        if from_vertex_id in disallowed_from_vertex_ids:
          continue
        # This pair does not generate a cycle, so we yield it.
        yield from_vertex_id, to_vertex_id

  def _mutate_structure(self, dna, from_vertex_id, to_vertex_id):
    """Adds the edge to the DNA instance."""
    edge_id = _random_id()
    edge_type = random.choice(self._edge_types)
    if dna.has_edge(from_vertex_id, to_vertex_id):
      return False
    else:
      dna.add_edge(from_vertex_id, to_vertex_id, edge_type, edge_id) | 1703.01041#55 | 1703.01041#57 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#57 | Large-Scale Evolution of Image Classifiers | return True

For clarity, we omitted the details of a vertex ID targeting mechanism based on regular expressions, which is used to constrain where the additional edges are placed. This mechanism ensured the skip connections only joined points in the "main convolutional backbone" of the convnet. The precedence range is used to give the main backbone precedence over the skip connections when resolving scale and depth conflicts in the presence of multiple incoming edges to a vertex. Also omitted are details about the attributes of the edge to add. | 1703.01041#56 | 1703.01041#58 | 1703.01041 | [
"1502.03167"
]
|
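The omitted targeting helper can be sketched as follows. This is our illustration of what _find_allowed_vertices could look like, assuming vertex IDs are matched against a regular expression and the mutability flags from the Vertex class are honoured; it is not the released code.

import re

def _find_allowed_vertices(dna, vertex_id_regex):
  """Hypothetical sketch: IDs of vertices that a new edge may attach to."""
  pattern = re.compile(vertex_id_regex)
  allowed_ids = []
  for vertex_id, vertex in dna._vertices.iteritems():
    if not pattern.match(vertex_id):
      continue  # Vertex is outside the targeted region (e.g. not in the backbone).
    if not (vertex.inputs_mutable and vertex.outputs_mutable):
      continue  # Mutations may not rewire this vertex.
    allowed_ids.append(vertex_id)
  return allowed_ids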
1703.01041#58 | Large-Scale Evolution of Image Classifiers | To evaluate an individual's fitness, its DNA is unfolded into a TensorFlow model by the Model class. This describes how each Vertex and Edge should be interpreted. For example:

class Model(object):
  ...

  def _compute_vertex_nonlinearity(self, tensor, vertex):
    """Applies the necessary vertex operations depending on the vertex type."""
    if vertex.type == LINEAR:
      pass
    elif vertex.type == BN_RELU:
      tensor = slim.batch_norm(
          inputs=tensor, decay=0.9, center=True, scale=True,
          epsilon=self._batch_norm_epsilon, activation_fn=None,
          updates_collections=None, is_training=self.is_training,
          scope='batch_norm')
      tensor = tf.maximum(tensor, vertex.leakiness * tensor, name='relu')
    else:
      raise NotImplementedError()
    return tensor

  def _compute_edge_connection(self, tensor, edge, init_scale):
    """Applies the necessary edge connection ops depending on the edge type."""
    scale, depth = self._get_scale_and_depth(tensor)
    if edge.type == CONV:
      scale_out = scale
      depth_out = edge.depth_out(depth)
      stride = 2**edge.stride_scale
      # 'init_scale' is used to normalize the initial weights in the case of
      # multiple incoming edges.
      weights_initializer = slim.variance_scaling_initializer(
          factor=2.0 * init_scale**2, uniform=False)
      weights_regularizer = slim.l2_regularizer(
          weight=self._dna.weight_decay_rate)
      tensor = slim.conv2d(
          inputs=tensor, num_outputs=depth_out,
          kernel_size=[edge.filter_width(), edge.filter_height()],
          stride=stride, weights_initializer=weights_initializer,
          weights_regularizer=weights_regularizer, biases_initializer=None,
          activation_fn=None, scope='conv')
    elif edge.type == IDENTITY:
      pass
    else:
      raise NotImplementedError()
    return tensor

The training and evaluation (Section 3.4) is done in a fairly standard way, similar to that in the tensorflow.org tutorials for image models. | 1703.01041#57 | 1703.01041#59 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#59 | Large-Scale Evolution of Image Classifiers | The individual's fitness is the accuracy on a held-out validation dataset, as described in the main text. Parents are able to pass some of their learned weights to their children (Section 3.6). When a child is constructed from a parent, it inherits IDs for the different sets of trainable weights (convolution filters, batch norm shifts, etc.). These IDs are embedded in the TensorFlow variable names. When the child's weights are initialized, those that have a matching ID in the parent are inherited, provided they have the same shape:

graph = tf.Graph() | 1703.01041#58 | 1703.01041#60 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#60 | Large-Scale Evolution of Image Classifiers | with graph.as_default():
  # Build the neural network using the 'Model' class and the 'DNA' instance.
  ...

tf.Session.reset(self._master)
with tf.Session(self._master, graph=graph) as sess:
  # Initialize all variables
  ...

  # Make sure we can inherit batch-norm variables properly.
  # The TF-slim batch-norm variables must be handled separately here because some
  # of them are not trainable (the moving averages).
  batch_norm_extras = [x for x in tf.all_variables() if ( | 1703.01041#59 | 1703.01041#61 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#61 | Large-Scale Evolution of Image Classifiers | x.name.find('moving_var') != -1 or
      x.name.find('moving_mean') != -1)]

  # These are the variables that we will attempt to inherit from the parent.
  vars_to_restore = tf.trainable_variables() + batch_norm_extras

  # Copy as many of the weights as possible.
  if mutated_weights:
    assignments = []
    for var in vars_to_restore:
      stripped_name = var.name.split(':')[0]
      if stripped_name in mutated_weights:
        shape_mutated = mutated_weights[stripped_name].shape
        shape_needed = var.get_shape()
        if shape_mutated == shape_needed:
          assignments.append(var.assign(mutated_weights[stripped_name]))
    sess.run(assignments)

# S2. FLOPs estimation

This section describes how we estimate the number of floating point operations (FLOPs) required for an entire evolution experiment. To obtain the total FLOPs, we sum the FLOPs for each individual ever constructed. | 1703.01041#60 | 1703.01041#62 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#62 | Large-Scale Evolution of Image Classifiers | An individual's FLOPs are the sum of its training and validation FLOPs. Namely, the individual FLOPs are given by FtNt + FvNv, where Ft is the FLOPs in one training step, Nt is the number of training steps, Fv is the FLOPs required to evaluate one validation batch of examples and Nv is the number of validation batches. The number of training steps and the number of validation batches are known in advance and are constant throughout the experiment. Ft was obtained analytically as the sum of the FLOPs required to compute each operation executed during training (that is, each node in the TensorFlow graph). Fv was found analogously. Below is the code snippet that computes FLOPs for the training of one individual, for example.

import tensorflow as tf
tfprof_logger = tf.contrib.tfprof.python.tools.tfprof.tfprof_logger

def compute_flops():
  """Compute flops for one iteration of training."""
  graph = tf.Graph()
  with graph.as_default():
    # Build model
    ...

  # Run one iteration of training and collect run metadata.
  # This metadata will be used to determine the nodes which were
  # actually executed as well as their argument shapes.
  run_metadata = tf.RunMetadata()
  with tf.Session(graph=graph) as sess:
    feed_dict = {...}
    _ = sess.run( | 1703.01041#61 | 1703.01041#63 | 1703.01041 | [
"1502.03167"
]
|
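The per-individual accounting above reduces to simple arithmetic; as a sketch with illustrative names (not from the released code):

def individual_flops(f_t, n_t, f_v, n_v):
  # F_t * N_t + F_v * N_v, exactly as defined above.
  return f_t * n_t + f_v * n_v

# The experiment total sums this quantity over every individual ever constructed.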
1703.01041#63 | Large-Scale Evolution of Image Classifiers | [train_op],
        feed_dict=feed_dict,
        run_metadata=run_metadata,
        options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE))

  # Compute analytical FLOPs for all nodes in the graph.
  logged_ops = tfprof_logger._get_logged_ops(graph, run_meta=run_metadata)

  # Determine which nodes were executed during one training step
  # by looking at elapsed execution time of each node.
  elapsed_us_for_ops = {}
  for dev_stat in run_metadata.step_stats.dev_stats:
    for node_stat in dev_stat.node_stats:
      name = node_stat.node_name
      elapsed_us = node_stat.op_end_rel_micros - node_stat.op_start_rel_micros
      elapsed_us_for_ops[name] = elapsed_us

  # Compute FLOPs of executed nodes.
  total_flops = 0
  for op in graph.get_operations():
    name = op.name
    if elapsed_us_for_ops.get(name, 0) > 0 and name in logged_ops:
      total_flops += logged_ops[name].float_ops

  return total_flops

Note that we also need to declare how to compute FLOPs for each operation type present (that is, for each node type in the TensorFlow graph). We did this for the following operation types (and their gradients, where applicable): • unary math operations: square, square root, log, negation, element-wise inverse, softmax, L2 norm; | 1703.01041#62 | 1703.01041#64 | 1703.01041 | [
"1502.03167"
]
|
1703.01041#64 | Large-Scale Evolution of Image Classifiers | • binary element-wise operations: addition, subtraction, multiplication, division, minimum, maximum, power, squared difference, comparison operations; • reduction operations: mean, sum, argmax, argmin; • convolution, average pooling, max pooling; • matrix multiplication. For example, for the element-wise addition operation type:

from tensorflow.python.framework import graph_util
from tensorflow.python.framework import ops

@ops.RegisterStatistics("Add", "flops")
def _add_flops(graph, node):
  """Compute flops for the Add operation."""
  out_shape = graph_util.tensor_shape_from_node_def_name(graph, node.name)
  out_shape.assert_is_fully_defined()
  return ops.OpStats("flops", out_shape.num_elements()) | 1703.01041#63 | 1703.01041#65 | 1703.01041 | [
"1502.03167"
]
|
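A matrix-multiplication handler in the same style is sketched below; it uses the standard count of one multiply and one add per inner-product term. This particular handler is our illustration, not a quote from the paper's code.

@ops.RegisterStatistics("MatMul", "flops")
def _matmul_flops(graph, node):
  """Compute flops for the MatMul operation (sketch)."""
  transpose_a = node.attr["transpose_a"].b
  a_shape = graph_util.tensor_shape_from_node_def_name(graph, node.input[0])
  a_shape.assert_is_fully_defined()
  # k is the shared inner dimension of the two factors.
  k = int(a_shape[0]) if transpose_a else int(a_shape[1])
  out_shape = graph_util.tensor_shape_from_node_def_name(graph, node.name)
  out_shape.assert_is_fully_defined()
  # 2 * k operations per output element (k multiplies, k - 1 adds, rounded up).
  return ops.OpStats("flops", out_shape.num_elements() * 2 * k)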
1703.01041#65 | Large-Scale Evolution of Image Classifiers | # S3. Escaping Local Optima Details # S3.1. Local optima and mutation rate Entrapment at a local optimum may mean a general lack of exploration in our search algorithm. To encourage more exploration, we increased the mutation rate (Section 5). In more detail, we carried out experiments in which we first waited until the populations converged. Some reached higher fitnesses and others got trapped at poor local optima. At this point, we modified the algorithm slightly: instead of performing 1 mutation at each reproduction event, we performed 5 mutations. We evolved with this increased mutation rate for a while and finally we switched back to the original single-mutation version. During the 5-mutation stage, some populations escape the local optimum, as in Figure 4 (top), and none | 1703.01041#64 | 1703.01041#66 | 1703.01041 | [
"1502.03167"
]
|
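The switch between the single-mutation and 5-mutation regimes only needs to touch the reproduction step; a hypothetical sketch (our naming, assuming the Mutation classes shown in Section S1) is:

def _mutate_dna(self, parent_dna, num_mutations=1):
  # num_mutations=1 is the normal regime; num_mutations=5 was used during the
  # increased-mutation-rate stage described above. Each Mutation.mutate call
  # returns a mutated deep copy, so the parent's DNA is never modified.
  child_dna = parent_dna
  for _ in range(num_mutations):
    child_dna = random.choice(self._mutations).mutate(child_dna)
  return child_dna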
1703.01041#66 | Large-Scale Evolution of Image Classifiers | get worse. Across populations, however, the escape was not frequent enough (8 out of 10) and took too long for us to propose this as an efficient technique to escape optima. An interesting direction for future work would be to study more elegant methods to manage the exploration vs. exploitation trade-off in large-scale neuro-evolution. # S3.2. Local optima and weight resetting The identity mutation offers a mechanism for populations to get trapped in local optima. Some individuals may get trained more than their peers just because they happen to have undergone more identity mutations. It may, therefore, occur that a poor architecture becomes more accurate than potentially better architectures that still need more training. In the extreme case, the well-trained poor architecture may become a super-fit individual and take over the population. Suspecting this scenario, we performed experiments in which we simultaneously reset all the weights in a population that had plateaued (Section 5). | 1703.01041#65 | 1703.01041#67 | 1703.01041 | [
"1502.03167"
]
|
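A simultaneous reset only needs to discard the trained weights while keeping every architecture; a hypothetical helper (our naming, not the released code) could be:

def reset_population_weights(individuals):
  """Clears inherited weights so every individual retrains from scratch."""
  for individual in individuals:
    individual.trained_weights = None  # Next training run re-initializes randomly.
    individual.fitness = None          # Stale accuracies must be re-measured.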
1703.01041#67 | Large-Scale Evolution of Image Classifiers | The simultaneous reset should put all the individuals on the same footing, so individuals that had accidentally trained more no longer have the unfair advantage. Indeed, the results matched our expectation. The populations suffer a temporary degradation in fitness immediately after the reset, as the individuals need to retrain. Later, however, the populations end up reaching higher optima (for example, Figure 4, bottom). Across 10 experiments, we find that three successive resets tend to cause improvement (p < 0.001). We mention this effect merely as evidence of this particular drawback of weight inheritance. In our main results, we circumvented the problem by using longer training times and larger populations. Future work may explore more efficient solutions. | 1703.01041#66 | 1703.01041 | [
"1502.03167"
]
|
|
1703.00441#0 | Learning to Optimize Neural Nets | # Learning to Optimize Neural Nets # Ke Li 1 Jitendra Malik 1 # Abstract Learning to Optimize (Li & Malik, 2016) is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. | 1703.00441#1 | 1703.00441 | [
"1606.01467"
]
|
|
1703.00441#1 | Learning to Optimize Neural Nets | More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. # 1. Introduction | 1703.00441#0 | 1703.00441#2 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#2 | Learning to Optimize Neural Nets | Machine learning is centred on the philosophy that learning patterns automatically from data is generally better than meticulously crafting rules by hand. This data-driven approach has delivered: today, machine learning techniques can be found in a wide range of application areas, both in AI and beyond. Yet, there is one domain that has conspicuously been left untouched by machine learning: the design of tools that power machine learning itself. One of the most widely used tools in machine learning is optimization algorithms. We have grown accustomed to seeing an optimization algorithm as a black box that takes in a model that we design and the data that we collect and outputs the optimal model parameters. The optimization algorithm itself largely stays static: its design is reserved for human experts, who must toil through many rounds of theoretical analysis and empirical validation to devise a better | 1703.00441#1 | 1703.00441#3 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#3 | Learning to Optimize Neural Nets | optimization algorithm. Given this state of affairs, perhaps it is time for us to start practicing what we preach and learn how to learn. Recently, Li & Malik (2016) and Andrychowicz et al. (2016) introduced two different frameworks for learning optimization algorithms. Whereas Andrychowicz et al. (2016) focuses on learning an optimization algorithm for training models on a particular task, Li & Malik (2016) sets a more ambitious objective of learning an optimization algorithm for training models that is task-independent. We study the latter paradigm in this paper and develop a method for learning an optimization algorithm for high-dimensional stochastic optimization problems, like the problem of training shallow neural nets. | 1703.00441#2 | 1703.00441#4 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#4 | Learning to Optimize Neural Nets | Under the "Learning to Optimize" framework proposed by Li & Malik (2016), the problem of learning an optimization algorithm is formulated as a reinforcement learning problem. We consider the general structure of an unconstrained continuous optimization algorithm, as shown in Algorithm 1. In each iteration, the algorithm takes a step Δx and uses it to update the current iterate x^(i). In hand-engineered optimization algorithms, Δx is computed using some fixed formula π that depends on the objective function, the current iterate and past iterates. Often, it is simply a function of the current and past gradients. | 1703.00441#3 | 1703.00441#5 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#5 | Learning to Optimize Neural Nets | 1University of California, Berkeley, CA 94720, United States. Correspondence to: Ke Li <[email protected]>.

Algorithm 1 General structure of optimization algorithms
Require: Objective function f
x^(0) ← random point in the domain of f
for i = 1, 2, . . . do
  Δx ← π(f, {x^(0), . . . , x^(i−1)})
  if stopping condition is met then
    return x^(i−1)
  end if
  x^(i) ← x^(i−1) + Δx
end for

Different choices of π yield different optimization algorithms and so each optimization algorithm is essentially characterized by its update formula π | 1703.00441#4 | 1703.00441#6 | 1703.00441 | [
"1606.01467"
]
|
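To make the role of the update formula concrete, here is a direct transcription of Algorithm 1 into Python. This sketch is ours, not from the paper: the callable pi stands in for the update formula, and gradient descent is recovered by one particular hand-engineered choice of pi.

def run_optimizer(pi, f, x0, num_iters=100):
  # Algorithm 1: repeatedly ask the update formula pi for a step and apply it.
  iterates = [x0]
  for _ in range(num_iters):
    delta_x = pi(f, iterates)  # pi may inspect the objective and all past iterates.
    iterates.append(iterates[-1] + delta_x)
  return iterates[-1]

def make_gradient_descent_pi(grad_f, step_size=0.01):
  # Gradient descent is one fixed choice of pi: a step along the negative
  # gradient of the latest iterate. grad_f is an assumed gradient oracle.
  def pi(f, iterates):
    return -step_size * grad_f(iterates[-1])
  return pi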
1703.00441#6 | Learning to Optimize Neural Nets | . Hence, by learning π, we can learn an optimization algorithm. Li & Malik (2016) observed that an optimization algorithm can be viewed as a Markov decision process (MDP), where the state includes the current iterate, the action is the step vec- | 1703.00441#5 | 1703.00441#7 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#7 | Learning to Optimize Neural Nets | tor Δx and the policy is the update formula π. Hence, the problem of learning π simply reduces to a policy search problem. In this paper, we build on the method proposed in (Li & Malik, 2016) and develop an extension that is suited to learning optimization algorithms for high-dimensional stochastic problems. We use it to learn an optimization algorithm for training shallow neural nets and show that it outperforms popular hand-engineered optimization algorithms like ADAM (Kingma & Ba, 2014), AdaGrad (Duchi et al., 2011) and RMSprop (Tieleman & Hinton, 2012) and an optimization algorithm learned using the supervised learning method proposed in (Andrychowicz et al., 2016). Furthermore, we demonstrate that our optimization algorithm learned from the experience of training on MNIST generalizes to training on other datasets that have very dissimilar statistics, like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. | 1703.00441#6 | 1703.00441#8 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#8 | Learning to Optimize Neural Nets | # 2. Related Work The line of work on learning optimization algorithms is fairly recent. Li & Malik (2016) and Andrychowicz et al. (2016) were the first to propose learning general optimization algorithms. Li & Malik (2016) explored learning task-independent optimization algorithms and used reinforcement learning to learn the optimization algorithm, while Andrychowicz et al. (2016) investigated learning task-dependent optimization algorithms and used supervised learning. In the special case where objective functions that the optimization algorithm is trained on are loss functions for training other models, these methods can be used for "learning to learn" or "meta-learning". While these terms have appeared from time to time in the literature (Baxter et al., 1995; Vilalta & Drissi, 2002; Brazdil et al., 2008; Thrun & Pratt, 2012), they have been used by different authors to refer to disparate methods with different purposes. These methods all share the objective of learning some form of meta-knowledge about learning, but differ in the type of meta-knowledge they aim to learn. We can divide the various methods into the following three categories. # 2.1. Learning What to Learn Methods in this category (Thrun & Pratt, 2012) aim to learn what parameter values of the base-level learner are useful across a family of related tasks. The meta-knowledge captures commonalities shared by tasks in the family, which enables learning on a new task from the family to be done more quickly. Most early methods fall into this category; this line of work has blossomed into an area that has later become known as transfer learning and multi-task learning. | 1703.00441#7 | 1703.00441#9 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#9 | Learning to Optimize Neural Nets | # 2.2. Learning Which Model to Learn Methods in this category (Brazdil et al., 2008) aim to learn which base-level learner achieves the best performance on a task. The meta-knowledge captures correlations between different tasks and the performance of different base-level learners on those tasks. One challenge under this setting is to decide on a parameterization of the space of base-level learners that is both rich enough to be capable of representing disparate base-level learners and compact enough to permit tractable search over this space. Brazdil et al. (2003) proposes a nonparametric representation and stores examples of different base-level learners in a database, whereas Schmidhuber (2004) proposes representing base-level learners as general-purpose programs. The former has limited representation power, while the latter makes search and learning in the space of base-level learners intractable. Hochreiter et al. (2001) views the (online) training procedure of any base-learner as a black box function that maps a sequence of training examples to a sequence of predictions and models it as a recurrent neural net. Under this formulation, meta-training reduces to training the recurrent net, and the base-level learner is encoded in the memory state of the recurrent net. Hyperparameter optimization can be seen as another example of methods in this category. The space of base-level learners to search over is parameterized by a predefined set of hyperparameters. Unlike the methods above, multiple trials with different hyperparameter settings on the same task are permitted, and so generalization across tasks is not required. The discovered hyperparameters are generally specific to the task at hand and hyperparameter optimization must be rerun for new tasks. Various kinds of methods have been proposed, such as those based on Bayesian optimization (Hutter et al., 2011; Bergstra et al., 2011; Snoek et al., 2012; Swersky et al., 2013; Feurer et al., 2015), random search (Bergstra & Bengio, 2012) and gradient-based optimization (Bengio, 2000; Domke, 2012; Maclaurin et al., 2015). | 1703.00441#8 | 1703.00441#10 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#10 | Learning to Optimize Neural Nets | # 2.3. Learning How to Learn Methods in this category aim to learn a good algorithm for training a base-level learner. Unlike methods in the previous categories, the goal is not to learn about the outcome of learning, but rather the process of learning. The meta-knowledge captures commonalities in the behaviours of learning algorithms that achieve good performance. The base-level learner and the task are given by the user, so the learned algorithm must generalize across base-level learners and tasks. Since learning in most cases is equivalent to optimizing some objective function, learning a learning algorithm often reduces to learning an optimization algorithm. This problem was explored in (Li & Malik, 2016) and (Andrychowicz et al., 2016). Closely related is (Bengio et al., 1991), which learns a Hebb-like synaptic learning rule that does not depend on the objective function, which does not allow for generalization to different objective functions. Various work has explored learning how to adjust the hyperparameters of hand-engineered optimization algorithms, like the step size (Hansen, 2016; Daniel et al., 2016; Fu et al., 2016) or the damping factor in the Levenberg-Marquardt algorithm (Ruvolo et al., 2009). Related to this line of work is stochastic meta-descent (Bray et al., 2004), which derives a rule for adjusting the step size analytically. A different line of work (Gregor & LeCun, 2010; Sprechmann et al., 2013) parameterizes intermediate operands of special-purpose solvers for a class of optimization problems that arise in sparse coding and learns them using supervised learning. | 1703.00441#9 | 1703.00441#11 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#11 | Learning to Optimize Neural Nets | # 3. Learning to Optimize # 3.1. Setting In the "Learning to Optimize" framework, we are given a set of training objective functions f_1, . . . , f_n drawn from some distribution F. An optimization algorithm A takes an objective function f and an initial iterate x^(0) as input and produces a sequence of iterates x^(1), . . . , x^(T), where x^(T) is the solution found by the optimizer. We are also given a distribution D that generates the initial iterate x^(0) and a meta-loss L, which takes an objective function f and a sequence of iterates x^(1), . . . , x^(T) produced by an optimization algorithm as input and outputs a scalar that measures the quality of the iterates. The goal is to learn an optimization algorithm A* such that E_{f∼F, x^(0)∼D}[L(f, A*(f, x^(0)))] is minimized. The meta-loss is chosen to penalize optimization algorithms that exhibit behaviours we find undesirable, like slow convergence or excessive oscillations. Assuming we would like to learn an algorithm that minimizes the objective function it is given, a good choice of meta-loss would then simply be Σ_{i=1}^{T} f(x^(i)), which can be interpreted as the area under the curve of objective values over time. The objective functions f_1, . . . , f_n may correspond to loss functions for training base-level learners, in which case the algorithm that learns the optimization algorithm can be viewed as a meta-learner. In this setting, each objective function is the loss function for training a particular base-learner on a particular task, and so the set of training objective functions can be loss functions for training a base-learner or a family of base-learners on different tasks. At test time, the learned optimization algorithm is evaluated on unseen objective functions, which correspond to loss functions for training base-learners on new tasks, which may be completely unrelated to tasks used for training the optimization algorithm. Therefore, the learned optimization algorithm must not learn anything about the tasks used for training. Instead, the goal is to learn an optimization algorithm that can exploit the geometric structure of the error surface induced by the base-learners. For example, if the base-level model is a neural net with ReLU activation units, the optimization algorithm should hopefully learn to leverage the piecewise linearity of the model. Hence, there is a clear division of responsibilities between the meta-learner and base-learners. The knowledge learned at the meta-level should be pertinent for all tasks, whereas the knowledge learned at the base-level should be task-specific. The meta-learner should therefore generalize across tasks, whereas the base-learner should generalize across instances. | 1703.00441#10 | 1703.00441#12 | 1703.00441 | [
"1606.01467"
]
|
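As an illustration of the area-under-the-curve meta-loss, the following sketch (our own; the update-formula interface matches the Algorithm 1 sketch above and is assumed, not the paper's) accumulates the objective value at every iterate produced on one sampled task:

def meta_loss(f, x0, pi, num_iters):
  # L(f, A(f, x0)) with L = sum_i f(x^(i)): slow convergence or oscillation
  # keeps the objective curve high for longer and so inflates this loss.
  iterates = [x0]
  total = 0.0
  for _ in range(num_iters):
    iterates.append(iterates[-1] + pi(f, iterates))
    total += f(iterates[-1])
  return total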
1703.00441#12 | Learning to Optimize Neural Nets | # 3.2. RL Preliminaries The goal of reinforcement learning is to learn to interact with an environment in a way that minimizes cumulative costs that are expected to be incurred over time. The environment is formalized as a partially observable Markov decision process (POMDP)¹, which is defined by the tuple (S, O, A, p_i, p, p_o, c, T), where S ⊆ ℝ^D is the set of states, O ⊆ ℝ^{D′} is the set of observations, A ⊆ ℝ^d is the set of actions, p_i(s_0) is the probability density over initial states s_0, p(s_{t+1} | s_t, a_t) is the probability density over the subsequent state s_{t+1} given the current state s_t and action a_t, p_o(o_t | s_t) is the probability density over the current observation o_t given the current state s_t, c : S → ℝ is a function that assigns a cost to each state and T is the time horizon. Often, the probability densities p and p_i are unknown and not given to the learning algorithm. A policy π(a_t | o_t, t) is a conditional probability density over actions a_t given the current observation o_t and time step t. When a policy is independent of t, it is known as a stationary policy. The goal of the reinforcement learning algorithm is to learn a policy π* that minimizes the total expected cost over time. More precisely, π* = argmin_π E_{s_0, a_0, s_1, . . . , s_T} [ Σ_{t=0}^{T} c(s_t) ], where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density p_i(s_0) p_o(o_0 | s_0) Π_{t=0}^{T−1} π(a_t | o_t, t) p(s_{t+1} | s_t, a_t) p_o(o_{t+1} | s_{t+1}). ¹What is described is an undiscounted finite-horizon POMDP with continuous state, observation and action spaces. | 1703.00441#11 | 1703.00441#13 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#13 | Learning to Optimize Neural Nets | To make learning tractable, π is often constrained to lie in a parameterized family. A common assumption is that π(a_t | o_t, t) = N(μ_π(o_t), Σ_π(o_t)), where N(μ, Σ) denotes the density of a Gaussian with mean μ and covariance Σ. The functions μ_π(·) and possibly Σ_π(·) are modelled using function approximators, whose parameters are learned. [...] optimization is challenging. In each iteration, it performs policy optimization on ψ, and uses the resulting policy as supervision to train π. More precisely, GPS solves the following constrained optimization problem: min_{θ,η} E_ψ [ Σ_{t=0}^{T} c(s_t) ] s.t. ψ(a_t | s_t, t; η) = π(a_t | s_t; θ) ∀ a_t, s_t, t, where η and θ denote the parameters of ψ and π respectively, E_ρ[·] denotes the expectation taken with respect to the trajectory induced by a policy ρ and π(a_t | s_t; θ) := ∫ π(a_t | o_t; θ) p_o(o_t | s_t) do_t. | 1703.00441#12 | 1703.00441#14 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#14 | Learning to Optimize Neural Nets | Since there are an infinite number of equality constraints, the problem is relaxed by enforcing equality on the mean actions taken by ψ and π at every time step. So, the problem becomes: min_{θ,η} E_ψ [ Σ_{t=0}^{T} c(s_t) ] s.t. E_ψ[a_t] = E_ψ[E_π[a_t | s_t]] ∀t. This problem is solved using Bregman ADMM (Wang & Banerjee, 2014), which performs the following updates in each iteration: η ← argmin_η Σ_{t=0}^{T} E_ψ[c(s_t) − λ_tᵀ a_t] + ν_t D_t(η, θ); θ ← argmin_θ Σ_{t=0}^{T} λ_tᵀ E_ψ[E_π[a_t | s_t]] + ν_t D_t(θ, η); λ_t ← λ_t + α ν_t (E_ψ[E_π[a_t | s_t]] − E_ψ[a_t]) ∀t, where D_t(η, θ) = E_ψ[D_KL(ψ(a_t | s_t, t; η) ∥ π(a_t | s_t; θ))] and D_t(θ, η) = E_ψ[D_KL(π(a_t | s_t; θ) ∥ ψ(a_t | s_t, t; η))]. | 1703.00441#13 | 1703.00441#15 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#15 | Learning to Optimize Neural Nets | # 3.3. Formulation In our setting, the state s_t consists of the current iterate x^(t) and features Φ(·) that depend on the history of iterates x^(1), . . . , x^(t), (noisy) gradients ∇f̂(x^(1)), . . . , ∇f̂(x^(t)) and (noisy) objective values f̂(x^(1)), . . . , f̂(x^(t)). The action a_t is the step Δx that will be used to update the iterate. The observation o_t excludes x^(t) and consists of features Ψ(·) that depend on the iterates, gradient and objective values from recent iterations, and the previous memory state of the learned optimization algorithm, which takes the form of a recurrent neural net. This memory state can be viewed as a statistic of the previous observations that is learned jointly with the policy. Under this formulation, the initial probability density p_i captures how the initial iterate, gradient and objective value tend to be distributed. The transition probability density p captures how the gradient and objective value are likely to change given the step that is taken currently; in other words, it encodes the local geometry of the training objective functions. Assuming the goal is to learn an optimization algorithm that minimizes the objective function, the cost c of a state s_t = (x^(t), Φ(·))ᵀ is simply the true objective value f(x^(t)). | 1703.00441#14 | 1703.00441#16 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#16 | Learning to Optimize Neural Nets | Any particular policy π(a_t | o_t, t), which generates a_t = Δx at every time step, corresponds to a particular (noisy) update formula π, and therefore a particular (noisy) optimization algorithm. Therefore, learning an optimization algorithm simply reduces to searching for the optimal policy. | 1703.00441#15 | 1703.00441#17 | 1703.00441 | [
"1606.01467"
]
|