| doi (string, 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, ⌀) | journal_ref (string, 8–194, ⌀) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1703.01041 | 30 | # 3.7. Reporting Methodology
To avoid over-fitting, neither the evolutionary algorithm nor the neural network training ever sees the testing set. Each time we refer to "the best model", we mean the model with the highest validation accuracy. However, we always report the test accuracy. This applies not only to the choice of the best individual within an experiment, but also to the choice of the best experiment. Moreover, we only include experiments that we managed to reproduce, unless explicitly noted. Any statistical analysis was fully decided upon before seeing the results of the experiment reported, to avoid tailoring our analysis to our experimental data (Simmons et al., 2011).
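This selection rule can be written down directly. The sketch below is ours, not code from the paper, and the accuracy numbers are invented for illustration: selection looks only at the validation column, so the test set never influences which model is chosen.

```python
def best_test_accuracy(runs):
    """runs: list of (validation_accuracy, test_accuracy) pairs.

    Select by validation accuracy only, then report the selected
    run's test accuracy, as described in Section 3.7."""
    best = max(runs, key=lambda r: r[0])  # argmax over validation accuracy
    return best[1]                        # report the test accuracy

experiments = [(0.941, 0.938), (0.952, 0.946), (0.948, 0.944)]
print(best_test_accuracy(experiments))  # → 0.946
```

Note that the selected run need not have the highest test accuracy; that is the point of the rule.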
We also ran a partial control where the weight-inheritance mechanism is disabled. This run also results in a lower accuracy (92.2%) in the same amount of time (Figure 2), using 9×10¹⁹ FLOPs. This shows that weight inheritance is important in the process.
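A minimal sketch of how weight inheritance can work, assuming a simplified dict-of-arrays representation of a model (our illustration; the paper's mutation machinery operates on its own model encoding): layers untouched by an architectural mutation keep the parent's trained weights, while new or resized layers are freshly initialized.

```python
import numpy as np

def inherit_weights(parent_weights, child_shapes, rng):
    """Copy the parent's trained weights wherever a layer survives the
    mutation with the same shape; reinitialize everything else."""
    child = {}
    for name, shape in child_shapes.items():
        w = parent_weights.get(name)
        if w is not None and w.shape == tuple(shape):
            child[name] = w.copy()  # inherited: child starts partially trained
        else:
            child[name] = rng.normal(0.0, 0.1, size=shape)  # new/resized layer
    return child

rng = np.random.default_rng(0)
parent = {"conv1": rng.normal(size=(3, 3, 16)), "conv2": rng.normal(size=(3, 3, 32))}
# A hypothetical mutation widened conv2 from 32 to 64 filters; conv1 is untouched.
child = inherit_weights(parent, {"conv1": (3, 3, 16), "conv2": (3, 3, 64)}, rng)
```

Disabling inheritance (the control above) corresponds to always taking the reinitialization branch, so every child trains from scratch.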
# 4. Experiments and Results
We want to answer the following questions:
⢠Can a simple one-shot evolutionary process start from trivial initial conditions and yield fully trained models that rival hand-designed architectures? | 1703.01041#30 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 31 |
Finally, we applied our neuro-evolution algorithm, without any changes and with the same meta-parameters, to CIFAR-100. Our only experiment reached an accuracy of 77.0%, using 2×10²⁰ FLOPs. We did not attempt other datasets. Table 1 shows that both the CIFAR-10 and CIFAR-100 results are competitive with modern hand-designed networks.
⢠What are the variability in outcomes, the parallelizabil- ity, and the computation cost of the method?
# 5. Analysis
⢠Can an algorithm designed iterating on CIFAR-10 be ap- plied, without any changes at all, to CIFAR-100 and still produce competitive models?
We used the algorithm in Section 3 to perform several experiments. Each experiment evolves a population in a few days, typified by the example in Figure 1. The figure also contains examples of the architectures discovered, which turn out to be surprisingly simple. Evolution attempts skip connections but frequently rejects them. | 1703.01041#31 |
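The evolutionary loop behind these experiments is pairwise tournament selection, as the trapping argument in Section 5 below spells out: sample two individuals, kill the worse, and replace it with a mutated copy of the better. A toy sketch of that loop follows; `fitness` and `mutate` here are placeholders of our own, whereas the real system trains and evaluates a CNN per individual across 250 parallel workers.

```python
import random

def evolve(population, fitness, mutate, steps, rng):
    """Pairwise tournament selection: each step samples two individuals,
    kills the one with lower fitness, and replaces it with a mutated
    copy of the winner."""
    for _ in range(steps):
        i, j = rng.sample(range(len(population)), 2)
        if fitness(population[i]) < fitness(population[j]):
            i, j = j, i                        # ensure i indexes the winner
        population[j] = mutate(population[i])  # loser replaced by a child
    return population

# Toy usage: individuals are numbers and fitness is the value itself.
rng = random.Random(0)
final = evolve([0.0] * 10, fitness=lambda x: x,
               mutate=lambda x: x + rng.uniform(-0.1, 0.3),
               steps=200, rng=rng)
```

The random-search control discussed below corresponds to dropping the fitness comparison and replacing a randomly chosen individual instead.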
1703.01041 | 32 | Meta-parameters. We observe that populations evolve until they plateau at some local optimum (Figure 2). The fitness (i.e. validation accuracy) value at this optimum varies between experiments (Figure 2, inset). Since not all experiments reach the highest possible value, some populations are getting "trapped" at inferior local optima. This entrapment is affected by two important meta-parameters (i.e. parameters that are not optimized by the algorithm). These are the population size and the number of training steps per individual. Below we discuss them and consider their relationship to local optima.
To get a sense of the variability in outcomes, we repeated the experiment 5 times. Across all 5 experiment runs, the best model by validation accuracy has a testing accuracy of 94.6%. Not all experiments reach the same accuracy, but they get close (μ = 94.1%, σ = 0.4). Fine differences in the experiment outcome may be somewhat distinguishable by validation accuracy (correlation coefficient = 0.894). The total amount of computation across all 5 experiments was 4×10²⁰ FLOPs (or 9×10¹⁹ FLOPs on average per experiment). Each experiment was distributed over 250 parallel workers (Section 3.1). Figure 2 shows the progress of the experiments in detail. | 1703.01041#32 |
1703.01041 | 33 | As a control, we disabled the selection mechanism, thereby reproducing and killing random individuals. This is the form of random search that is most compatible with our infrastructure. The probability distributions for the parameters are implicitly determined by the mutations. This control only achieves an accuracy of 87.3% in the same amount of run time on the same hardware (Figure 2). The total amount of computation was 2×10¹⁷ FLOPs. The low FLOP count is a consequence of random search generating many small, inadequate models that train quickly but consume roughly constant amounts of setup time (not included in the FLOP count). We attempted to minimize this overhead by avoiding unnecessary disk access operations, to no avail: too much time is still spent on a combination of neural network setup, data augmentation, and training-step initialization. | 1703.01041#33 |
1703.01041 | 34 | Effect of population size. Larger populations explore the space of models more thoroughly, and this helps reach better optima (Figure 3, left). Note, in particular, that a population of size 2 can get trapped at very low fitness values. Some intuition about this can be gained by considering the fate of a super-fit individual, i.e. an individual such that any one architectural mutation reduces its fitness (even though a sequence of many mutations may improve it). In the case of a population of size 2, if the super-fit individual wins once, it will win every time. After the first win, it will produce a child that is one mutation away. By definition of super-fit, therefore, this child is inferior⁴. Consequently, in the next round of tournament selection, the super-fit individual competes against its child and wins again. This cycle repeats forever and the population is trapped. Even if a sequence of two mutations would allow for an "escape" from the local optimum, such a sequence can never take place. | 1703.01041#34 |
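The trapping dynamic described above can be checked with a toy simulation on an invented fitness landscape (the landscape and all numbers here are ours, purely illustrative): position 0 plays the role of the super-fit individual, since both single-mutation neighbours are worse, while a better optimum sits two mutations away at position 2.

```python
import random

# Invented landscape: 0 is "super-fit" (both neighbours are worse),
# but a better optimum lies two mutations away at position 2.
FITNESS = {-2: 1, -1: 2, 0: 10, 1: 3, 2: 20}

def run(pop_size, steps, seed):
    rng = random.Random(seed)
    pop = [0] * pop_size                 # everyone starts at the super-fit point
    for _ in range(steps):
        i, j = rng.sample(range(pop_size), 2)
        if FITNESS.get(pop[i], 0) < FITNESS.get(pop[j], 0):
            i, j = j, i                  # i indexes the tournament winner
        pop[j] = pop[i] + rng.choice([-1, 1])  # child is one mutation away
    return max(FITNESS.get(g, 0) for g in pop)

print(run(pop_size=2, steps=500, seed=0))    # → 10: a size-2 population stays trapped
print(run(pop_size=100, steps=5000, seed=0)) # larger populations usually reach 20
```

With two individuals, the super-fit one always beats its own one-mutation child, exactly as argued; with a larger population, two inferior children can meet in a tournament, and the winner's child can take the second mutation step.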
This is only a rough argument to heuristically suggest why a population of size 2 is easily trapped. More generally, Figure 3 (left) empirically demonstrates a benefit from an increase in population size. Theoretical analyses of this dependence are quite complex and assume very specific models of population dynamics; often larger populations are better at handling local optima, at least beyond a size threshold (Weinreich & Chao (2005) and references | 1703.01041#35 |
1703.01041 | 36 | ⁴Except after identity or learning-rate mutations, but these produce a child with the same architecture as the parent.
[Figure 1 plot: test accuracy (%) vs. wall time (hours, 0.9–256.2), with diagrams of discovered architectures ending in Global Pool → Output.]
Figure 1. Progress of an evolution experiment. Each dot represents an individual in the population. Blue dots (darker, top-right) are alive. The rest have been killed. The four diagrams show examples of discovered architectures. These correspond to the best individual (rightmost) and three of its ancestors. The best individual was selected by its validation accuracy. Evolution sometimes stacks convolutions without any nonlinearity in between ("C", white background), which are mathematically equivalent to a single linear operation. Unlike typical hand-designed architectures, some convolutions are followed by more than one nonlinear function ("C+BN+R+BN+R+...", orange background). | 1703.01041#36 |
1703.01041 | 37 | therein).

Effect of number of training steps. The other meta-parameter is the number T of training steps for each individual. Accuracy increases with T (Figure 3, right). Larger T means an individual needs to undergo fewer identity mutations to reach a given level of training.

Escaping local optima. While we might increase population size or number of steps to prevent a trapped population from forming, we can also free an already trapped population. For example, increasing the mutation rate or resetting all the weights of a population (Figure 4) work well but are quite costly (more details in Supplementary Section S3).

Recombination. None of the results presented so far used recombination. However, we explored three forms of recombination in additional experiments. Following Tuson & Ross (1998), we attempted to evolve the mutation probability distribution too. On top of this, we employed a recombination strategy by which a child could inherit structure from one parent and mutation probabilities from another. The goal was to allow individuals that progressed well due to good mutation choices to quickly propagate | 1703.01041#37 |
1703.01041 | 38 | such choices to others. In a separate experiment, we attempted recombining the trained weights from two parents, in the hope that each parent may have learned different concepts from the training data. In a third experiment, we recombined structures so that the child fused the architectures of both parents side-by-side, generating wide models fast. While none of these approaches improved on our recombination-free results, further study seems warranted.
# 6. Conclusion
In this paper we have shown that (i) neuro-evolution is capable of constructing large, accurate networks for two challenging and popular image classification benchmarks; (ii) neuro-evolution can do this starting from trivial initial conditions while searching a very large space; (iii) the process, once started, needs no experimenter participation; and (iv) the process yields fully trained models. Completely training models required weight inheritance (Section 3.6). In contrast to reinforcement learning, evolution provides a natural framework for weight inheritance: mutations can be constructed to guarantee a large degree of similarity be-
[Figure 2 plot: test accuracy (%) vs. wall-clock time (hours); curves for evolution, evolution without weight inheritance, and random search.] | 1703.01041#38 |
1703.01041 | 39 |
Figure 2. Repeatability of results and controls. In this plot, the vertical axis at wall-time t is defined as the test accuracy of the individual with the highest validation accuracy that became alive at or before t. The inset magnifies a portion of the main graph. The curves show the progress of various experiments, as follows. The top line (solid, blue) shows the mean test accuracy across 5 large-scale evolution experiments. The shaded area around this top line has a width of ±2σ (clearer in inset). The next line down (dashed, orange, main graph and inset) represents a single experiment in which weight-inheritance was disabled, so every individual has to train from random weights. The lowest curve (dotted-dashed) is a random-search control. All experiments occupied the same amount and type of hardware. A small amount of noise in the generalization from the validation to the test set explains why the lines are not monotonically increasing. Note the narrow width of the ±2σ area (main graph and inset), which shows that the high accuracies obtained in evolution experiments are repeatable. | 1703.01041#39 |
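The quantity the Figure 2 caption defines for its vertical axis (test accuracy of the best-by-validation individual alive at or before each time t) can be computed directly; the individual tuples below are invented for illustration.

```python
def progress_curve(individuals, times):
    """individuals: list of (birth_time, validation_acc, test_acc).
    For each query time t, report the test accuracy of the individual
    with the highest *validation* accuracy born at or before t."""
    curve = []
    for t in times:
        alive = [ind for ind in individuals if ind[0] <= t]
        if not alive:
            curve.append(None)          # nothing alive yet
            continue
        best = max(alive, key=lambda ind: ind[1])  # argmax over validation
        curve.append(best[2])                      # plot its test accuracy
    return curve

pop = [(1.0, 0.80, 0.79), (2.0, 0.90, 0.88), (3.0, 0.88, 0.91)]
print(progress_curve(pop, [0.5, 1.5, 2.5, 3.5]))  # → [None, 0.79, 0.88, 0.88]
```

At t = 3.5 the curve stays at 0.88 even though a higher-test-accuracy individual exists: selection is by validation accuracy, which is also why the plotted lines need not be monotone.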
1703.01041 | 40 | tween the original and mutated models, as we did. Evolution also has fewer tunable meta-parameters with a fairly predictable effect on the variance of the results, which can be made small.
While we did not focus on reducing computation costs, we hope that future algorithmic and hardware improvements will allow a more economical implementation. In that case, evolution would become an appealing approach to neuro-discovery for reasons beyond the scope of this paper. For example, it "hits the ground running", improving on arbitrary initial models as soon as the experiment begins. The mutations used can implement recent advances in the field and can be introduced without having to restart an experiment. Furthermore, recombination can merge improvements developed by different individuals, even if they come from other populations. Moreover, it may be possible to combine neuro-evolution with other automatic architecture-discovery methods.
[Figure 3 plots: test accuracy (%) vs. population size (left, 2–1000) and vs. training steps per individual (right, 256–25600).] | 1703.01041#40 |
Figure 3. Dependence on meta-parameters. In both graphs, each circle represents the result of a full evolution experiment. Both vertical axes show the test accuracy for the individual with the highest validation accuracy at the end of the experiment. All populations evolved for the same total wall-clock time. There are 5 data points at each horizontal-axis value. LEFT: effect of population size. To economize resources, in these experiments the number of individual training steps is only 2560. Note how the accuracy increases with population size. RIGHT: effect of number of training steps per individual. Note how the accuracy increases with more steps.
[Figure 4 plots: accuracy (%) vs. wall time (hours, 167–733); top panel labeled "Increased mutation rate", bottom panel shows weight-resetting events.] | 1703.01041#41 |
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 42 | 92.0 87.3 accuracy (%) ~ 3 ° z ree Fra i} 16: 333 550 733 wall time (hours)
Figure 4. Escaping local optima in two experiments. We used smaller populations and fewer training steps per individual (2560) to make it more likely for a population to get trapped and to reduce resource usage. Each dot represents an individual. The vertical axis is the accuracy. TOP: example of a population of size 100 escaping a local optimum by using a period of increased mutation rate in the middle (Section 5). BOTTOM: example of a population of size 50 escaping a local optimum by means of three consecutive weight resetting events (Section 5). Details in Supplementary Section S3.
# Acknowledgements
We wish to thank Vincent Vanhoucke, Megan Kacholia, Rajat Monga, and especially Jeff Dean for their support and valuable input; Geoffrey Hinton, Samy Bengio, Thomas Breuel, Mark DePristo, Vishy Tirumalashetty, Martin Abadi, Noam Shazeer, Yoram Singer, Dumitru Erhan, Pierre Sermanet, Xiaoqiang Zheng, Shan Carter and Vijay Vasudevan for helpful discussions; Thomas Breuel, Xin Pan and Andy Davis for coding contributions; and the larger Google Brain team for help with TensorFlow and training vision models.
# References
Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron C, and Bengio, Yoshua. Maxout networks. International Conference on Machine Learning, 28:1319–1327, 2013.

Gruau, Frederic. Genetic synthesis of modular neural networks. In Proceedings of the 5th International Conference on Genetic Algorithms, pp. 318–325. Morgan Kaufmann Publishers Inc., 1993.
Han, Song, Pool, Jeff, Tran, John, and Dally, William. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015.

Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.
Baker, Bowen, Gupta, Otkrist, Naik, Nikhil, and Raskar, Ramesh. Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167, 2016.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Bayer, Justin, Wierstra, Daan, Togelius, Julian, and Schmidhuber, Jürgen. Evolving memory cell structures for sequence learning. In International Conference on Artificial Neural Networks, pp. 755–764. Springer, 2009.

Huang, Gao, Liu, Zhuang, Weinberger, Kilian Q, and van der Maaten, Laurens. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.

Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012.
Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian Q. Deep networks with stochastic depth. In European Conference on Computer Vision, pp. 646–661. Springer, 2016b.

Breuel, Thomas and Shafait, Faisal. Automlp: Simple, effective, fully automated learning rate and size adjustment. In The Learning Workshop. Utah, 2010.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Fernando, Chrisantha, Banarse, Dylan, Reynolds, Malcolm, Besse, Frederic, Pfau, David, Jaderberg, Max, Lanctot, Marc, and Wierstra, Daan. Convolution by evolution: Differentiable pattern producing networks. In Proceedings of the 2016 on Genetic and Evolutionary Computation Conference, pp. 109–116. ACM, 2016.
Kim, Minyoung and Rigazio, Luca. Deep clustered convolutional kernels. arXiv preprint arXiv:1503.01824, 2015.

Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. 2009.

Goldberg, David E and Deb, Kalyanmoy. A comparative analysis of selection schemes used in genetic algorithms. Foundations of genetic algorithms, 1:69–93, 1991.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Goldberg, David E, Richardson, Jon, et al. Genetic algorithms with sharing for multimodal function optimization. In Genetic algorithms and their applications: Proceedings of the Second International Conference on Genetic Algorithms, pp. 41–49. Hillsdale, NJ: Lawrence Erlbaum, 1987.

LeCun, Yann, Cortes, Corinna, and Burges, Christopher JC. The mnist database of handwritten digits, 1998.
Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick W, Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. In AISTATS, volume 2, pp. 5, 2015.

Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Stanley, Kenneth O. Compositional pattern producing networks: A novel abstraction of development. Genetic programming and evolvable machines, 8(2):131–162, 2007.

Miller, Geoffrey F, Todd, Peter M, and Hegde, Shailesh U. Designing neural networks using genetic algorithms. In Proceedings of the third international conference on Genetic algorithms, pp. 379–384. Morgan Kaufmann Publishers Inc., 1989.

Stanley, Kenneth O and Miikkulainen, Risto. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99–127, 2002.
Morse, Gregory and Stanley, Kenneth O. Simple evolutionary optimization can rival stochastic gradient descent in neural networks. In Proceedings of the 2016 on Genetic and Evolutionary Computation Conference, pp. 477–484. ACM, 2016.

Pugh, Justin K and Stanley, Kenneth O. Evolving multimodal controllers with hyperneat. In Proceedings of the 15th annual conference on Genetic and evolutionary computation, pp. 735–742. ACM, 2013.

Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988.

Stanley, Kenneth O, D'Ambrosio, David B, and Gauci, Jason. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185–212, 2009.

Sutskever, Ilya, Martens, James, Dahl, George E, and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. ICML (3), 28:1139–1147, 2013.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.

Saxena, Shreyas and Verbeek, Jakob. Convolutional neural fabrics. In Advances In Neural Information Processing Systems, pp. 4053–4061, 2016.

Tuson, Andrew and Ross, Peter. Adapting operator settings in genetic algorithms. Evolutionary computation, 6(2):161–184, 1998.

Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
Simmons, Joseph P, Nelson, Leif D, and Simonsohn, Uri. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11):1359–1366, 2011.

Verbancsics, Phillip and Harguess, Josh. Generative neuroevolution for deep learning. arXiv preprint arXiv:1312.5355, 2013.

Weinreich, Daniel M and Chao, Lin. Rapid evolutionary escape by large populations from local fitness peaks is likely in nature. Evolution, 59(6):1175–1182, 2005.

Weyand, Tobias, Kostrikov, Ilya, and Philbin, James. Planet-photo geolocation with convolutional neural networks. In European Conference on Computer Vision, pp. 37–55. Springer, 2016.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pp. 2951–2959, 2012.
Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V., Norouzi, Mohammad, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Zagoruyko, Sergey and Komodakis, Nikos. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.

Springenberg, Jost Tobias, Dosovitskiy, Alexey, Brox, Thomas, and Riedmiller, Martin. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. arXiv preprint arXiv:1505.00387, 2015.

Zaremba, Wojciech. An empirical exploration of recurrent network architectures. 2015.

Zoph, Barret and Le, Quoc V. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.

# Large-Scale Evolution of Image Classifiers

# Supplementary Material

# S1. Methods Details
This section contains additional implementation details, roughly following the order in Section 3. Short code snippets illustrate the ideas. The code is not intended to run on its own and it has been highly edited for clarity.
In our implementation, each worker runs an outer loop that is responsible for selecting a pair of random individuals from the population. The individual with the highest fitness usually becomes a parent and the one with the lowest fitness is usually killed (Section 3.1). Occasionally, either of these two actions is not carried out in order to keep the population size close to a set-point:
def evolve_population(self):
  # Iterate indefinitely.
  while True:
    # Select two random individuals from the population.
    valid_individuals = []
    for individual in self.load_individuals():  # Only loads the IDs and states.
      if individual.state in [TRAINING, ALIVE]:
        valid_individuals.append(individual)
    individual_pair = random.sample(valid_individuals, 2)
    for individual in individual_pair:
      # Sync changes from other workers from file-system. Loads everything else.
      individual.update_if_necessary()
      # Ensure the individual is fully trained.
      if individual.state == TRAINING:
        self._train(individual)
    # Select by fitness (accuracy).
    individual_pair.sort(key=lambda i: i.fitness, reverse=True)
    better_individual = individual_pair[0]
    worse_individual = individual_pair[1]
    # If the population is not too small, kill the worst of the pair.
    if self._population_size() >= self._population_size_setpoint:
      self._kill_individual(worse_individual)
    # If the population is not too large, reproduce the best of the pair.
    if self._population_size() < self._population_size_setpoint:
      self._reproduce_and_train_individual(better_individual)

Much of the code is wrapped in try-except blocks, so that a worker can abandon the current task and keep going when it encounters a recoverable error:
def evolve_population(self):
  while True:
    try:
      # Select two random individuals from the population.
      ...
    except exceptions.PopulationTooSmallException:
      self._create_new_individual()
      continue
    except exceptions.ConcurrencyException:
      # Another worker did something that interfered with the action of this worker.
      # Abandon the current task and keep going.
      continue
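To make the pattern above concrete, here is a self-contained sketch of a worker loop that survives recoverable failures. This is not the paper's infrastructure; the exception names below are stand-ins for the `exceptions.PopulationTooSmallException` and `exceptions.ConcurrencyException` used in the snippet.

```python
import random

class PopulationTooSmallError(Exception):
    """Stand-in: fewer than two individuals are available."""

class ConcurrencyError(Exception):
    """Stand-in: another worker interfered with this worker's action."""

def worker_loop(step, max_steps=100):
    """Runs `step` repeatedly, recovering from known failure modes.

    Mirrors the try-except pattern above: a recoverable error abandons the
    current iteration and the loop keeps going.
    """
    recovered = 0
    for _ in range(max_steps):
        try:
            step()
        except PopulationTooSmallError:
            recovered += 1  # e.g. create a new individual, then continue.
        except ConcurrencyError:
            recovered += 1  # Abandon the contested task and keep going.
    return recovered

# A step that fails intermittently but never kills the loop.
rng = random.Random(0)
def flaky_step():
    r = rng.random()
    if r < 0.2:
        raise PopulationTooSmallError()
    if r < 0.4:
        raise ConcurrencyError()

print(worker_loop(flaky_step))  # Number of recovered failures out of 100 steps.
```

The key design point is that failure handling is local to one iteration: no failure of a single selection/training attempt can take down the worker, which is what lets many workers run unattended for long periods.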
The encoding for an individual is represented by a serializable DNA class instance containing all information except for the trained weights (Section 3.2). For all results in this paper, this encoding is a directed, acyclic graph where edges represent convolutions and vertices represent nonlinearities. This is a sketch of the DNA class:
class DNA(object):
  def __init__(self, dna_proto):
    """Initializes the `DNA` instance from a protocol buffer.

    The `dna_proto` is a protocol buffer used to restore the DNA state from
    disk. Together with the corresponding `to_proto` method, they allow for a
    serialization-deserialization mechanism.
    """
    # Allows evolving the learning rate, i.e. exploring the space of
    # learning rate schedules.
    self.learning_rate = dna_proto.learning_rate

    self._vertices = {}  # String vertex ID to `Vertex` instance.
    for vertex_id in dna_proto.vertices:
      self._vertices[vertex_id] = Vertex(
          vertex_proto=dna_proto.vertices[vertex_id])

    self._edges = {}  # String edge ID to `Edge` instance.
    for edge_id in dna_proto.edges:
      self._edges[edge_id] = Edge(edge_proto=dna_proto.edges[edge_id])
...
  def to_proto(self):
    """Returns this instance in protocol buffer form."""
    dna_proto = dna_pb2.DnaProto(learning_rate=self.learning_rate)
    for vertex_id, vertex in self._vertices.iteritems():
      dna_proto.vertices[vertex_id].CopyFrom(vertex.to_proto())
    for edge_id, edge in self._edges.iteritems():
      dna_proto.edges[edge_id].CopyFrom(edge.to_proto())
    ...
    return dna_proto

  def add_edge(self, from_vertex_id, to_vertex_id, edge_type, edge_id):
    """Adds an edge to the DNA graph, ensuring internal consistency."""
    # `EdgeProto` defines defaults for other attributes.
    edge = Edge(EdgeProto(
        from_vertex=from_vertex_id, to_vertex=to_vertex_id, type=edge_type))
    self._edges[edge_id] = edge
    self._vertices[from_vertex_id].edges_out.add(edge_id)
    self._vertices[to_vertex_id].edges_in.add(edge_id)
    return edge
  # Other methods like `add_edge` to manipulate the graph structure.
  ...
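The bookkeeping that `add_edge` performs can be illustrated with a minimal, dict-based stand-in for the `DNA` class (plain dicts replace the protocol buffers; all names here are illustrative, not the paper's):

```python
import copy

class ToyDNA:
    """Minimal stand-in for the DNA class above, showing the bidirectional
    edges_in/edges_out bookkeeping that keeps the graph consistent."""

    def __init__(self):
        self.vertices = {}  # vertex_id -> {"edges_in": set, "edges_out": set}
        self.edges = {}     # edge_id -> {"from": ..., "to": ..., "type": ...}

    def add_vertex(self, vertex_id):
        self.vertices[vertex_id] = {"edges_in": set(), "edges_out": set()}

    def add_edge(self, from_vertex_id, to_vertex_id, edge_type, edge_id):
        """Adds an edge while updating both endpoint adjacency sets."""
        self.edges[edge_id] = {
            "from": from_vertex_id, "to": to_vertex_id, "type": edge_type}
        self.vertices[from_vertex_id]["edges_out"].add(edge_id)
        self.vertices[to_vertex_id]["edges_in"].add(edge_id)

dna = ToyDNA()
for v in ("input", "conv1", "output"):
    dna.add_vertex(v)
dna.add_edge("input", "conv1", "conv", "e0")
dna.add_edge("conv1", "output", "conv", "e1")

# A mutation copies the DNA first, so the parent is never modified in place.
child = copy.deepcopy(dna)
child.add_edge("input", "output", "identity", "e2")  # A skip connection.
print(len(dna.edges), len(child.edges))  # -> 2 3
```

The `deepcopy`-then-mutate step mirrors how the mutations below operate: the parent's encoding is left untouched, and only the copy is altered.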
The Vertex class looks like this:

class Vertex(object):

  def __init__(self, vertex_proto):
    ...
    # The type of activations.
    if vertex_proto.HasField('linear'):
      self.type = LINEAR  # Linear activations.
    elif vertex_proto.HasField('bn_relu'):
      self.type = BN_RELU  # ReLU activations with batch-normalization.
    else:
      raise NotImplementedError()

    # Some parts of the graph can be prevented from being acted upon by
    # mutations. The following boolean flags control this.
    self.inputs_mutable = vertex_proto.inputs_mutable
    self.outputs_mutable = vertex_proto.outputs_mutable
    self.properties_mutable = vertex_proto.properties_mutable

    # Each vertex represents a 2^s x 2^s x d block of nodes. s and d are
    # positive integers computed dynamically from the in-edges. s stands for
    # "scale", so that 2^s x 2^s is the spatial size of the activations. d
    # stands for "depth", the number of channels.

  def to_proto(self):
    ...

The Edge class looks like this:

class Edge(object):

  def __init__(self, edge_proto):
    # Relationship to the rest of the graph.
    self.from_vertex = edge_proto.from_vertex  # Source vertex ID.
    self.to_vertex = edge_proto.to_vertex      # Destination vertex ID.

    if edge_proto.HasField('conv'):
      # In this case, the edge represents a convolution.
      self.type = CONV
      # Controls the depth (i.e. number of channels) in the output, relative
      # to the input. For example, if there is only one input edge with a
      # depth of 16 channels and `self._depth_factor` is 2, then this
      # convolution will result in an output depth of 32 channels. Multiple
      # inputs with conflicting depth must undergo depth resolution first.
      self.depth_factor = edge_proto.conv.depth_factor
      # Control the shape of the convolution filters (i.e. transfer function).
      # This parameterization ensures that the filter width and height are odd
      # numbers: filter_width = 2 * filter_half_width + 1.
      self.filter_half_width = edge_proto.conv.filter_half_width
      self.filter_half_height = edge_proto.conv.filter_half_height

      # Controls the strides of the convolution. It will be 2^stride_scale.
      # Note that conflicting input scales must undergo scale resolution. This
      # controls the spatial scale of the output activations relative to the
      # spatial scale of the input activations.
      self.stride_scale = edge_proto.conv.stride_scale

    elif edge_proto.HasField('identity'):
      self.type = IDENTITY
    else:
      raise NotImplementedError()

    # In case depth or scale resolution is necessary due to conflicts in
    # inputs, these integer parameters determine which of the inputs takes
    # precedence in deciding the resolved depth or scale.
    self.depth_precedence = edge_proto.depth_precedence
    self.scale_precedence = edge_proto.scale_precedence

  def to_proto(self):
    ...
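The shape arithmetic implied by the comments above can be made explicit. The sketch below (not the paper's code; resolution of conflicting inputs is deliberately omitted) shows how a conv edge transforms a 2^s x 2^s x d activation and why filter dimensions are always odd:

```python
def conv_geometry(s_in, depth_in, depth_factor, stride_scale,
                  filter_half_width, filter_half_height):
    """Activations are 2^s x 2^s x d blocks; a conv edge multiplies depth by
    `depth_factor`, has stride 2^stride_scale, and its filter width/height
    are parameterized as 2 * half + 1, which forces them to be odd."""
    filter_w = 2 * filter_half_width + 1
    filter_h = 2 * filter_half_height + 1
    s_out = s_in - stride_scale          # Spatial scale drops with stride.
    depth_out = depth_in * depth_factor  # Depth scales by the factor.
    return (2**s_out, 2**s_out, depth_out), (filter_w, filter_h)

# A 32x32x16 activation (s=5, d=16) through a stride-2, depth-doubling
# 3x3 convolution:
shape, filters = conv_geometry(5, 16, depth_factor=2, stride_scale=1,
                               filter_half_width=1, filter_half_height=1)
print(shape, filters)  # -> (16, 16, 32) (3, 3)
```

Parameterizing filters by their half-width guarantees a well-defined center pixel, and parameterizing strides as powers of two keeps every activation on the 2^s spatial grid that the Vertex comments describe.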
Mutations act on DNA instances. The set of mutations restricts the space explored somewhat (Section 3.2). The following are some example mutations. The AlterLearningRateMutation simply randomly modifies the learning-rate attribute in the DNA:
class AlterLearningRateMutation(Mutation):
  """Mutation that modifies the learning rate."""

  def mutate(self, dna):
    mutated_dna = copy.deepcopy(dna)
    # Mutate the learning rate by a random factor between 0.5 and 2.0,
    # uniformly distributed in log scale.
    factor = 2**random.uniform(-1.0, 1.0)
    mutated_dna.learning_rate = dna.learning_rate * factor
    return mutated_dna
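The arithmetic of this mutation is worth checking numerically: `2**uniform(-1, 1)` is uniform in log space, so every mutated rate stays within a factor of two of the original, and up-scaling and down-scaling are symmetric. A self-contained sketch (the function name below is illustrative):

```python
import random

def mutate_learning_rate(learning_rate, rng):
    """Sketch of the arithmetic in AlterLearningRateMutation: scale the rate
    by a factor in [0.5, 2.0], uniformly distributed in log scale."""
    factor = 2**rng.uniform(-1.0, 1.0)
    return learning_rate * factor

rng = random.Random(0)
samples = [mutate_learning_rate(0.1, rng) for _ in range(10000)]
# Every mutated rate stays within a factor of two of the original:
print(min(samples) >= 0.05, max(samples) <= 0.2)  # -> True True
```

Because the exponent is symmetric around zero, the median mutated rate equals the original rate: halving and doubling are equally likely, so repeated mutations perform an unbiased random walk in log-learning-rate space.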
Many mutations modify the structure. Mutations to insert and excise vertex-edge pairs build up a main convolutional column, while mutations to add and remove edges can handle the skip connections. For example, the AddEdgeMutation can add a skip connection between random vertices.
class AddEdgeMutation(Mutation):
  """Adds a single edge to the graph."""

  def mutate(self, dna):
    # Try the candidates in random order until one has the right connectivity.
    for from_vertex_id, to_vertex_id in self._vertex_pair_candidates(dna):
      mutated_dna = copy.deepcopy(dna)
      if self._mutate_structure(mutated_dna, from_vertex_id, to_vertex_id):
        return mutated_dna
    raise exceptions.MutationException()  # Try another mutation.

  def _vertex_pair_candidates(self, dna):
    """Yields connectable vertex pairs."""
    from_vertex_ids = _find_allowed_vertices(dna, self._to_regex, ...)
    if not from_vertex_ids:
      raise exceptions.MutationException()  # Try another mutation.
    random.shuffle(from_vertex_ids)

    to_vertex_ids = _find_allowed_vertices(dna, self._from_regex, ...)
    if not to_vertex_ids:
      raise exceptions.MutationException()  # Try another mutation.
    random.shuffle(to_vertex_ids)

    for to_vertex_id in to_vertex_ids:
      # Avoid back-connections.
      disallowed_from_vertex_ids, _ = topology.propagated_set(to_vertex_id)
      for from_vertex_id in from_vertex_ids:
        if from_vertex_id in disallowed_from_vertex_ids:
          continue
        # This pair does not generate a cycle, so we yield it.
        yield from_vertex_id, to_vertex_id

  def _mutate_structure(self, dna, from_vertex_id, to_vertex_id):
    """Adds the edge to the DNA instance."""
    edge_id = _random_id()
    edge_type = random.choice(self._edge_types)
    if dna.has_edge(from_vertex_id, to_vertex_id):
      return False
    else:
      new_edge = dna.add_edge(from_vertex_id, to_vertex_id, edge_type, edge_id)
      ...
      return True
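The cycle check above hinges on `topology.propagated_set`, which the snippet does not define. Plausibly it computes forward reachability: any vertex reachable from `to_vertex_id` must not become the source of the new edge, or the edge would close a cycle. A self-contained sketch of that idea (here returning a single set rather than the tuple the snippet unpacks; the graph is a plain dict of successor lists):

```python
def propagated_set(graph, start_vertex_id):
    """Set of vertices reachable from `start_vertex_id` by following edges
    forward (iterative depth-first search)."""
    reachable, stack = set(), [start_vertex_id]
    while stack:
        v = stack.pop()
        if v in reachable:
            continue
        reachable.add(v)
        stack.extend(graph.get(v, []))
    return reachable

def is_acyclic_candidate(graph, from_v, to_v):
    """A new edge from_v -> to_v is safe iff from_v is not downstream of
    to_v; otherwise the new edge would complete a cycle."""
    return from_v not in propagated_set(graph, to_v)

# Linear backbone a -> b -> c, plus an existing skip a -> c.
graph = {"a": ["b", "c"], "b": ["c"], "c": []}
print(is_acyclic_candidate(graph, "a", "c"))  # -> True  (forward edge)
print(is_acyclic_candidate(graph, "c", "a"))  # -> False (would create a cycle)
```

Checking reachability from the destination (rather than re-running a full topological sort after each candidate) keeps the per-candidate cost proportional to the size of the downstream subgraph.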
For clarity, we omitted the details of a vertex ID targeting mechanism based on regular expressions, which is used to constrain where the additional edges are placed. This mechanism ensured the skip connections only joined points in the "main convolutional backbone" of the convnet. The precedence range is used to give the main backbone precedence over the skip connections when resolving scale and depth conflicts in the presence of multiple incoming edges to a vertex. Also omitted are details about the attributes of the edge to add.
To evaluate an individual's fitness, its DNA is unfolded into a TensorFlow model by the Model class. This describes how each Vertex and Edge should be interpreted. For example:
1703.01041 | 67 | class Model(object): ... def _compute_vertex_nonlinearity(self, tensor, vertex): """Applies the necessary vertex operations depending on the vertex type.""" if vertex.type == LINEAR: pass elif vertex.type == BN_RELU: tensor = slim.batch_norm( inputs=tensor, decay=0.9, center=True, scale=True, epsilon=self._batch_norm_epsilon, activation_fn=None, updates_collections=None, is_training=self.is_training, scope=âbatch_normâ) tensor = tf.maximum(tensor, vertex.leakiness * tensor, name=âreluâ) else: raise NotImplementedError() return tensor def _compute_edge_connection(self, tensor, edge, init_scale): """Applies the necessary edge connection ops depending on the edge type.""" scale, depth = self._get_scale_and_depth(tensor) if edge.type == CONV: scale_out = scale depth_out = edge.depth_out(depth) stride = 2**edge.stride_scale # âinit_scaleâ is used to normalize the initial weights in the case of # multiple incoming | 1703.01041#67 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 68 |
      stride = 2**edge.stride_scale
      # 'init_scale' is used to normalize the initial weights in the case of
      # multiple incoming edges.
      weights_initializer = slim.variance_scaling_initializer(
          factor=2.0 * init_scale**2, uniform=False)
      weights_regularizer = slim.l2_regularizer(
          weight=self._dna.weight_decay_rate)
      tensor = slim.conv2d(
          inputs=tensor, num_outputs=depth_out,
          kernel_size=[edge.filter_width(), edge.filter_height()],
          stride=stride, weights_initializer=weights_initializer,
          weights_regularizer=weights_regularizer,
          biases_initializer=None, activation_fn=None, scope='conv')
    elif edge.type == IDENTITY:
      pass
    else:
      raise NotImplementedError()
| 1703.01041#68 | Large-Scale Evolution of Image Classifiers |
1703.01041 | 69 |
    return tensor
The training and evaluation (Section 3.4) is done in a fairly standard way, similar to that in the tensorflow.org tutorials for image models. The individual's fitness is the accuracy on a held-out validation dataset, as described in the main text.
Parents are able to pass some of their learned weights to their children (Section 3.6). When a child is constructed from a parent, it inherits IDs for the different sets of trainable weights (convolution filters, batch norm shifts, etc.). These IDs are embedded in the TensorFlow variable names. When the child's weights are initialized, those that have a matching ID in the parent are inherited, provided they have the same shape:
graph = tf.Graph()
with graph.as_default():
  # Build the neural network using the 'Model' class and the 'DNA' instance.
  ...
tf.Session.reset(self._master)
with tf.Session(self._master, graph=graph) as sess:
  # Initialize all variables
  ...
| 1703.01041#69 | Large-Scale Evolution of Image Classifiers |
1703.01041 | 70 |
tf.Session.reset(self._master)
with tf.Session(self._master, graph=graph) as sess:
  # Initialize all variables
  ...
  # Make sure we can inherit batch-norm variables properly.
  # The TF-slim batch-norm variables must be handled separately here because some
  # of them are not trainable (the moving averages).
  batch_norm_extras = [x for x in tf.all_variables() if (
      x.name.find('moving_var') != -1 or x.name.find('moving_mean') != -1)]
  # These are the variables that we will attempt to inherit from the parent.
  vars_to_restore = tf.trainable_variables() + batch_norm_extras
  # Copy as many of the weights as possible.
  if mutated_weights:
    assignments = []
    for var in vars_to_restore:
      stripped_name = var.name.split(':')[0]
      if stripped_name in mutated_weights:
        shape_mutated = mutated_weights[stripped_name].shape
        shape_needed = var.get_shape()
        if shape_mutated == shape_needed:
          assignments.append(var.assign(mutated_weights[stripped_name]))
    sess.run(assignments)
| 1703.01041#70 | Large-Scale Evolution of Image Classifiers |
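In outline, the inheritance rule above is: copy a parent tensor into the child whenever the variable ID matches and the shapes agree, and otherwise keep the child's fresh initialization. A minimal framework-free sketch of that rule (the dictionary layout and helper names here are illustrative, not from the paper's code):

```python
def shape_of(nested):
    """Shape of a nested list, e.g. [[1.0, 2.0]] -> (1, 2)."""
    shape = []
    while isinstance(nested, list):
        shape.append(len(nested))
        nested = nested[0]
    return tuple(shape)

def inherit_weights(child_weights, parent_weights):
    """Copy parent values into the child wherever the variable ID (the
    dict key) matches and the shapes agree; otherwise the child keeps
    its fresh initialization. Returns how many tensors were inherited."""
    inherited = 0
    for name, child_value in child_weights.items():
        parent_value = parent_weights.get(name)
        if parent_value is not None and shape_of(parent_value) == shape_of(child_value):
            child_weights[name] = parent_value
            inherited += 1
    return inherited
```

A mutation that changes a layer's depth changes the corresponding shape, so that tensor (and only that tensor) falls back to fresh initialization.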
1703.01041 | 71 | # S2. FLOPs estimation
This section describes how we estimate the number of floating point operations (FLOPs) required for an entire evolution experiment. To obtain the total FLOPs, we sum the FLOPs for each individual ever constructed. An individual's FLOPs are the sum of its training and validation FLOPs. Namely, the individual FLOPs are given by Ft·Nt + Fv·Nv, where Ft is the FLOPs in one training step, Nt is the number of training steps, Fv is the FLOPs required to evaluate one validation batch of examples and Nv is the number of validation batches.
The number of training steps and the number of validation batches are known in advance and are constant throughout the experiment. Ft was obtained analytically as the sum of the FLOPs required to compute each operation executed during training (that is, each node in the TensorFlow graph). Fv was found analogously.
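The accounting described above can be sketched in a few lines (the numbers in the test below are placeholders, not the paper's actual measurements):

```python
def individual_flops(f_train_step, n_train_steps, f_valid_batch, n_valid_batches):
    """FLOPs for one individual: Ft*Nt + Fv*Nv."""
    return f_train_step * n_train_steps + f_valid_batch * n_valid_batches

def experiment_flops(individuals):
    """Total FLOPs for an experiment: the sum over every individual ever
    constructed, each given as an (Ft, Nt, Fv, Nv) tuple."""
    return sum(individual_flops(*ind) for ind in individuals)
```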
Below is the code snippet that computes FLOPs for the training of one individual, for example.
import tensorflow as tf
tfprof_logger = tf.contrib.tfprof.python.tools.tfprof.tfprof_logger

def compute_flops():
  """Compute flops for one iteration of training."""
  graph = tf.Graph()
  with graph.as_default():
    # Build model
    ...
| 1703.01041#71 | Large-Scale Evolution of Image Classifiers |
1703.01041 | 72 |
def compute_flops():
  """Compute flops for one iteration of training."""
  graph = tf.Graph()
  with graph.as_default():
    # Build model
    ...
    # Run one iteration of training and collect run metadata.
    # This metadata will be used to determine the nodes which were
    # actually executed as well as their argument shapes.
    run_meta = tf.RunMetadata()
    with tf.Session(graph=graph) as sess:
      feed_dict = {...}
      _ = sess.run(
| 1703.01041#72 | Large-Scale Evolution of Image Classifiers |
1703.01041 | 73 |
          [train_op], feed_dict=feed_dict, run_metadata=run_meta,
          options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE))
    # Compute analytical FLOPs for all nodes in the graph.
    logged_ops = tfprof_logger._get_logged_ops(graph, run_meta=run_meta)
    # Determine which nodes were executed during one training step
    # by looking at elapsed execution time of each node.
    elapsed_us_for_ops = {}
    for dev_stat in run_meta.step_stats.dev_stats:
      for node_stat in dev_stat.node_stats:
        name = node_stat.node_name
        elapsed_us = node_stat.op_end_rel_micros - node_stat.op_start_rel_micros
        elapsed_us_for_ops[name] = elapsed_us
    # Compute FLOPs of executed nodes.
    total_flops = 0
    for op in graph.get_operations():
      name = op.name
      if elapsed_us_for_ops.get(name, 0) > 0 and name in logged_ops:
        total_flops += logged_ops[name].float_ops
  return total_flops
| 1703.01041#73 | Large-Scale Evolution of Image Classifiers |
1703.01041 | 74 | return total_flops
Note that we also need to declare how to compute FLOPs for each operation type present (that is, for each node type in the TensorFlow graph). We did this for the following operation types (and their gradients, where applicable):
• unary math operations: square, square root, log, negation, element-wise inverse, softmax, L2 norm;

• binary element-wise operations: addition, subtraction, multiplication, division, minimum, maximum, power, squared difference, comparison operations;

• reduction operations: mean, sum, argmax, argmin;

• convolution, average pooling, max pooling;

• matrix multiplication.
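To make the idea of an analytic per-op FLOP declaration concrete, here is a sketch for a 2-D convolution, counting one multiply and one add per kernel tap per output element (a common convention; the exact accounting used by tfprof may differ):

```python
def conv2d_flops(out_h, out_w, out_depth, kernel_h, kernel_w, in_depth):
    """Forward-pass FLOPs of a conv layer on one image: each output
    element costs kernel_h * kernel_w * in_depth multiply-adds."""
    return out_h * out_w * out_depth * 2 * kernel_h * kernel_w * in_depth
```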
For example, for the element-wise addition operation type:

from tensorflow.python.framework import graph_util
from tensorflow.python.framework import ops

@ops.RegisterStatistics("Add", "flops")
def _add_flops(graph, node):
  """Compute flops for the Add operation."""
  out_shape = graph_util.tensor_shape_from_node_def_name(graph, node.name)
  out_shape.assert_is_fully_defined()
  return ops.OpStats("flops", out_shape.num_elements())
# S3. Escaping Local Optima Details
# S3.1. Local optima and mutation rate
| 1703.01041#74 | Large-Scale Evolution of Image Classifiers |
1703.01041 | 75 | # S3. Escaping Local Optima Details
# S3.1. Local optima and mutation rate
Entrapment at a local optimum may mean a general lack of exploration in our search algorithm. To encourage more exploration, we increased the mutation rate (Section 5). In more detail, we carried out experiments in which we first waited until the populations converged. Some reached higher fitnesses and others got trapped at poor local optima. At this point, we modified the algorithm slightly: instead of performing 1 mutation at each reproduction event, we performed 5 mutations. We evolved with this increased mutation rate for a while and finally we switched back to the original single-mutation version. During the 5-mutation stage, some populations escape the local optimum, as in Figure 4 (top), and none get worse. Across populations, however, the escape was not frequent enough (8 out of 10) and took too long for us to propose this as an efficient technique to escape optima. An interesting direction for future work would be to study more elegant methods to manage the exploration vs. exploitation trade-off in large-scale neuro-evolution.
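In toy form, the regime switch amounts to changing the number of mutations applied per reproduction event in a pairwise-tournament loop (the integer "DNA", fitness function, and mutation below are stand-ins for the real architectures):

```python
import random

def evolve(population, fitness, mutate, events, mutations_per_event, rng):
    """Pairwise-tournament evolution: at each event two random individuals
    compete; the loser is replaced by a copy of the winner that has been
    mutated mutations_per_event times in sequence."""
    for _ in range(events):
        i, j = rng.sample(range(len(population)), 2)
        if fitness(population[i]) < fitness(population[j]):
            i, j = j, i  # ensure i is the winner
        child = population[i]
        for _ in range(mutations_per_event):
            child = mutate(child, rng)
        population[j] = child
    return population
```

The escape experiments correspond to running this loop with mutations_per_event=1 until convergence, temporarily switching to 5, then switching back.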
# S3.2. Local optima and weight resetting
| 1703.01041#75 | Large-Scale Evolution of Image Classifiers |
1703.01041 | 76 | # S3.2. Local optima and weight resetting
The identity mutation offers a mechanism for populations to get trapped in local optima. Some individuals may get trained more than their peers just because they happen to have undergone more identity mutations. It may, therefore, occur that a poor architecture may become more accurate than potentially better architectures that still need more training. In the extreme case, the well-trained poor architecture may become a super-fit individual and take over the population. Suspecting this scenario, we performed experiments in which we simultaneously reset all the weights in a population that had plateaued (Section 5). The simultaneous reset should put all the individuals on the same footing, so individuals that had accidentally trained more no longer have the unfair advantage. Indeed, the results matched our expectation. The populations suffer a temporary degradation in fitness immediately after the reset, as the individuals need to retrain. Later, however, the populations end up reaching higher optima (for example, Figure 4, bottom). Across 10 experiments, we find that three successive resets tend to cause improvement (p < 0.001). We mention this effect merely as evidence of this particular drawback of weight inheritance. In our main results, we circumvented the problem by using longer training times and larger populations. Future work may explore more efficient solutions.
| 1703.01041#76 | Large-Scale Evolution of Image Classifiers |
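As a sketch, the simultaneous reset can be expressed as a plateau check on the best-fitness history followed by re-initializing every individual's weights while keeping all architectures (the dict layout and the reinit callback are illustrative, not from the paper's code):

```python
def maybe_reset_weights(population, best_fitness_history, window, tol, reinit):
    """If the best fitness improved by less than `tol` over the last
    `window` recorded generations, simultaneously re-initialize every
    individual's weights, keeping the architectures intact."""
    if len(best_fitness_history) < window:
        return False
    if best_fitness_history[-1] - best_fitness_history[-window] >= tol:
        return False  # still improving; no reset
    for individual in population:
        individual["weights"] = reinit(individual["arch"])
    return True
```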
1703.00441 | 0 | arXiv:1703.00441v2 [cs.LG] 30 Nov 2017
# Learning to Optimize Neural Nets
# Ke Li 1 Jitendra Malik 1
# Abstract
Learning to Optimize (Li & Malik, 2016) is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.
# 1. Introduction
optimization algorithm. Given this state of affairs, perhaps it is time for us to start practicing what we preach and learn how to learn. | 1703.00441#0 | Learning to Optimize Neural Nets | Learning to Optimize is a recently proposed framework for learning
optimization algorithms using reinforcement learning. In this paper, we explore
learning an optimization algorithm for training shallow neural nets. Such
high-dimensional stochastic optimization problems present interesting
challenges for existing reinforcement learning algorithms. We develop an
extension that is suited to learning optimization algorithms in this setting
and demonstrate that the learned optimization algorithm consistently
outperforms other known optimization algorithms even on unseen tasks and is
robust to changes in stochasticity of gradients and the neural net
architecture. More specifically, we show that an optimization algorithm trained
with the proposed method on the problem of training a neural net on MNIST
generalizes to the problems of training neural nets on the Toronto Faces
Dataset, CIFAR-10 and CIFAR-100. | http://arxiv.org/pdf/1703.00441 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 10 pages, 15 figures | null | cs.LG | 20170301 | 20171130 | [
{
"id": "1606.01467"
},
{
"id": "1606.04474"
},
{
"id": "1602.04062"
},
{
"id": "1502.03492"
},
{
"id": "1504.00702"
}
] |
1703.00441 | 1 | # 1. Introduction
optimization algorithm. Given this state of affairs, perhaps it is time for us to start practicing what we preach and learn how to learn.
Recently, Li & Malik (2016) and Andrychowicz et al. (2016) introduced two different frameworks for learning optimization algorithms. Whereas Andrychowicz et al. (2016) focuses on learning an optimization algorithm for training models on a particular task, Li & Malik (2016) sets a more ambitious objective of learning an optimization algorithm for training models that is task-independent. We study the latter paradigm in this paper and develop a method for learning an optimization algorithm for high-dimensional stochastic optimization problems, like the problem of training shallow neural nets.
Under the "Learning to Optimize" framework proposed by Li & Malik (2016), the problem of learning an optimization algorithm is formulated as a reinforcement learning problem. We consider the general structure of an unconstrained continuous optimization algorithm, as shown in Algorithm 1. In each iteration, the algorithm takes a step Δx and uses it to update the current iterate x(i). In hand-engineered optimization algorithms, Δx is computed using some fixed formula π that depends on the objective function, the current iterate and past iterates. Often, it is simply a function of the current and past gradients.
| 1703.00441#1 | Learning to Optimize Neural Nets |
1703.00441 | 2 | Machine learning is centred on the philosophy that learning patterns automatically from data is generally better than meticulously crafting rules by hand. This data-driven approach has delivered: today, machine learning techniques can be found in a wide range of application areas, both in AI and beyond. Yet, there is one domain that has conspicuously been left untouched by machine learning: the design of tools that power machine learning itself.
One of the most widely used tools in machine learning is optimization algorithms. We have grown accustomed to seeing an optimization algorithm as a black box that takes in a model that we design and the data that we collect and outputs the optimal model parameters. The optimization algorithm itself largely stays static: its design is reserved for human experts, who must toil through many rounds of theoretical analysis and empirical validation to devise a better
1 University of California, Berkeley, CA 94720, United States. Correspondence to: Ke Li <[email protected]>.
Algorithm 1 General structure of optimization algorithms
| 1703.00441#2 | Learning to Optimize Neural Nets |
1703.00441 | 3 | Algorithm 1 General structure of optimization algorithms
Require: Objective function f
x(0) ← random point in the domain of f
for i = 1, 2, . . . do
  Δx ← π(f, {x(0), . . . , x(i−1)})
  if stopping condition is met then
    return x(i−1)
  end if
  x(i) ← x(i−1) + Δx
end for
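As a concrete (non-paper) instantiation of this structure, here is Algorithm 1 in plain Python, with plain gradient descent as one hand-engineered choice of the update formula π; the step size and stopping rule are illustrative:

```python
def optimize(grad_f, x0, phi, max_iters=1000, tol=1e-8):
    """General structure of Algorithm 1: repeatedly compute a step from
    the iterate history via phi and apply it to the current iterate."""
    history = [x0]
    for _ in range(max_iters):
        delta_x = phi(grad_f, history)
        if abs(delta_x) < tol:  # stopping condition
            return history[-1]
        history.append(history[-1] + delta_x)
    return history[-1]

def gradient_descent_phi(grad_f, history, eta=0.1):
    """Gradient descent corresponds to phi = -eta * (current gradient)."""
    return -eta * grad_f(history[-1])
```

For f(x) = (x − 3)², whose gradient is 2(x − 3), this converges to the minimizer x = 3.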
Different choices of π yield different optimization algorithms and so each optimization algorithm is essentially characterized by its update formula π. Hence, by learning π, we can learn an optimization algorithm. Li & Malik (2016) observed that an optimization algorithm can be viewed as a Markov decision process (MDP), where the state includes the current iterate, the action is the step vector Δx and the policy is the update formula π. Hence, the problem of learning π simply reduces to a policy search problem.
| 1703.00441#3 | Learning to Optimize Neural Nets |
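To make the MDP view concrete, a hand-engineered update formula can itself be written as a stateful policy — momentum keeps an internal velocity, analogous to the memory state the learned policy maintains later in the paper (a sketch; the class and names are ours):

```python
import numpy as np

class MomentumPolicy:
    """A hand-engineered update formula viewed as a policy: the observation
    is the current gradient, the action is the step dx, and the internal
    state (the velocity) plays the role of the policy's memory."""
    def __init__(self, lr=0.1, beta=0.9):
        self.lr, self.beta, self.v = lr, beta, None

    def act(self, grad):
        g = np.asarray(grad, dtype=float)
        self.v = g if self.v is None else self.beta * self.v + g
        return -self.lr * self.v  # action a_t = dx

# Rolling the policy out on f(x) = ||x||^2 produces one MDP trajectory.
x = np.array([3.0, -2.0])
pi = MomentumPolicy()
for _ in range(200):
    x = x + pi.act(2 * x)  # gradient of ||x||^2 is 2x
```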
1703.00441 | 4 |
In this paper, we build on the method proposed in (Li & Malik, 2016) and develop an extension that is suited to learning optimization algorithms for high-dimensional stochastic problems. We use it to learn an optimization algorithm for training shallow neural nets and show that it outperforms popular hand-engineered optimization algorithms like ADAM (Kingma & Ba, 2014), AdaGrad (Duchi et al., 2011) and RMSprop (Tieleman & Hinton, 2012) and an optimization algorithm learned using the supervised learning method proposed in (Andrychowicz et al., 2016). Furthermore, we demonstrate that our optimization algorithm learned from the experience of training on MNIST generalizes to training on other datasets that have very dissimilar statistics, like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.
# 2. Related Work
# 2.2. Learning Which Model to Learn
1703.00441 | 5 |
Methods in this category (Brazdil et al., 2008) aim to learn which base-level learner achieves the best performance on a task. The meta-knowledge captures correlations between different tasks and the performance of different base-level learners on those tasks. One challenge under this setting is to decide on a parameterization of the space of base-level learners that is both rich enough to be capable of representing disparate base-level learners and compact enough to permit tractable search over this space. Brazdil et al. (2003) proposes a nonparametric representation and stores examples of different base-level learners in a database, whereas Schmidhuber (2004) proposes representing base-level learners as general-purpose programs. The former has limited representation power, while the latter makes search and learning in the space of base-level learners intractable. Hochreiter et al. (2001) views the (online) training procedure of any base-learner as a black box function that maps a sequence of training examples to a sequence of predictions and models it as a recurrent neural net. Under this formulation, meta-training reduces to training the recurrent net, and the base-level learner is encoded in the memory state of the recurrent net.
1703.00441 | 6 | The line of work on learning optimization algorithms is fairly recent. Li & Malik (2016) and Andrychowicz et al. (2016) were the first to propose learning general optimization algorithms. Li & Malik (2016) explored learning task-independent optimization algorithms and used reinforcement learning to learn the optimization algorithm, while Andrychowicz et al. (2016) investigated learning task-dependent optimization algorithms and used supervised learning.
In the special case where objective functions that the optimization algorithm is trained on are loss functions for training other models, these methods can be used for "learning to learn" or "meta-learning". While these terms have appeared from time to time in the literature (Baxter et al., 1995; Vilalta & Drissi, 2002; Brazdil et al., 2008; Thrun & Pratt, 2012), they have been used by different authors to refer to disparate methods with different purposes. These methods all share the objective of learning some form of meta-knowledge about learning, but differ in the type of meta-knowledge they aim to learn. We can divide the various methods into the following three categories.
1703.00441 | 7 | Hyperparameter optimization can be seen as another example of methods in this category. The space of base-level learners to search over is parameterized by a predefined set of hyperparameters. Unlike the methods above, multiple trials with different hyperparameter settings on the same task are permitted, and so generalization across tasks is not required. The discovered hyperparameters are generally specific to the task at hand and hyperparameter optimization must be rerun for new tasks. Various kinds of methods have been proposed, such as those based on Bayesian optimization (Hutter et al., 2011; Bergstra et al., 2011; Snoek et al., 2012; Swersky et al., 2013; Feurer et al., 2015), random search (Bergstra & Bengio, 2012) and gradient-based optimization (Bengio, 2000; Domke, 2012; Maclaurin et al., 2015).
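As a concrete instance of the simplest of these methods, here is a random-search sketch over a two-hyperparameter space (the scoring function stands in for training and validating a base-level learner; everything below is illustrative):

```python
import random

def random_search(train_and_score, space, n_trials=50, seed=0):
    """Hyperparameter optimization by random search (Bergstra & Bengio, 2012):
    sample settings independently and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: sample(rng) for name, sample in space.items()}
        score = train_and_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in for "train a model and return validation accuracy":
# peaked at lr = 0.01 and layers = 2.
score = lambda cfg: -(cfg["lr"] - 0.01) ** 2 - 0.1 * (cfg["layers"] - 2) ** 2
space = {
    "lr": lambda r: 10 ** r.uniform(-4, 0),  # log-uniform step size
    "layers": lambda r: r.randint(1, 5),
}
best_cfg, best_score = random_search(score, space, n_trials=200)
```

As the text notes, the result is specific to this one task; the whole search must be rerun for a new one.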
# 2.3. Learning How to Learn
# 2.1. Learning What to Learn
1703.00441 | 8 |
Methods in this category (Thrun & Pratt, 2012) aim to learn what parameter values of the base-level learner are useful across a family of related tasks. The meta-knowledge captures commonalities shared by tasks in the family, which enables learning on a new task from the family to be done more quickly. Most early methods fall into this category; this line of work has blossomed into an area that has later become known as transfer learning and multi-task learning.
Methods in this category aim to learn a good algorithm for training a base-level learner. Unlike methods in the previous categories, the goal is not to learn about the outcome of learning, but rather the process of learning. The meta-knowledge captures commonalities in the behaviours of learning algorithms that achieve good performance. The base-level learner and the task are given by the user, so the learned algorithm must generalize across base-level learners and tasks. Since learning in most cases is equivalent to optimizing some objective function, learning a learning algorithm often reduces to learning an optimization algorithm. This problem was explored in (Li & Malik, 2016)
1703.00441 | 9 |
and (Andrychowicz et al., 2016). Closely related is (Bengio et al., 1991), which learns a Hebb-like synaptic learning rule; the rule does not depend on the objective function, which precludes generalization to different objective functions.
Various work has explored learning how to adjust the hyperparameters of hand-engineered optimization algorithms, like the step size (Hansen, 2016; Daniel et al., 2016; Fu et al., 2016) or the damping factor in the Levenberg-Marquardt algorithm (Ruvolo et al., 2009). Related to this line of work is stochastic meta-descent (Bray et al., 2004), which derives a rule for adjusting the step size analytically. A different line of work (Gregor & LeCun, 2010; Sprechmann et al., 2013) parameterizes intermediate operands of special-purpose solvers for a class of optimization problems that arise in sparse coding and learns them using supervised learning.
1703.00441 | 10 | may be completely unrelated to tasks used for training the optimization algorithm. Therefore, the learned optimization algorithm must not learn anything about the tasks used for training. Instead, the goal is to learn an optimization algorithm that can exploit the geometric structure of the error surface induced by the base-learners. For example, if the base-level model is a neural net with ReLU activation units, the optimization algorithm should hopefully learn to leverage the piecewise linearity of the model. Hence, there is a clear division of responsibilities between the meta-learner and base-learners. The knowledge learned at the meta-level should be pertinent for all tasks, whereas the knowledge learned at the base-level should be task-specific. The meta-learner should therefore generalize across tasks, whereas the base-learner should generalize across instances.
# 3.2. RL Preliminaries
# 3. Learning to Optimize
# 3.1. Setting
1703.00441 | 11 |
In the "Learning to Optimize" framework, we are given a set of training objective functions f_1, . . . , f_n drawn from some distribution F. An optimization algorithm A takes an objective function f and an initial iterate x^(0) as input and produces a sequence of iterates x^(1), . . . , x^(T), where x^(T) is the solution found by the optimizer. We are also given a distribution D that generates the initial iterate x^(0) and a meta-loss L, which takes an objective function f and a sequence of iterates x^(1), . . . , x^(T) produced by an optimization algorithm as input and outputs a scalar that measures the quality of the iterates. The goal is to learn an optimization algorithm A* such that E_{f∼F, x^(0)∼D}[L(f, A*(f, x^(0)))] is minimized. The meta-loss is chosen to penalize optimization algorithms that exhibit behaviours we find undesirable, like slow convergence or excessive oscillations. Assuming we would like to learn an algorithm that minimizes the objective function it is given, a good choice of meta-loss would then simply be Σ_{i=1}^{T} f(x^(i)), which can be interpreted as the area under the curve of objective values over time.
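A sketch of this area-under-the-curve meta-loss, comparing two fixed-step gradient-descent "optimization algorithms" on one sampled objective (helper names are ours):

```python
import numpy as np

def meta_loss(f, trajectory):
    """L(f, x^(1..T)) = sum_i f(x^(i)): every iterate's objective value
    contributes, so slow convergence is penalized."""
    return sum(f(x) for x in trajectory)

def run(step_size, f, grad_f, x0, T=50):
    """Produce the iterates x^(1), ..., x^(T) of fixed-step gradient descent."""
    xs, x = [], np.asarray(x0, dtype=float)
    for _ in range(T):
        x = x - step_size * grad_f(x)
        xs.append(x)
    return xs

f = lambda x: float((x ** 2).sum())
g = lambda x: 2 * x
slow = meta_loss(f, run(0.01, f, g, [3.0]))  # tiny steps: large area under the curve
fast = meta_loss(f, run(0.30, f, g, [3.0]))  # well-tuned steps: small area
```

The algorithm with the smaller meta-loss converged faster, even though both end near the minimum.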
1703.00441 | 12 | The goal of reinforcement learning is to learn to interact with an environment in a way that minimizes cumulative costs that are expected to be incurred over time. The environment is formalized as a partially observable Markov decision process (POMDP)¹, which is defined by the tuple (S, O, A, p_i, p, p_o, c, T), where S ⊆ R^D is the set of states, O ⊆ R^D′ is the set of observations, A ⊆ R^d is the set of actions, p_i(s_0) is the probability density over initial states s_0, p(s_{t+1} | s_t, a_t) is the probability density over the subsequent state s_{t+1} given the current state s_t and action a_t, p_o(o_t | s_t) is the probability density over the current observation o_t given the current state s_t, c : S → R is a function that assigns a cost to each state and T is the time horizon. Often, the probability densities p and p_o are unknown and not given to the learning algorithm.
1703.00441 | 13 | A policy π(a_t | o_t, t) is a conditional probability density over actions a_t given the current observation o_t and time step t. When a policy is independent of t, it is known as a stationary policy. The goal of the reinforcement learning algorithm is to learn a policy π* that minimizes the total expected cost over time. More precisely,
π* = argmin_π E_{s_0, a_0, s_1, . . . , s_T} [ Σ_{t=0}^{T} c(s_t) ],
The objective functions f_1, . . . , f_n may correspond to loss functions for training base-level learners, in which case the algorithm that learns the optimization algorithm can be viewed as a meta-learner. In this setting, each objective function is the loss function for training a particular base-learner on a particular task, and so the set of training objective functions can be loss functions for training a base-learner or a family of base-learners on different tasks. At test time, the learned optimization algorithm is evaluated on unseen objective functions, which correspond to loss functions for training base-learners on new tasks, which
where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density
1703.00441 | 14 |
p_i(s_0) p_o(o_0 | s_0) Π_{t=0}^{T−1} [ π(a_t | o_t, t) p(s_{t+1} | s_t, a_t) p_o(o_{t+1} | s_{t+1}) ].
¹ What is described is an undiscounted finite-horizon POMDP with continuous state, observation and action spaces.
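The expected total cost can be estimated by Monte Carlo: sample trajectories from this density and average Σ_t c(s_t). A self-contained sketch in a toy linear-Gaussian POMDP (all dynamics and constants below are invented for illustration):

```python
import numpy as np

def expected_total_cost(K, n_traj=2000, T=20, seed=0):
    """Monte Carlo estimate of E[sum_t c(s_t)] under pi(a|o) = N(K*o, 0.01 I)
    in a toy POMDP: s_{t+1} = s_t + a_t + noise, o_t = s_t + noise,
    c(s) = ||s||^2, and s_0 ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_traj):
        s = rng.normal(size=2)                     # s_0 ~ p_i
        for _ in range(T):
            total += float(s @ s)                  # accumulate c(s_t)
            o = s + 0.1 * rng.normal(size=2)       # o_t ~ p_o(. | s_t)
            a = K * o + 0.1 * rng.normal(size=2)   # a_t ~ pi(. | o_t)
            s = s + a + 0.05 * rng.normal(size=2)  # s_{t+1} ~ p(. | s_t, a_t)
    return total / n_traj

# A contractive policy (K = -0.5) incurs far less cost than doing nothing.
cost_act = expected_total_cost(-0.5)
cost_idle = expected_total_cost(0.0)
```

Policy search is then the problem of adjusting the policy (here, the gain K) to drive this estimate down.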
To make learning tractable, π is often constrained to lie in a parameterized family. A common assumption is that π(a_t | o_t, t) = N(μ^π(o_t), Σ^π(o_t)), where N(μ, Σ) denotes the density of a Gaussian with mean μ and covariance Σ. The functions μ^π(·) and possibly Σ^π(·) are modelled using function approximators, whose parameters are learned.
optimization is challenging. In each iteration, it performs policy optimization on ψ, and uses the resulting policy as supervision to train π.
More precisely, GPS solves the following constrained optimization problem:
1703.00441 | 15 |
min_{θ,η} E_ψ [ Σ_{t=0}^{T} c(s_t) ]  s.t.  ψ(a_t | s_t, t; η) = π(a_t | s_t; θ) ∀ a_t, s_t, t,
# 3.3. Formulation
In our setting, the state s_t consists of the current iterate x^(t) and features Φ(·) that depend on the history of iterates x^(1), . . . , x^(t), (noisy) gradients ∇f̂(x^(1)), . . . , ∇f̂(x^(t)) and (noisy) objective values f̂(x^(1)), . . . , f̂(x^(t)). The action a_t is the step Δx that will be used to update the iterate. The observation o_t excludes x^(t) and consists of features Ψ(·) that depend on the iterates, gradients and objective values from recent iterations, and the previous memory state of the learned optimization algorithm, which takes the form of a recurrent neural net. This memory state can be viewed as a statistic of the previous observations that is learned jointly with the policy.
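A sketch of one such recurrent step: observation features Ψ(·) and the previous memory state go in, a step Δx and an updated memory come out. The cell below is a generic vanilla-RNN fragment, not the paper's exact architecture:

```python
import numpy as np

class RecurrentPolicyStep:
    """One time step of the policy: (features psi_t, memory h_{t-1}) -> (dx, h_t)."""
    def __init__(self, feat_dim, hidden_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.W_x = rng.normal(scale=0.1, size=(hidden_dim, feat_dim))
        self.W_o = rng.normal(scale=0.1, size=(out_dim, hidden_dim))

    def __call__(self, psi, h):
        h_new = np.tanh(self.W_h @ h + self.W_x @ psi)  # updated memory state
        dx = self.W_o @ h_new                           # mean action (the step)
        return dx, h_new

cell = RecurrentPolicyStep(feat_dim=4, hidden_dim=8, out_dim=2)
h = np.zeros(8)
for _ in range(3):          # unrolled over a few optimization iterations
    psi = np.ones(4)        # stand-in for the observation features Psi
    dx, h = cell(psi, h)
```

Because the same cell is reused at every time step, the policy is stationary while its memory carries information across iterations.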
1703.00441 | 16 | where η and θ denote the parameters of ψ and π respectively, E_ρ[·] denotes the expectation taken with respect to the trajectory induced by a policy ρ, and π(a_t | s_t; θ) := ∫_{o_t} π(a_t | o_t; θ) p_o(o_t | s_t) do_t.
Since there are an infinite number of equality constraints, the problem is relaxed by enforcing equality on the mean actions taken by ψ and π at every time step³. So, the problem becomes:
min_{θ,η} E_ψ [ Σ_{t=0}^{T} c(s_t) ]  s.t.  E_ψ[a_t] = E_ψ[E_π[a_t | s_t]] ∀ t.
Under this formulation, the initial probability density p_i captures how the initial iterate, gradient and objective value tend to be distributed. The transition probability density p captures how the gradient and objective value are likely to change given the step that is taken currently; in other words, it encodes the local geometry of the training objective functions. Assuming the goal is to learn an optimization algorithm that minimizes the objective function, the cost c of a state s_t = (x^(t), Φ(·))^T is simply the true objective value f(x^(t)).
This problem is solved using Bregman ADMM (Wang & Banerjee, 2014), which performs the following updates in each iteration:
1703.00441 | 17 |
η ← argmin_η Σ_{t=0}^{T} E_ψ[c(s_t) − λ_tᵀ a_t] + ν_t D_t(η, θ)
θ ← argmin_θ Σ_{t=0}^{T} λ_tᵀ E_ψ[E_π[a_t | s_t]] + ν_t D_t(θ, η)
λ_t ← λ_t + α ν_t (E_ψ[E_π[a_t | s_t]] − E_ψ[a_t]) ∀ t,

where D_t(θ, η) = E_ψ[D_KL(π(a_t | s_t; θ) ‖ ψ(a_t | s_t, t; η))] and D_t(η, θ) = E_ψ[D_KL(ψ(a_t | s_t, t; η) ‖ π(a_t | s_t; θ))].
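A sketch of just the third update — the dual-variable (λ_t) step — estimated from sampled trajectories; the two inner minimizations over η and θ are omitted, and all dimensions, policies and constants below are invented for illustration:

```python
import numpy as np

# Dual update lambda_t <- lambda_t + alpha * nu_t *
# (E_psi[E_pi[a_t | s_t]] - E_psi[a_t]), with both expectations estimated
# from samples drawn under psi.
rng = np.random.default_rng(0)
T, n, d = 5, 1000, 2
alpha, nu = 0.5, np.ones(T)
lam = np.zeros((T, d))

s = rng.normal(size=(n, T, d))                         # states visited under psi
a_psi = s + 0.1 * rng.normal(size=(n, T, d))           # actions psi actually took
mean_pi = 0.8 * s                                      # pi's mean action at those states

gap = mean_pi.mean(axis=0) - a_psi.mean(axis=0)        # per-step constraint violation
lam = lam + alpha * nu[:, None] * gap
```

When ψ and π agree in mean at every time step, the gap vanishes and the dual variables stop moving.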
Any particular policy π(a_t | o_t, t), which generates a_t = Δx at every time step, corresponds to a particular (noisy) update formula π, and therefore a particular (noisy) optimization algorithm. Therefore, learning an optimization algorithm simply reduces to searching for the optimal policy.
The former is modelled as ψ(a_t | s_t, t; η) := N(K_t s_t + k_t, G_t), where η := (K_t, k_t, G_t)_{t=1}^T, and the latter as π(a_t | o_t; θ) := N(μ_π^ω(o_t), Σ_π), where θ := (ω, Σ_π) and μ_π^ω(·) can be an arbitrary function that is typically modelled using a nonlinear function approximator like a neural net.
The mean of the policy is modelled as a recurrent neural net fragment that corresponds to a single time step, which takes the observation features Ψ(·) and the previous memory state as input and outputs the step to take.
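A minimal pure-Python sketch of such a single-time-step recurrent fragment (the weights, sizes, and scalar features here are made up for illustration): it consumes the current observation feature and the previous memory state, and emits a step plus the updated memory.

```python
import math

def rnn_step(feature, h_prev, w_in=0.5, w_rec=0.8, w_out=1.5):
    # One time step of a toy recurrent cell: update the memory state from
    # the observation feature and the previous state, then read out a step.
    h_new = math.tanh(w_in * feature + w_rec * h_prev)
    step = w_out * h_new
    return step, h_new

h = 0.0
steps = []
for grad_feature in [1.0, 0.5, -0.2]:
    s, h = rnn_step(grad_feature, h)
    steps.append(s)
```

The paper uses LSTM cells rather than this toy tanh cell, but the interface is the same: (features, memory) in, (step, memory) out.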
# 3.4. Guided Policy Search
The reinforcement learning method we use is guided policy search (GPS) (Levine et al., 2015), which is a policy search method designed for searching over large classes of expressive non-linear policies in continuous state and action spaces. It maintains two policies, ψ and π, where the former lies in a time-varying linear policy class in which the optimal policy can be found in closed form, and the latter lies in a stationary non-linear policy class in which policy search can be performed effectively.
At each iteration, the algorithm constructs a model of the transition probability density p̂(s_{t+1} | s_t, a_t, t; ζ) = N(A_t s_t + B_t a_t + c_t, F_t), where ζ := (A_t, B_t, c_t, F_t)_{t=1}^T is fitted to samples of s_t drawn from the trajectory induced by ψ, which essentially amounts to a local linearization of the true transition probability p(s_{t+1} | s_t, a_t, t). We will use E_ψ[·] to denote expectation taken with respect to the trajectory induced by ψ under
² In practice, the explicit form of the observation probability p_o is usually not known or the integral may be intractable to compute. So, a linear Gaussian model is fitted to samples of s_t and a_t and used in place of the true π(a_t | s_t; θ) where necessary.
³ Though the Bregman divergence penalty is applied to the original probability distributions over a_t.
the modelled transition probability p̂. Additionally, the algorithm fits local quadratic approximations to c(s_t) around samples of s_t drawn from the trajectory induced by ψ, so that c(s_t) ≈ ĉ(s_t) := (1/2) s_t^⊤ C_t s_t + d_t^⊤ s_t + h_t for s_t's that are near the samples.
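The local linearization above can be sketched as an ordinary least-squares fit. This is a simplified, hypothetical version with scalar state and action, no offset term, and no noise:

```python
import random

random.seed(0)
A_true, B_true = 0.9, 0.5

# Generate transition samples (s_t, a_t, s_{t+1}) from a known linear system.
samples = []
for _ in range(200):
    s, a = random.uniform(-1, 1), random.uniform(-1, 1)
    samples.append((s, a, A_true * s + B_true * a))

# Solve the 2x2 normal equations for s' ~ A*s + B*a in closed form.
Sss = sum(s * s for s, a, sp in samples)
Saa = sum(a * a for s, a, sp in samples)
Ssa = sum(s * a for s, a, sp in samples)
Ssy = sum(s * sp for s, a, sp in samples)
Say = sum(a * sp for s, a, sp in samples)
det = Sss * Saa - Ssa * Ssa
A_hat = (Saa * Ssy - Ssa * Say) / det
B_hat = (Sss * Say - Ssa * Ssy) / det
```

With noiseless data the fit recovers the true coefficients; in the algorithm the analogous fit is done per time step, on noisy rollout samples, and with full matrices.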
spaces. For example, in the case of GPS, because the running time of LQG is cubic in the dimensionality of the state space, performing policy search even in the simple class of linear-Gaussian policies would be prohibitively expensive when the dimensionality of the optimization problem is high.
With these assumptions, the subproblem that needs to be solved to update η = (K_t, k_t, G_t)_{t=1}^T is:

$$\min_{\eta} \sum_{t=0}^{T} \mathbb{E}_{\psi}\!\left[\hat{c}(s_t) - \lambda_t^\top a_t\right] + \nu_t D_t(\eta, \theta)$$
$$\text{s.t.} \quad \sum_{t=0}^{T} \mathbb{E}_{\psi}\!\left[D_{\mathrm{KL}}\!\left(\psi(a_t | s_t, t; \eta) \,\|\, \psi(a_t | s_t, t; \eta')\right)\right] \leq \epsilon,$$
where η′ denotes the old η from the previous iteration. Because p̂ and ĉ are only valid locally around the trajectory induced by ψ, the constraint is added to limit the amount by which η is updated. It turns out that the unconstrained problem can be solved in closed form using a dynamic programming algorithm known as the linear-quadratic-Gaussian (LQG) regulator in time linear in the time horizon T and cubic in the dimensionality of the state space D. The constrained problem is solved using dual gradient descent, which uses LQG as a subroutine to solve for the primal variables in each iteration and increments the dual variable on the constraint until it is satisfied.
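For intuition, here is the backward pass that LQG builds on, in the simplest scalar case. This is a hypothetical sketch of plain finite-horizon LQR, not the constrained solver used inside GPS:

```python
# Finite-horizon scalar LQR: dynamics s' = a_dyn * s + b_dyn * u,
# cost sum_t q * s_t^2 + r * u_t^2. The backward Riccati recursion gives
# time-varying gains K_t; the resulting controller is u_t = K_t * s_t.
def lqr_gains(a_dyn, b_dyn, q, r, T):
    P = q                  # terminal value function coefficient
    gains = []
    for _ in range(T):
        K = -(b_dyn * P * a_dyn) / (r + b_dyn * P * b_dyn)
        P = q + a_dyn * P * (a_dyn + b_dyn * K)
        gains.append(K)
    gains.reverse()        # gains[t] is the gain for time step t
    return gains

def rollout_cost(gains, a_dyn, b_dyn, q, r, s0):
    s, cost = s0, 0.0
    for K in gains:
        u = K * s
        cost += q * s * s + r * u * u
        s = a_dyn * s + b_dyn * u
    return cost + q * s * s

gains = lqr_gains(a_dyn=1.2, b_dyn=1.0, q=1.0, r=0.1, T=20)
cost_lqr = rollout_cost(gains, 1.2, 1.0, 1.0, 0.1, s0=5.0)
cost_zero = rollout_cost([0.0] * 20, 1.2, 1.0, 1.0, 0.1, s0=5.0)
```

The recursion costs constant time per step here; with a D-dimensional state each step involves matrix inverses, which is where the cubic dependence on D comes from.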
Updating θ is straightforward, since expectations taken with respect to the trajectory induced by ψ are always conditioned on s_t and all outer expectations over s_t are taken with respect to the trajectory induced by ψ. Therefore, π is essentially decoupled from the transition probability p(s_{t+1} | s_t, a_t, t) and so its parameters can be updated without affecting the distribution of s_t's. The subproblem that needs to be solved to update θ therefore amounts to a standard supervised learning problem.
Since ψ(a_t | s_t, t; η) and π(a_t | s_t; θ) are Gaussian, D_t(θ, η) can be computed analytically. More concretely, if we assume Σ_π to be fixed for simplicity, the subproblem that is solved for updating θ = (ω, Σ_π) is:
$$\min_{\theta} \; \mathbb{E}_{\psi} \sum_{t=0}^{T} \left[ \lambda_t^\top \mu_\pi^\omega(o_t) + \frac{\nu_t}{2}\left(\mathrm{tr}\!\left(G_t^{-1} \Sigma_\pi\right) - \log\left|\Sigma_\pi\right|\right) + \frac{\nu_t}{2}\left(\mu_\pi^\omega(o_t) - \mathbb{E}_{\psi}\!\left[a_t | s_t\right]\right)^\top G_t^{-1} \left(\mu_\pi^\omega(o_t) - \mathbb{E}_{\psi}\!\left[a_t | s_t\right]\right) \right]$$
Note that the last term is the squared Mahalanobis distance between the mean actions of ψ and π at time step t, which is intuitive as we would like to encourage π to match ψ.
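This supervised step can be sketched as precision-weighted least squares. In this hypothetical one-dimensional simplification, each target action from ψ is weighted by a scalar precision `g` playing the role of G_t^{-1}:

```python
# Fit the policy mean mu(o) = w * o to target actions, weighting each
# squared error by a per-sample precision g (the role of G_t^{-1}).
def weighted_fit(obs, targets, precisions):
    num = sum(g * o * t for o, t, g in zip(obs, targets, precisions))
    den = sum(g * o * o for o, g in zip(obs, precisions))
    return num / den

obs = [1.0, 2.0, 3.0, 4.0]
targets = [2.0 * o for o in obs]          # targets generated with w = 2
precisions = [1.0, 0.5, 2.0, 1.0]
w_hat = weighted_fit(obs, targets, precisions)
```

In the paper the mean is a recurrent neural net rather than a linear map, so this step is solved by gradient-based training instead of a closed-form fit.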
Fortunately, many high-dimensional optimization problems have underlying structure that can be exploited. For example, the parameters of neural nets are equivalent up to permutation among certain coordinates. More concretely, for fully connected neural nets, the dimensions of a hidden layer and the corresponding weights can be permuted arbitrarily without changing the function they compute. Because permuting the dimensions of two adjacent layers can permute the weight matrix arbitrarily, an optimization algorithm should be invariant to permutations of the rows and columns of a weight matrix. A reasonable prior to impose is that the algorithm should behave in the same manner on all coordinates that correspond to entries in the same matrix. That is, if the values of two coordinates in all current and past gradients and iterates are identical, then the step vector produced by the algorithm should have identical values in these two coordinates. We will refer to the set of coordinates on which permutation invariance is enforced as a coordinate group. For the purposes of learning an optimization algorithm for neural nets, a natural choice would be to make each coordinate group correspond to a weight matrix or a bias vector. Hence, the total number of coordinate groups is twice the number of layers, which is usually fairly small.
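This permutation symmetry is easy to verify directly: permuting the hidden units of a two-layer net, i.e. the rows of the first weight matrix together with the matching columns of the second, leaves the computed function unchanged. A small pure-Python check (toy sizes and weights):

```python
import math

def forward(W1, W2, x):
    # Two-layer net: h = tanh(W1 x), y = W2 h  (no biases, for brevity).
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

W1 = [[0.2, -0.5], [1.0, 0.3], [-0.7, 0.8]]    # 3 hidden units, 2 inputs
W2 = [[0.4, -1.2, 0.9]]                        # 1 output unit
x = [0.6, -0.4]

perm = [2, 0, 1]                               # reorder the hidden units
W1_p = [W1[p] for p in perm]                   # permute rows of W1
W2_p = [[row[p] for p in perm] for row in W2]  # permute columns of W2

y = forward(W1, W2, x)
y_p = forward(W1_p, W2_p, x)
```

Both forward passes produce the same output, which is exactly why a learned optimizer should treat all coordinates of one weight matrix symmetrically.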
In the case of GPS, we impose this prior on both ψ and π. For the purposes of updating η, we first impose a block-diagonal structure on the parameters A_t, B_t and F_t of the fitted transition probability density p̂(s_{t+1} | s_t, a_t, t; ζ) = N(A_t s_t + B_t a_t + c_t, F_t), so that for each coordinate in the optimization problem, the dimensions of s_{t+1} that correspond to the coordinate only depend on the dimensions of s_t and a_t that correspond to the same coordinate. As a result, p̂(s_{t+1} | s_t, a_t, t; ζ) decomposes into multiple independent probability densities p̂^j(s^j_{t+1} | s^j_t, a^j_t, t; ζ^j), one for each coordinate j. Similarly, we also impose a block-diagonal structure on C_t for fitting ĉ(s_t) and on the parameter matrix of the fitted model for π(a_t | s_t; θ). Under these assumptions, K_t and G_t are guaranteed to be block-diagonal as well. Hence, the Bregman divergence penalty term D_t(η, θ) decomposes into a sum of Bregman divergence terms, one for each coordinate.
# 3.5. Convolutional GPS
The problem of learning high-dimensional optimization algorithms presents challenges for reinforcement learning algorithms due to the high dimensionality of the state and action
We then further constrain dual variables λ_t, sub-vectors of parameter vectors and sub-matrices of parameter matrices corresponding to each coordinate group to be identical across the group. Additionally, we replace the weight ν_t on D_t(η, θ) with an individual weight on each Bregman
Figure 1. Comparison of the various hand-engineered and learned algorithms on training neural nets with 48 input and hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
divergence term for each coordinate group. The problem then decomposes into multiple independent subproblems, one for each coordinate group. Because the dimensionality of the state subspace corresponding to each coordinate is constant, LQG can be executed on each subproblem much more efficiently.
$$\left\{ \nabla \hat{f}\!\left(x^{(t-5i)}\right) \Big/ \left( \left| \nabla \hat{f}\!\left(x^{(\max(t-5(i+1),\; t \bmod 5))}\right) \right| + 1 \right) \right\}_{i=0}^{24}$$
$$\left\{ \left[ \bar{x}^{(\max(t-5(i+1),\; t \bmod 5 + 5))} - \bar{x}^{(\max(t-5(i+2),\; t \bmod 5))} \right] \Big/ \left[ \bar{x}^{(t-5i)} - \bar{x}^{(t-5(i+1))} + 0.1 \right] \right\}_{i=0}^{24}$$
Similarly, for π, we choose a μ_π^ω(·) that shares parameters across different coordinates in the same group. We also impose a block-diagonal structure on Σ_π and constrain the appropriate sub-matrices to share their entries.
Note that all operations are applied element-wise. Also, whenever a feature becomes undefined (i.e. when the time step index becomes negative), it is replaced with the all-zeros vector.
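The zero-padding convention for negative time indices can be written as a small helper (hypothetical names; scalar history for brevity):

```python
def lookback(history, t, i):
    # Return the value i steps before time t, or 0.0 when the index
    # would be negative (the feature is "undefined" early in the run).
    idx = t - i
    return history[idx] if idx >= 0 else 0.0

grads = [0.9, 0.4, 0.1]   # gradient history up to t = 2
vals = [lookback(grads, 2, i) for i in range(5)]
# vals -> [0.1, 0.4, 0.9, 0.0, 0.0]
```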
# 3.6. Features
We describe the features Φ(·) and Ψ(·) at time step t, which define the state s_t and observation o_t respectively.
Unlike state features, which are only used when training the optimization algorithm, observation features Ψ(·) are used both during training and at test time. Consequently, we use noisier observation features that can be computed more efficiently and require less memory overhead. The observation features consist of the following:
Because of the stochasticity of gradients and objective values, the state features Φ(·) are defined in terms of summary statistics of the history of iterates {x^{(i)}}_{i=0}^t, gradients {∇f̂(x^{(i)})}_{i=0}^t and objective values {f̂(x^{(i)})}_{i=0}^t. We define the following statistics, which we will refer to as the average recent iterate, gradient and objective value respectively:
• $\left(\bar{f}^{(t)} - \bar{f}^{(t-1)}\right) \big/ \bar{f}^{(t-1)}$
• $\overline{\nabla \hat{f}}^{(t)} \big/ \left( \left| \overline{\nabla \hat{f}}^{(\max(t-2,0))} \right| + 1 \right)$
• $\left[ \bar{x}^{(\max(t-2,1))} - \bar{x}^{(\max(t-2,0))} \right] \big/ \left[ \bar{x}^{(t)} - \bar{x}^{(t-1)} + 0.1 \right]$
$$\bar{x}^{(i)} := \frac{1}{\min(i+1,3)} \sum_{j=\max(i-2,0)}^{i} x^{(j)}, \qquad \overline{\nabla \hat{f}}^{(i)} := \frac{1}{\min(i+1,3)} \sum_{j=\max(i-2,0)}^{i} \nabla \hat{f}\!\left(x^{(j)}\right), \qquad \bar{f}^{(i)} := \frac{1}{\min(i+1,3)} \sum_{j=\max(i-2,0)}^{i} \hat{f}\!\left(x^{(j)}\right)$$
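The averaging used in these statistics, a trailing window of at most three steps, can be sketched as:

```python
def avg_recent(series, i):
    # Mean of series over the window [max(i-2, 0), i], i.e. the last
    # min(i+1, 3) entries up to index i.
    lo = max(i - 2, 0)
    window = series[lo:i + 1]
    return sum(window) / len(window)

xs = [10.0, 6.0, 5.0, 3.0]
a0 = avg_recent(xs, 0)   # just xs[0]
a3 = avg_recent(xs, 3)   # mean of xs[1], xs[2], xs[3]
```

The same helper applies verbatim to the gradient and objective-value histories.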
# 4. Experiments
For clarity, we will refer to training of the optimization algorithm as "meta-training" to differentiate it from base-level training, which will simply be referred to as "training".
The state features Φ(·) consist of the relative change in the average recent objective value, the average recent gradient normalized by the magnitude of a previous average recent gradient, and a previous change in average recent iterate relative to the current change in average recent iterate:
$$\left\{ \left(\bar{f}^{(t-i)} - \bar{f}^{(t-i-1)}\right) \big/ \bar{f}^{(t-i-1)} \right\}_{i=0}^{24}$$
We meta-trained an optimization algorithm on a single objective function, which corresponds to the problem of training a two-layer neural net with 48 input units, 48 hidden units and 10 output units on a randomly projected and normalized version of the MNIST training set with dimensionality 48 and unit variance in each dimension. We modelled the optimization algorithm using a recurrent neural net
Figure 2. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
Figure 3. Comparison of the various hand-engineered and learned algorithms on training neural nets with 48 input and hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 10. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
with a single layer of 128 LSTM (Hochreiter & Schmidhuber, 1997) cells. We used a time horizon of 400 iterations and a mini-batch size of 64 for computing stochastic gradients and objective values. We evaluate the optimization algorithm on its ability to generalize to unseen objective functions, which correspond to the problems of training neural nets on different tasks/datasets. We evaluate the learned optimization algorithm on three datasets, the Toronto Faces Dataset (TFD), CIFAR-10 and CIFAR-100. These datasets are chosen for their very different characteristics from MNIST and each other: TFD contains 3300 grayscale images that have relatively little variation and has seven different categories, whereas CIFAR-100 contains 50,000 colour images that have varied appearance and has 100 different categories.
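The preprocessing described, randomly projecting inputs down to 48 dimensions and rescaling each dimension to unit variance, can be sketched as follows. This is a simplified stand-in, not the authors' exact pipeline, and the data here is synthetic:

```python
import random

random.seed(0)
D_IN, D_OUT, N = 784, 48, 50

# Random Gaussian projection matrix and some synthetic input vectors.
P = [[random.gauss(0.0, 1.0) for _ in range(D_IN)] for _ in range(D_OUT)]
X = [[random.gauss(0.0, 1.0) for _ in range(D_IN)] for _ in range(N)]

# Project every input down to D_OUT dimensions.
Z = [[sum(p * x for p, x in zip(row, xvec)) for row in P] for xvec in X]

# Rescale each output dimension to unit variance across the dataset.
for d in range(D_OUT):
    col = [z[d] for z in Z]
    mean = sum(col) / N
    var = sum((v - mean) ** 2 for v in col) / N
    scale = var ** 0.5
    for z in Z:
        z[d] /= scale
```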
All algorithms are tuned on the training objective function. For hand-engineered algorithms, this entails choosing the best hyperparameters; for learned algorithms, this entails meta-training on the objective function. We compare to seven hand-engineered algorithms: stochastic gradient descent, momentum, conjugate gradient, L-BFGS, ADAM, AdaGrad and RMSprop. In addition, we compare to an optimization algorithm meta-trained using the method described in (Andrychowicz et al., 2016) on the same training objective function (training a two-layer neural net on randomly projected and normalized MNIST) under the same setting (a time horizon of 400 iterations and a mini-batch size of 64).
First, we examine the performance of various optimization algorithms on similar objective functions. The optimization problems under consideration are those for training neural nets that have the same number of input and hidden units (48 and 48) as those used during meta-training. The number of output units varies with the number of categories in each dataset. We use the same mini-batch size as that used during meta-training. As shown in Figure 1, the optimization algorithm meta-trained using our method (which we will refer to as Predicted Step Descent) consistently descends to the optimum the fastest across all datasets. On the other hand, other algorithms are not as consistent and the relative ranking of other algorithms varies by dataset. This suggests that Predicted Step Descent has learned to be robust to variations in the data distributions, despite being trained on only one objective function, which is associated with a very specific data distribution that characterizes MNIST. It is also interesting to note that while the
Learning to Optimize Neural Nets
(a) (b) (c) | 1703.00441#31 | Learning to Optimize Neural Nets | Learning to Optimize is a recently proposed framework for learning
optimization algorithms using reinforcement learning. In this paper, we explore
learning an optimization algorithm for training shallow neural nets. Such
high-dimensional stochastic optimization problems present interesting
challenges for existing reinforcement learning algorithms. We develop an
extension that is suited to learning optimization algorithms in this setting
and demonstrate that the learned optimization algorithm consistently
outperforms other known optimization algorithms even on unseen tasks and is
robust to changes in stochasticity of gradients and the neural net
architecture. More specifically, we show that an optimization algorithm trained
with the proposed method on the problem of training a neural net on MNIST
generalizes to the problems of training neural nets on the Toronto Faces
Dataset, CIFAR-10 and CIFAR-100. | http://arxiv.org/pdf/1703.00441 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 10 pages, 15 figures | null | cs.LG | 20170301 | 20171130 | [
{
"id": "1606.01467"
},
{
"id": "1606.04474"
},
{
"id": "1602.04062"
},
{
"id": "1502.03492"
},
{
"id": "1504.00702"
}
] |
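The experimental setup in the excerpt above (a one-hidden-layer net with 48 input and 48 hidden units, trained with mini-batch gradient steps) can be sketched in plain NumPy. This is not the paper's code: the output dimension (10), the learning rate, and the synthetic data are assumptions for illustration, and plain SGD stands in for the hand-engineered baselines being compared.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n = 48, 48, 10, 512

# Synthetic, learnable data: labels come from a fixed random linear map.
X = rng.standard_normal((n, n_in))
y = (X @ rng.standard_normal((n_in, n_out))).argmax(axis=1)

W1 = 0.1 * rng.standard_normal((n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = 0.1 * rng.standard_normal((n_hid, n_out)); b2 = np.zeros(n_out)

def loss_and_grads(xb, yb):
    h = np.maximum(xb @ W1 + b1, 0.0)                    # ReLU hidden layer
    logits = h @ W2 + b2
    logits = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(yb)), yb]).mean()     # cross-entropy
    d = p.copy()
    d[np.arange(len(yb)), yb] -= 1.0
    d /= len(yb)
    dh = (d @ W2.T) * (h > 0)                            # backprop through ReLU
    return loss, (xb.T @ dh, dh.sum(0), h.T @ d, d.sum(0))

loss_before, _ = loss_and_grads(X, y)
lr, batch = 0.2, 64
for _ in range(500):
    idx = rng.integers(0, n, size=batch)                 # mini-batch gradient estimate
    _, (gW1, gb1, gW2, gb2) = loss_and_grads(X[idx], y[idx])
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
loss_after, _ = loss_and_grads(X, y)
print(loss_before > loss_after)
```

Any of the compared optimizers can be dropped into the update step in place of the plain SGD rule; the objective-versus-iteration curves in the figures are traces of exactly this loop.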
1703.00441 | 32 | Learning to Optimize Neural Nets
Figure 4. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 10. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
Figure 5. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 for 800 iterations with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.
algorithm meta-trained using (Andrychowicz et al., 2016) (which we will refer to as L2LBGDBGD) performs well on CIFAR, it is unable to reach the optimum on TFD. | 1703.00441#32 | Learning to Optimize Neural Nets
1703.00441 | 33 | Next, we change the architecture of the neural nets and see if Predicted Step Descent generalizes to the new architecture. We increase the number of input units to 100 and the number of hidden units to 200, so that the number of parameters is roughly increased by a factor of 8. As shown in Figure 2, Predicted Step Descent consistently outperforms other algorithms on each dataset, despite having not been trained to optimize neural nets of this architecture. Interestingly, while it exhibited a bit of oscillation initially on TFD and CIFAR-10, it quickly recovered and overtook other algorithms, which is reminiscent of the phenomenon reported in (Li & Malik, 2016) for low-dimensional optimization problems. This suggests that it has learned to detect when it is performing poorly and knows how to change tack accordingly. L2LBGDBGD experienced difficulties on TFD and CIFAR-10 as well, but slowly diverged. | 1703.00441#33 | Learning to Optimize Neural Nets
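The "factor of 8" in the excerpt above is easy to check: for a one-hidden-layer fully connected net, going from 48 inputs and 48 hidden units to 100 inputs and 200 hidden units multiplies the parameter count by roughly eight. A small sketch; the 10-way output layer is an assumption, and the exact ratio depends on the number of output units.

```python
def mlp_param_count(n_in, n_hid, n_out):
    # Weights + biases of both layers of a one-hidden-layer net.
    return n_in * n_hid + n_hid + n_hid * n_out + n_out

small = mlp_param_count(48, 48, 10)    # meta-training architecture
large = mlp_param_count(100, 200, 10)  # enlarged architecture from this section
print(small, large, large / small)     # ratio comes out close to 8
```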
1703.00441 | 34 | from 64 to 10 on both the original architecture with 48 input and hidden units and the enlarged architecture with 100 input units and 200 hidden units. As shown in Figure 3, on the original architecture, Predicted Step Descent still outperforms all other algorithms and is able to handle the increased stochasticity fairly well. In contrast, conjugate gradient and L2LBGDBGD had some difficulty handling the increased stochasticity on TFD and to a lesser extent, on CIFAR-10. In the former case, both diverged; in the latter case, both were progressing slowly towards the optimum.
On the enlarged architecture, Predicted Step Descent experienced some significant oscillations on TFD and CIFAR-10, but still managed to achieve a much better objective value than all the other algorithms. Many hand-engineered algorithms also experienced much greater oscillations than previously, suggesting that the optimization problems are inherently harder. L2LBGDBGD diverged fairly quickly on these two datasets.
We now investigate how robust Predicted Step Descent is to stochasticity of the gradients. To this end, we take a look at its performance when we reduce the mini-batch size | 1703.00441#34 | Learning to Optimize Neural Nets
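Reducing the mini-batch size from 64 to 10, as in the excerpt above, makes the gradient estimate noisier: the variance of a mean over B samples scales as 1/B, so the smaller batches are roughly 64/10 = 6.4 times noisier. A quick stdlib-only simulation with stand-in per-example gradients (the Gaussian model is an assumption for illustration, not the paper's data):

```python
import random
import statistics

random.seed(0)
# Stand-in per-example "gradients": i.i.d. draws with unit variance.
per_example = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def batch_mean_samples(batch_size, trials=2000):
    # Each trial mimics one mini-batch gradient estimate (a mean over the batch).
    return [statistics.fmean(random.sample(per_example, batch_size))
            for _ in range(trials)]

var_64 = statistics.pvariance(batch_mean_samples(64))
var_10 = statistics.pvariance(batch_mean_samples(10))
print(var_10 / var_64)  # roughly 64/10 = 6.4
```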
1703.00441 | 35 | We now investigate how robust Predicted Step Descent is to stochasticity of the gradients. To this end, we take a look at its performance when we reduce the mini-batch size
Finally, we try doubling the number of iterations. As shown in Figure 5, despite being trained over a time horizon of 400 iterations, Predicted Step Descent behaves reasonably beyond the number of iterations it is trained for.
# 5. Conclusion
In this paper, we presented a new method for learning optimization algorithms for high-dimensional stochastic problems. We applied the method to learning an optimization algorithm for training shallow neural nets. We showed that the algorithm learned using our method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on unrelated tasks/datasets like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. We also demonstrated that the learned optimization algorithm is robust to changes in the stochasticity of gradients and the neural net architecture.
and Da Costa, Joaquim Pinto. Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results. Machine Learning, 50(3):251–277, 2003. | 1703.00441#35 | Learning to Optimize Neural Nets
1703.00441 | 36 | and Da Costa, Joaquim Pinto. Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results. Machine Learning, 50(3):251–277, 2003.
Daniel, Christian, Taylor, Jonathan, and Nowozin, Sebastian. Learning step size controllers for robust neural network training. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
Domke, Justin. Generic methods for optimization-based modeling. In AISTATS, volume 22, pp. 318–326, 2012.
# References
Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.
Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
Feurer, Matthias, Springenberg, Jost Tobias, and Hutter, Frank. Initializing Bayesian hyperparameter optimization via meta-learning. In AAAI, pp. 1128–1135, 2015. | 1703.00441#36 | Learning to Optimize Neural Nets
1703.00441 | 37 | Baxter, Jonathan, Caruana, Rich, Mitchell, Tom, Pratt, Lorien Y, Silver, Daniel L, and Thrun, Sebastian. NIPS 1995 workshop on learning to learn: Knowledge consolidation and transfer in inductive systems. https://web.archive.org/web/20000618135816/http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transfer.html, 1995. Accessed: 2015-12-05.
Fu, Jie, Lin, Zichuan, Liu, Miao, Leonard, Nicholas, Feng, Jiashi, and Chua, Tat-Seng. Deep q-networks for accelerating the training of deep neural networks. arXiv preprint arXiv:1606.01467, 2016.
Gregor, Karol and LeCun, Yann. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 399–406, 2010.
Bengio, Y, Bengio, S, and Cloutier, J. Learning a synaptic learning rule. In Neural Networks, 1991., IJCNN-91-Seattle International Joint Conference on, volume 2, pp. 969 vol. 2. IEEE, 1991. | 1703.00441#37 | Learning to Optimize Neural Nets
1703.00441 | 38 | Hansen, Samantha. Using deep q-learning to control optimization hyperparameters. arXiv preprint arXiv:1602.04062, 2016.
Bengio, Yoshua. Gradient-based optimization of hyperparameters. Neural computation, 12(8):1889–1900, 2000.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012.
Bergstra, James S, Bardenet, Rémi, Bengio, Yoshua, and Kégl, Balázs. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pp. 2546–2554, 2011.
Bray, M, Koller-Meier, E, Muller, P, Van Gool, L, and Schraudolph, NN. 3D hand tracking by rapid stochastic gradient descent using a skinning model. In Visual Media Production, 2004 (CVMP). 1st European Conference on, pp. 59–68. IET, 2004. | 1703.00441#38 | Learning to Optimize Neural Nets
1703.00441 | 39 | Hochreiter, Sepp, Younger, A Steven, and Conwell, Peter R. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87–94. Springer, 2001.
Hutter, Frank, Hoos, Holger H, and Leyton-Brown, Kevin. Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization, pp. 507–523. Springer, 2011.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Brazdil, Pavel, Carrier, Christophe Giraud, Soares, Carlos, and Vilalta, Ricardo. Metalearning: applications to data mining. Springer Science & Business Media, 2008.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
Li, Ke and Malik, Jitendra. Learning to optimize. CoRR, abs/1606.01885, 2016. | 1703.00441#39 | Learning to Optimize Neural Nets
1703.00441 | 40 | Learning to Optimize Neural Nets
Li, Ke and Malik, Jitendra. Learning to optimize. CoRR, abs/1606.01885, 2016.
Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan P. Gradient-based hyperparameter optimization through reversible learning. arXiv preprint arXiv:1502.03492, 2015.
Ruvolo, Paul L, Fasel, Ian, and Movellan, Javier R. Optimization on a budget: A reinforcement learning approach. In Advances in Neural Information Processing Systems, pp. 1385–1392, 2009.
Schmidhuber, Jürgen. Optimal ordered problem solver. Machine Learning, 54(3):211–254, 2004.
Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pp. 2951–2959, 2012.
Sprechmann, Pablo, Litman, Roee, Yakar, Tal Ben, Bronstein, Alexander M, and Sapiro, Guillermo. Supervised sparse analysis and synthesis operators. In Advances in Neural Information Processing Systems, pp. 908–916, 2013. | 1703.00441#40 | Learning to Optimize Neural Nets
1703.00441 | 41 | Swersky, Kevin, Snoek, Jasper, and Adams, Ryan P. Multi-task bayesian optimization. In Advances in neural information processing systems, pp. 2004–2012, 2013.
Thrun, Sebastian and Pratt, Lorien. Learning to learn. Springer Science & Business Media, 2012.
Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 2012.
Vilalta, Ricardo and Drissi, Youssef. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002.
Wang, Huahua and Banerjee, Arindam. Bregman alternating direction method of multipliers. CoRR, abs/1306.3203, 2014. | 1703.00441#41 | Learning to Optimize Neural Nets
1702.08608 | 1 | From autonomous cars and adaptive email-filters to predictive policing systems, machine learning (ML) systems are increasingly ubiquitous; they outperform humans on specific tasks [Mnih et al., 2013, Silver et al., 2016, Hamill, 2017] and often guide processes of human understanding and decisions [Carton et al., 2016, Doshi-Velez et al., 2014]. The deployment of ML systems in complex applications has led to a surge of interest in systems optimized not only for expected task performance but also other important criteria such as safety [Otte, 2013, Amodei et al., 2016, Varshney and Alemzadeh, 2016], nondiscrimination [Bostrom and Yudkowsky, 2014, Ruggieri et al., 2010, Hardt et al., 2016], avoiding technical debt [Sculley et al., 2015], or providing the right to explanation [Goodman and Flaxman, 2016]. For ML systems to be used safely, satisfying these auxiliary criteria is critical. However, unlike measures of performance such as accuracy, these criteria often cannot be completely quantified. For example, we might not be able to enumerate all | 1702.08608#1 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 1 | Jeff Johnson Facebook AI Research New York
Matthijs Douze Facebook AI Research Paris
Hervé Jégou Facebook AI Research Paris
ABSTRACT Similarity search finds application in specialized database systems handling complex data such as images or videos, which are typically represented by high-dimensional features and require specific indexing structures. This paper tackles the problem of better utilizing GPUs for this task. While GPUs excel at data-parallel tasks, prior approaches are bottlenecked by algorithms that expose less parallelism, such as k-min selection, or make poor use of the memory hierarchy. We propose a design for k-selection that operates at up to 55% of theoretical peak performance, enabling a nearest neighbor implementation that is 8.5× faster than prior GPU state of the art. We apply it in different similarity search scenarios, by proposing optimized design for brute-force, approximate and compressed-domain search based on product quantization. In all these setups, we outperform the state of the art by large margins. Our implementation enables the construction of a high accuracy k-NN graph on 95 million images from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced our approach for the sake of comparison and reproducibility. | 1702.08734#1 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
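The brute-force building block behind the abstract above (exact nearest-neighbor search as one large matrix multiply followed by k-selection) fits in a few lines of NumPy. This is a CPU sketch for intuition, not the paper's GPU implementation; np.argpartition plays the role of the k-selection kernel, and all sizes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, nb, nq, k = 16, 1000, 5, 4
xb = rng.standard_normal((nb, d)).astype(np.float32)  # database vectors
xq = rng.standard_normal((nq, d)).astype(np.float32)  # query vectors

# ||q - b||^2 = ||q||^2 + ||b||^2 - 2 q.b : the dominant cost is one matmul.
dist = (xq ** 2).sum(1)[:, None] + (xb ** 2).sum(1)[None, :] - 2.0 * xq @ xb.T

# k-selection: pick the k smallest distances per query without a full sort,
# then order just those k candidates.
cand = np.argpartition(dist, k, axis=1)[:, :k]
order = np.argsort(np.take_along_axis(dist, cand, axis=1), axis=1)
knn = np.take_along_axis(cand, order, axis=1)

exact = np.argsort(dist, axis=1)[:, :k]  # reference: full sort per query
print((knn == exact).all())
```

The partial-selection step is exactly where a full sort wastes work: only k of nb results per query are needed, which is why the paper's k-selection design matters at scale.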
1702.08608 | 2 | unlike measures of performance such as accuracy, these criteria often cannot be completely quantified. For example, we might not be able to enumerate all unit tests required for the safe operation of a semi-autonomous car or all confounds that might cause a credit scoring system to be discriminatory. In such cases, a popular fallback is the criterion of interpretability: if the system can explain its reasoning, we then can verify whether that reasoning is sound with respect to these auxiliary criteria. | 1702.08608#2 | Towards A Rigorous Science of Interpretable Machine Learning
1702.08734 | 2 | as the underlying processes either have high arithmetic complexity and/or high data bandwidth demands [28], or cannot be effectively partitioned without failing due to communication overhead or representation quality [38]. Once produced, their manipulation is itself arithmetically intensive. However, how to utilize GPU assets is not straightforward. More generally, how to exploit new heterogeneous architectures is a key subject for the database community [9].
In this context, searching by numerical similarity rather than via structured relations is more suitable. This could be to find the most similar content to a picture, or to find the vectors that have the highest response to a linear classifier on all vectors of a collection.
One of the most expensive operations to be performed on large collections is to compute a k-NN graph. It is a directed graph where each vector of the database is a node and each edge connects a node to its k nearest neighbors. This is our flagship application. Note, state of the art methods like NN-Descent [15] have a large memory overhead on top of the dataset itself and cannot readily scale to the billion-sized databases we consider.
# INTRODUCTION | 1702.08734#2 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
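The k-NN graph construction described in the chunk above can be illustrated with a brute-force NumPy sketch for small collections. This is the naive O(n²) baseline that NN-Descent and the paper's GPU methods improve on; the function name and parameters are ours, not the Faiss API.

```python
import numpy as np

def knn_graph(x: np.ndarray, k: int) -> np.ndarray:
    """Brute-force k-NN graph: row i holds the indices of the k nearest
    neighbors of x[i] under squared L2 distance (self excluded)."""
    sq = (x ** 2).sum(axis=1)
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed for all pairs at once
    dist = sq[:, None] + sq[None, :] - 2.0 * (x @ x.T)
    np.fill_diagonal(dist, np.inf)  # a point is not its own neighbor
    return np.argsort(dist, axis=1)[:, :k]

rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 8))
graph = knn_graph(pts, k=5)  # one directed edge list per node
```

Each row of `graph` is the outgoing edge list of one node; the whole graph is n×k indices, which is exactly the structure whose billion-scale construction the paper targets.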
1702.08608 | 3 | Unfortunately, there is little consensus on what interpretability in machine learning is and how to evaluate it for benchmarking. Current interpretability evaluation typically falls into two categories. The first evaluates interpretability in the context of an application: if the system is useful in either a practical application or a simplified version of it, then it must be somehow interpretable (e.g. Ribeiro et al. [2016], Lei et al. [2016], Kim et al. [2015a], Doshi-Velez et al. [2015], Kim et al. [2015b]). The second evaluates interpretability via a quantifiable proxy: a researcher might first claim that some model class (e.g. sparse linear models, rule lists, gradient boosted trees) is interpretable and then present algorithms to optimize within that class (e.g. Bucilu et al. [2006], Wang et al. [2017], Wang and Rudin [2015], Lou et al. [2012]). | 1702.08608#3 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 3 | # INTRODUCTION
Images and videos constitute a new massive source of data for indexing and search. Extensive metadata for this content is often not available. Search and interpretation of this and other human-generated content, like text, is difficult and important. A variety of machine learning and deep learning algorithms are being used to interpret and classify these complex, real-world entities. Popular examples include the text representation known as word2vec [32], representations of images by convolutional neural networks [39, 19], and image descriptors for instance search [20]. Such representations or embeddings are usually real-valued, high-dimensional vectors of 50 to 1000+ dimensions. Many of these vector representations can only effectively be produced on GPU systems,
1https://github.com/facebookresearch/faiss | 1702.08734#3 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 4 | To a large extent, both evaluation approaches rely on some notion of "you'll know it when you see it." Should we be concerned about a lack of rigor? Yes and no: the notions of interpretability above appear reasonable because they are reasonable: they meet the first test of having face-validity on the correct test set of subjects: human beings. However, this basic notion leaves many kinds of questions unanswerable: Are all models in all defined-to-be-interpretable model classes equally interpretable? Quantifiable proxies such as sparsity may seem to allow for comparison, but how does one think about comparing a model sparse in features to a model sparse in prototypes? Moreover, do all applications have the same interpretability needs? If we are to move this field forward (to compare methods and understand when methods may generalize), we need to formalize these notions and make them evidence-based.
The objective of this review is to chart a path toward the definition and rigorous evaluation of interpretability. The need is urgent: recent European Union regulation will require algorithms
*Authors contributed equally.
[Figure 1: application-grounded evaluation (real humans, real tasks), human-grounded evaluation (real humans, simple tasks), functionally-grounded evaluation (no real humans, proxy tasks); more specific and costly toward the top] | 1702.08608#4 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 4 | 1https://github.com/facebookresearch/faiss
Such applications must deal with the curse of dimensionality [46], rendering both exhaustive search and exact indexing for non-exhaustive search impractical on billion-scale databases. This is why there is a large body of work on approximate search and/or graph construction. To handle huge datasets that do not fit in RAM, several approaches employ an internal compressed representation of the vectors using an encoding. This is especially convenient for memory-limited devices like GPUs. It turns out that accepting a minimal accuracy loss results in orders of magnitude of compression [21]. The most popular vector compression methods can be classified into either binary codes [18, 22], or quantization methods [25, 37]. Both have the desirable property that searching neighbors does not require reconstructing the vectors. | 1702.08734#4 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
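The compression-by-encoding that the chunk above describes can be illustrated with a toy product quantizer in NumPy: split each vector into m sub-vectors and quantize each sub-space independently. This is illustrative only; the function names and parameter choices are ours, not the Faiss API, and real implementations use a properly trained k-means.

```python
import numpy as np

def train_pq(x, m, ks, iters=10, seed=0):
    """Toy product quantizer: split the d dimensions into m sub-vectors and
    learn a ks-centroid codebook per sub-space with a few k-means steps."""
    rng = np.random.default_rng(seed)
    codebooks = []
    for sub in np.split(x, m, axis=1):
        cent = sub[rng.choice(len(sub), ks, replace=False)].copy()
        for _ in range(iters):
            assign = ((sub[:, None, :] - cent[None, :, :]) ** 2).sum(-1).argmin(1)
            for j in range(ks):
                if (assign == j).any():
                    cent[j] = sub[assign == j].mean(0)
        codebooks.append(cent)
    return codebooks

def pq_encode(x, codebooks):
    """Each vector becomes m small integers (here one byte each)."""
    m = len(codebooks)
    codes = np.empty((len(x), m), dtype=np.uint8)
    for i, (sub, cent) in enumerate(zip(np.split(x, m, axis=1), codebooks)):
        codes[:, i] = ((sub[:, None, :] - cent[None, :, :]) ** 2).sum(-1).argmin(1)
    return codes

rng = np.random.default_rng(1)
data = rng.standard_normal((512, 16)).astype(np.float32)
books = train_pq(data, m=4, ks=16, iters=5)
codes = pq_encode(data, books)  # 4 bytes per vector instead of 64
```

With m = 8 sub-quantizers of 256 centroids each, a vector costs 8 bytes instead of hundreds of floats, which is the orders-of-magnitude compression the chunk refers to.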
1702.08608 | 5 | [Figure 1: application-grounded evaluation (real humans, real tasks), human-grounded evaluation (real humans, simple tasks), functionally-grounded evaluation (no real humans, proxy tasks); more specific and costly toward the top]
Figure 1: Taxonomy of evaluation approaches for interpretability
that make decisions based on user-level predictors, which "significantly affect" users to provide explanation ("right to explanation") by 2018 [Parliament and of the European Union, 2016]. In addition, the volume of research on interpretability is rapidly growing.1 In section 1, we discuss what interpretability is and contrast with other criteria such as reliability and fairness. In section 2, we consider scenarios in which interpretability is needed and why. In section 3, we propose a taxonomy for the evaluation of interpretability: application-grounded, human-grounded and functionally-grounded. We conclude with important open questions in section 4 and specific suggestions for researchers doing work in interpretability in section 5.
# 1 What is Interpretability? | 1702.08608#5 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 5 | Our paper focuses on methods based on product quantization (PQ) codes, as these were shown to be more effective than binary codes [34]. In addition, binary codes incur important overheads for non-exhaustive search methods [35]. Several improvements were proposed after the original product quantization proposal known as IVFADC [25]; most are difficult to implement efficiently on GPU. For instance, the inverted multi-index [4], useful for high-speed/low-quality operating points, depends on a complicated "multi-sequence" algorithm. The optimized product quantization or OPQ [17] is a linear transformation on the input vectors that improves the accuracy of the product quantization; it can be applied as a pre-processing. The SIMD-optimized IVFADC implementation from [2] operates only with sub-optimal parameters (few coarse quantization centroids). Many other methods, like LOPQ and the Polysemous codes [27, 16] are too complex to be implemented efficiently on GPUs. | 1702.08734#5 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
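What makes PQ codes searchable without reconstruction, and what IVFADC-style methods above build on, is the asymmetric distance computation (ADC): per-sub-space lookup tables turn each distance evaluation into m table reads and adds. A minimal sketch follows, with untrained random codebooks and hypothetical names; it is not the paper's GPU implementation.

```python
import numpy as np

def adc_search(query, codes, codebooks, k=5):
    """Asymmetric distance computation: precompute, per sub-space, the squared
    distance from the query sub-vector to every centroid, then score each
    database code with table lookups only (vectors are never reconstructed)."""
    m = len(codebooks)
    tables = [((cent - q) ** 2).sum(1)
              for q, cent in zip(np.split(query, m), codebooks)]
    dist = np.zeros(len(codes))
    for i, table in enumerate(tables):
        dist += table[codes[:, i]]  # one lookup per sub-quantizer
    return np.argsort(dist)[:k]

rng = np.random.default_rng(2)
d, m, ks = 16, 4, 16
books = [rng.standard_normal((ks, d // m)) for _ in range(m)]
db_codes = rng.integers(0, ks, size=(1000, m))  # toy database of PQ codes
top = adc_search(rng.standard_normal(d), db_codes, books, k=5)
```

Note the asymmetry: the query stays uncompressed while the database is scanned in the compressed domain, which is why the tables depend on the query and must be rebuilt per search.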
1702.08608 | 6 | # 1 What is Interpretability?
Deï¬nition Interpret means to explain or to present in understandable terms.2 In the context of ML systems, we deï¬ne interpretability as the ability to explain or to present in understandable terms to a human. A formal deï¬nition of explanation remains elusive; in the ï¬eld of psychology, Lombrozo [2006] states âexplanations are... the currency in which we exchanged beliefsâ and notes that questions such as what constitutes an explanation, what makes some explanations better than others, how explanations are generated and when explanations are sought are just beginning to be addressed. Researchers have classiï¬ed explanations from being âdeductive-nomologicalâ in nature [Hempel and Oppenheim, 1948] (i.e. as logical proofs) to providing some sense of mechanism [Bechtel and Abrahamsen, 2005, Chater and Oaksford, 2006, Glennan, 2002]. Keil [2006] considered a broader deï¬nition: implicit explanatory understanding. In this work, we propose data-driven ways to derive operational deï¬nitions and evaluations of explanations, and thus, interpretability. | 1702.08608#6 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 6 | There are many implementations of similarity search on GPUs, but mostly with binary codes [36], small datasets [44], or exhaustive search [14, 40, 41]. To the best of our knowledge, only the work by Wieschollek et al. [47] appears suitable for billion-scale datasets with quantization codes. This is the prior state of the art on GPUs, which we compare against in Section 6.4.
This paper makes the following contributions:
• a GPU k-selection algorithm, operating in fast register memory and flexible enough to be fusable with other kernels, for which we provide a complexity analysis;
• a near-optimal algorithmic layout for exact and approximate k-nearest neighbor search on GPU;
• a range of experiments that show that these improvements outperform previous art by a large margin on mid- to large-scale nearest-neighbor search tasks, in single or multi-GPU configurations. | 1702.08734#6 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
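For context on the k-selection contribution: the baseline that GPU k-selection designs are measured against is the classic bounded-heap scan, O(n log k) per stream. Here is a CPU sketch of that baseline (the paper's in-register GPU algorithm is a different design; this only shows the problem being solved).

```python
import heapq

def k_min_select(stream, k):
    """Streaming k-smallest selection: keep a max-heap of size k (values
    negated so heapq's min-heap acts as a max-heap), so each element costs
    O(log k). Returns (value, index) pairs in ascending order of value."""
    heap = []  # holds (-value, index); heap[0] is the current k-th smallest
    for i, v in enumerate(stream):
        if len(heap) < k:
            heapq.heappush(heap, (-v, i))
        elif -heap[0][0] > v:          # v beats the worst of the current top-k
            heapq.heapreplace(heap, (-v, i))
    return sorted((-nv, i) for nv, i in heap)

vals = [9.0, 1.5, 7.2, 0.3, 8.8, 2.1, 5.5]
print(k_min_select(vals, 3))  # -> [(0.3, 3), (1.5, 1), (2.1, 5)]
```

On a GPU this data-dependent branching and tiny working set map poorly to thousands of threads, which is why the paper replaces the heap with a register-resident selection structure fusable with the distance kernel.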
1702.08608 | 7 | Interpretability is used to confirm other important desiderata of ML systems. There exist many auxiliary criteria that one may wish to optimize. Notions of fairness or unbiasedness imply that protected groups (explicit or implicit) are not somehow discriminated against. Privacy means the method protects sensitive information in the data. Properties such as reliability and robustness ascertain whether algorithms reach certain levels of performance in the face of parameter or input variation. Causality implies that the predicted change in output due to a perturbation will occur in the real system. Usable methods provide information that assist users to accomplish a task (e.g. a knob to tweak image lighting) while trusted systems have the confidence of human users (e.g. aircraft collision avoidance systems). Some areas, such as the fairness [Hardt et al.,
1Google Scholar finds more than 20,000 publications related to interpretability in ML in the last five years. 2Merriam-Webster dictionary, accessed 2017-02-07
| 1702.08608#7 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |