# Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

Chelsea Finn, Pieter Abbeel, Sergey Levine. ICML 2017. arXiv:1703.03400 [cs.LG] (http://arxiv.org/pdf/1703.03400). Code at https://github.com/cbfinn/maml, videos of RL results at https://sites.google.com/view/maml, blog post at http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/

# Abstract

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
Locomotion. To study how well MAML can scale to more complex deep RL problems, we also study adaptation on high-dimensional locomotion tasks with the MuJoCo simulator (Todorov et al., 2012). The tasks require two simulated robots, a planar cheetah and a 3D quadruped (the "ant"), to run in a particular direction or at a particular velocity. In the goal velocity experiments, the reward is the negative absolute value between the current velocity of the agent and a goal, which is chosen uniformly at random between 0.0 and 2.0 for the cheetah and between 0.0 and 3.0 for the ant. In the goal direction experiments, the reward is the magnitude of the velocity in either the forward or backward direction, chosen at random for each task in p(T). The horizon is H = 200, with 20 rollouts per gradient step for all problems except the ant forward/backward task, which used 40 rollouts per step. The results in Figure 5 show that MAML learns a model that can quickly adapt its velocity and direction with even just a single gradient update, and continues to improve with more gradient steps. The results also show that, on these challenging tasks, the MAML initialization substantially outperforms random initialization and pretraining. In fact, pretraining is in some cases worse than random initialization, a fact observed in prior RL work (Parisotto et al., 2016).
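The two task families above can be summarized as follows. This is an illustrative sketch of the reward definitions and sampling ranges given in the text, with assumed helper names and a velocity reading supplied by the simulator; it is not the authors' environment code.

```python
import numpy as np

def goal_velocity_reward(forward_velocity, goal_velocity):
    # Negative absolute difference between the agent's current velocity and the goal.
    return -abs(forward_velocity - goal_velocity)

def goal_direction_reward(forward_velocity, direction):
    # direction is +1 (forward) or -1 (backward); reward is velocity magnitude in that direction.
    return direction * forward_velocity

def sample_task(robot="cheetah", family="goal_velocity", rng=np.random):
    if family == "goal_velocity":
        high = 2.0 if robot == "cheetah" else 3.0   # 0.0-2.0 for the cheetah, 0.0-3.0 for the ant
        return {"goal_velocity": rng.uniform(0.0, high)}
    return {"direction": rng.choice([-1.0, 1.0])}   # forward/backward chosen at random per task
```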
# 6. Discussion and Future Work
We introduced a meta-learning method based on learning easily adaptable model parameters through gradient descent. Our approach has a number of benefits. It is simple and does not introduce any learned parameters for meta-learning. It can be combined with any model representation that is amenable to gradient-based training, and any differentiable objective, including classification, regression, and reinforcement learning. Lastly, since our method merely produces a weight initialization, adaptation can be performed with any amount of data and any number of gradient steps, though we demonstrate state-of-the-art results on classification with only one or five examples per class. We also show that our method can adapt an RL agent using policy gradients and a very modest amount of experience.
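As a concrete illustration of this two-level optimization, a minimal sketch of one meta-training step for a supervised loss is given below. It assumes a functional `model(params, x)`, a `loss_fn(pred, y)`, tasks given as support/query splits, and parameters that require gradients; the plain-SGD meta-update and all names are assumptions made for readability, not the authors' TensorFlow implementation.

```python
import torch

def maml_step(params, model, loss_fn, tasks, inner_lr=0.01, meta_lr=0.001, inner_steps=1):
    # tasks: list of (support_x, support_y, query_x, query_y) tuples sampled from p(T).
    meta_grads = [torch.zeros_like(p) for p in params]
    for sx, sy, qx, qy in tasks:
        # Inner loop: adapt a copy of the initialization to the task's support set.
        adapted = [p.clone() for p in params]
        for _ in range(inner_steps):
            inner_loss = loss_fn(model(adapted, sx), sy)
            grads = torch.autograd.grad(inner_loss, adapted, create_graph=True)
            adapted = [p - inner_lr * g for p, g in zip(adapted, grads)]
        # Outer objective: evaluate the adapted parameters on the query set and
        # differentiate back to the shared initialization.
        outer_loss = loss_fn(model(adapted, qx), qy)
        task_grads = torch.autograd.grad(outer_loss, params)
        meta_grads = [m + g for m, g in zip(meta_grads, task_grads)]
    # Meta-update of the initialization (plain SGD here for clarity).
    return [(p - meta_lr * g / len(tasks)).detach().requires_grad_(True)
            for p, g in zip(params, meta_grads)]
```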
Reusing knowledge from past tasks may be a crucial ingredient in making high-capacity scalable models, such as deep neural networks, amenable to fast training with small datasets. We believe that this work is one step toward a simple and general-purpose meta-learning technique that can be applied to any problem and any model. Further research in this area can make multitask initialization a standard ingredient in deep learning and reinforcement learning.
# Acknowledgements
The authors would like to thank Xi Chen and Trevor Darrell for helpful discussions, Yan Duan and Alex Lee for technical advice, Nikhil Mishra, Haoran Tang, and Greg Kahn for feedback on an early draft of the paper, and the anonymous reviewers for their comments. This work was supported in part by an ONR PECASE award and an NSF GRFP award.

# References
Ha, David, Dai, Andrew, and Le, Quoc V. Hypernetworks. International Conference on Learning Representations (ICLR), 2017.
Hochreiter, Sepp, Younger, A Steven, and Conwell, Peter R. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks. Springer, 2001.
Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. In Neural Information Processing Systems (NIPS), 2016.
Husken, Michael and Goerick, Christian. Fast learning for problem classes using knowledge based network initialization. In Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, volume 6, pp. 619–624. IEEE, 2000.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning (ICML), 2015.
Kaiser, Lukasz, Nachum, Ofir, Roy, Aurko, and Bengio, Samy. Learning to remember rare events. International Conference on Learning Representations (ICLR), 2017.
Bengio, Samy, Bengio, Yoshua, Cloutier, Jocelyn, and Gecsei, Jan. On the optimization of a synaptic learning rule. In Optimality in Artificial and Biological Neural Networks, pp. 6–8, 1992.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.
Bengio, Yoshua, Bengio, Samy, and Cloutier, Jocelyn. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.
Donahue, Jeff, Jia, Yangqing, Vinyals, Oriol, Hoffman, Judy, Zhang, Ning, Tzeng, Eric, and Darrell, Trevor. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.
Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka, et al. Overcoming catastrophic forgetting in neural networks. arXiv preprint arXiv:1612.00796, 2016.
Koch, Gregory. Siamese neural networks for one-shot image recognition. ICML Deep Learning Workshop, 2015.
Duan, Yan, Chen, Xi, Houthooft, Rein, Schulman, John, and Abbeel, Pieter. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning (ICML), 2016a.
Krähenbühl, Philipp, Doersch, Carl, Donahue, Jeff, and Darrell, Trevor. Data-dependent initializations of convolutional neural networks. International Conference on Learning Representations (ICLR), 2016.
Duan, Yan, Schulman, John, Chen, Xi, Bartlett, Peter L, Sutskever, Ilya, and Abbeel, Pieter. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016b.
Lake, Brenden M, Salakhutdinov, Ruslan, Gross, Jason, and Tenenbaum, Joshua B. One shot learning of simple visual concepts. In Conference of the Cognitive Science Society (CogSci), 2011.
Edwards, Harrison and Storkey, Amos. Towards a neural statistician. International Conference on Learning Representations (ICLR), 2017.
Li, Ke and Malik, Jitendra. Learning to optimize. International Conference on Learning Representations (ICLR), 2017.
Goodfellow, Ian J, Shlens, Jonathon, and Szegedy, Christian. Explaining and harnessing adversarial examples. International Conference on Learning Representations (ICLR), 2015.
Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning (ICML), 2015.
Munkhdalai, Tsendsuren and Yu, Hong. Meta networks. International Conference on Machine Learning (ICML), 2017.
Snell, Jake, Swersky, Kevin, and Zemel, Richard S. Prototypical networks for few-shot learning. arXiv preprint arXiv:1703.05175, 2017.
Naik, Devang K and Mammone, RJ. Meta-neural networks that learn by learning. In International Joint Conference on Neural Networks (IJCNN), 1992.
Thrun, Sebastian and Pratt, Lorien. Learning to learn. Springer Science & Business Media, 1998.
Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Actor-mimic: Deep multitask and transfer reinforcement learning. International Conference on Learning Representations (ICLR), 2016.
Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. Mujoco: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems (IROS), 2012.
Ravi, Sachin and Larochelle, Hugo. Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), 2017.
Vinyals, Oriol, Blundell, Charles, Lillicrap, Tim, Wierstra, Daan, et al. Matching networks for one shot learning. In Neural Information Processing Systems (NIPS), 2016.
Rei, Marek. Online representation learning in recurrent neural language models. arXiv preprint arXiv:1508.03854, 2015.
Wang, Jane X, Kurth-Nelson, Zeb, Tirumala, Dhruva, Soyer, Hubert, Leibo, Joel Z, Munos, Remi, Blundell, Charles, Kumaran, Dharshan, and Botvinick, Matt. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
Rezende, Danilo Jimenez, Mohamed, Shakir, Danihelka, Ivo, Gregor, Karol, and Wierstra, Daan. One-shot generalization in deep generative models. International Conference on Machine Learning (ICML), 2016.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Neural Information Processing Systems (NIPS), 2016.
Santoro, Adam, Bartunov, Sergey, Botvinick, Matthew, Wierstra, Daan, and Lillicrap, Timothy. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning (ICML), 2016.
Saxe, Andrew, McClelland, James, and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. International Conference on Learning Representations (ICLR), 2014.
Schmidhuber, Jurgen. Evolutionary principles in self-referential learning. (On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
Schmidhuber, Jürgen. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 1992.
Schulman, John, Levine, Sergey, Abbeel, Pieter, Jordan, Michael I, and Moritz, Philipp. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015.
Shyam, Pranav, Gupta, Shubham, and Dukkipati, Ambedkar. Attentive recurrent comparators. International Conference on Machine Learning (ICML), 2017.
# A. Additional Experiment Details
In this section, we provide additional details of the experimental set-up and hyperparameters.
# A.1. Classification
For N-way, K-shot classification, each gradient is computed using a batch size of N K examples. For Omniglot, the 5-way convolutional and non-convolutional MAML models were each trained with 1 gradient step with step size α = 0.4 and a meta batch-size of 32 tasks. The network was evaluated using 3 gradient steps with the same step size α = 0.4. The 20-way convolutional MAML model was trained and evaluated with 5 gradient steps with step size α = 0.1. During training, the meta batch-size was set to 16 tasks. For MiniImagenet, both models were trained using 5 gradient steps of size α = 0.01, and evaluated using 10 gradient steps at test time. Following Ravi & Larochelle (2017), 15 examples per class were used for evaluating the post-update meta-gradient. We used a meta batch-size of 4 and 2 tasks for 1-shot and 5-shot training respectively. All models were trained for 60000 iterations on a single NVIDIA Pascal Titan X GPU.
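For reference, the settings listed above can be collected in one place as follows. Only the numeric values come from the text; the dictionary layout and key names are our own shorthand, not a configuration format used by the authors.

```python
# Few-shot classification hyperparameters as stated above (layout and keys are illustrative).
maml_classification_settings = {
    "omniglot_5way":  {"inner_steps_train": 1, "inner_steps_eval": 3,
                       "inner_lr": 0.4, "meta_batch_size": 32},
    "omniglot_20way": {"inner_steps_train": 5, "inner_steps_eval": 5,
                       "inner_lr": 0.1, "meta_batch_size": 16},
    "miniimagenet":   {"inner_steps_train": 5, "inner_steps_eval": 10,
                       "inner_lr": 0.01,
                       "meta_batch_size": {"1shot": 4, "5shot": 2},
                       "query_examples_per_class": 15},
    "train_iterations": 60000,
}
```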
# C.1. Multi-task baselines

The pretraining baseline in the main text trained a single network on all tasks, which we referred to as "pretraining on all tasks". To evaluate the model, as with MAML, we fine-tuned this model on each test task using K examples. In the domains that we study, different tasks involve different output values for the same input. As a result, by pre-training on all tasks, the model would learn to output the average output for a particular input value. In some instances, this model may learn very little about the actual domain, and instead learn about the range of the output space.
We experimented with a multi-task method to provide a point of comparison, where instead of averaging in the output space, we averaged in the parameter space. To achieve averaging in parameter space, we sequentially trained 500 separate models on 500 tasks drawn from p(T). Each model was initialized randomly and trained on a large amount of data from its assigned task. We then took the average parameter vector across models and fine-tuned on 5 datapoints with a tuned step size. All of our experiments for this method were on the sinusoid task because of computational requirements. The error of the individual regressors was low: less than 0.02 on their respective sine waves.
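A minimal sketch of this parameter-space averaging is given below; `train_on_task` is an assumed helper that returns a trained regressor's flattened parameter vector and is not part of the paper.

```python
import numpy as np

def multi_task_average(tasks, train_on_task):
    # Train one regressor per task (e.g. 500 of them), then average their parameters.
    params = [train_on_task(task) for task in tasks]
    return np.mean(np.stack(params, axis=0), axis=0)

# The averaged vector is then fine-tuned on K = 5 datapoints from a held-out task
# with a tuned step size, mirroring how the other baselines are evaluated.
```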
# A.2. Reinforcement Learning
In all reinforcement learning experiments, the MAML policy was trained using a single gradient step with α = 0.1. During evaluation, we found that halving the learning rate after the first gradient step produced superior performance. Thus, the step size during adaptation was set to α = 0.1 for the first step, and α = 0.05 for all future steps. The step sizes for the baseline methods were manually tuned for each domain. In the 2D navigation, we used a meta batch size of 20; in the locomotion problems, we used a meta batch size of 40 tasks. The MAML models were trained for up to 500 meta-iterations, and the model with the best average return during training was used for evaluation. For the ant goal velocity task, we added a positive reward bonus at each timestep to prevent the ant from ending the episode.
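The test-time adaptation schedule described above (α = 0.1 on the first gradient step, then 0.05) amounts to the small helper below; the function name is ours and the snippet is purely illustrative.

```python
def adaptation_step_size(step_index, first_step_size=0.1):
    # 0.1 for the first adaptation step, halved to 0.05 for every later step.
    return first_step_size if step_index == 0 else first_step_size / 2.0

step_sizes = [adaptation_step_size(i) for i in range(4)]   # [0.1, 0.05, 0.05, 0.05]
```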
We tried three variants of this set-up. During training of the individual regressors, we tried using one of the following: no regularization, standard ℓ2 weight decay, and ℓ2 weight regularization to the mean parameter vector thus far of the trained regressors. The latter two variants encourage the individual models to find parsimonious solutions. When using regularization, we set the magnitude of the regularization to be as high as possible without significantly deterring performance. In our results, we refer to this approach as "multi-task". As seen in the results in Table 2, we find averaging in the parameter space (multi-task) performed worse than averaging in the output space (pretraining on all tasks). This suggests that it is difficult to find parsimonious solutions to multiple tasks when training on tasks separately, and that MAML is learning a solution that is more sophisticated than the mean optimal parameter vector.
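The two regularized variants named above can be sketched as penalties on a flattened parameter vector; the function names and the strength argument are illustrative, not the paper's notation.

```python
import numpy as np

def l2_weight_decay(params, strength):
    # Standard l2 penalty toward zero.
    return strength * np.sum(params ** 2)

def l2_to_running_mean(params, mean_params_so_far, strength):
    # l2 penalty toward the mean parameter vector of the regressors trained so far,
    # encouraging each per-task solution to stay close to a shared point.
    return strength * np.sum((params - mean_params_so_far) ** 2)
```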
# B. Additional Sinusoid Results
In Figure 6, we show the full quantitative results of the MAML model trained on 10-shot learning and evaluated on 5-shot, 10-shot, and 20-shot. In Figure 7, we show the qualitative performance of MAML and the pretrained baseline on randomly sampled sinusoids.
# C. Additional Comparisons
In this section, we include more thorough evaluations of our approach, including additional multi-task baselines and a comparison representative of the approach of Rei (2015).
# C.2. Context vector adaptation
Rei (2015) developed a method which learns a context vector that can be adapted online, with an application to recurrent language models. The parameters in this context vector are learned and adapted in the same way as the parameters in the MAML model. To provide a comparison to using such a context vector for meta-learning problems, we concatenated a set of free parameters z to the input x, and only allowed the gradient steps to modify z, rather than modifying the model parameters θ, as in MAML. For image inputs, z was concatenated channel-wise with the input image.
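A minimal sketch of this context-vector baseline is shown below: a free vector z is appended to the input and only z is updated at adaptation time, while the model weights stay fixed. The use of PyTorch autograd, the shapes, and the names are assumptions made for illustration.

```python
import torch

def adapt_context_vector(model, z_init, x_support, y_support, loss_fn, lr=0.01, steps=5):
    # z_init: tensor of shape (1, context_dim); the model's own weights are never modified.
    z = z_init.clone().requires_grad_(True)
    for _ in range(steps):
        z_batch = z.expand(x_support.shape[0], -1)            # broadcast z over the batch
        loss = loss_fn(model(torch.cat([x_support, z_batch], dim=-1)), y_support)
        (grad,) = torch.autograd.grad(loss, z)
        z = (z - lr * grad).detach().requires_grad_(True)     # gradient step on z only
    return z
```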
[Figure 6 panels: k-shot regression for k = 5, 10, and 20; curves for MAML (ours), the pretrained baseline (step size 0.01 or 0.02), and the oracle; axes show mean squared error versus number of gradient steps.]
Figure 6. Quantitative sinusoid regression results showing test-time learning curves with varying numbers of K test-time samples. Each gradient step is computed using the same K examples. Note that MAML continues to improve with additional gradient steps without overfitting to the extremely small dataset during meta-testing, and achieves a loss that is substantially lower than the baseline fine-tuning approach.
Table 2. Additional multi-task baselines on the sinusoid regression domain, showing 5-shot mean squared error. The results suggest that MAML is learning a solution more sophisticated than the mean optimal parameter vector.
| num. grad steps | multi-task, no reg | multi-task, l2 reg | multi-task, reg to mean θ | pretrain on all tasks | MAML (ours) |
|---|---|---|---|---|---|
| 1 | 4.19 | 7.18 | 2.91 | 2.41 | 0.67 |
| 5 | 3.85 | 5.69 | 2.72 | 2.23 | 0.38 |
| 10 | 3.69 | 5.60 | 2.71 | 2.19 | 0.35 |
We ran this method on Omniglot and two RL domains following the same experimental protocol. We report the results in Tables 3, 4, and 5. Learning an adaptable context vector performed well on the toy pointmass problem, but sub-par on more difficult problems, likely due to a less flexible meta-optimization.
Table 3. 5-way Omniglot Classification

| | 1-shot | 5-shot |
|---|---|---|
| context vector | 94.9 ± 0.9% | 97.7 ± 0.3% |
| MAML | 98.7 ± 0.4% | 99.9 ± 0.1% |

Table 4. 2D Pointmass, average return (values for successive numbers of gradient steps):
- context vector: −42.42, −13.90, −5.17, −3.18
- MAML (ours): −40.41, −11.68, −3.33, −3.23
Table 5. Half-cheetah forward/backward, average return:
- context vector: −40.49, −44.08, −38.27, −42.50
- MAML (ours): −50.69, …, 315.65
[Figure 7 panels: qualitative sinusoid regression results for MAML (K = 5 and K = 10) and the pretrained baseline (K = 5 with step size 0.01; K = 10 with step size 0.02).]
# Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results

Antti Tarvainen, Harri Valpola. Advances in Neural Information Processing Systems 30 (NIPS 2017). arXiv:1703.01780 [cs.NE] (http://arxiv.org/pdf/1703.01780).

# Abstract
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.
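The weight averaging referred to here can be sketched in a few lines: after every training step, each teacher weight is moved toward the corresponding student weight by an exponential moving average. Parameters are treated as plain arrays below, and the smoothing coefficient is an illustrative value rather than the paper's exact schedule.

```python
import numpy as np

def update_teacher(student_params, teacher_params, alpha=0.99):
    # teacher <- alpha * teacher + (1 - alpha) * student, applied weight-by-weight.
    return [alpha * t + (1.0 - alpha) * s for s, t in zip(student_params, teacher_params)]

# Example with two "layers" of weights.
student = [np.ones((3, 3)), np.ones(3)]
teacher = [np.zeros((3, 3)), np.zeros(3)]
teacher = update_teacher(student, teacher)   # teacher moves 1% of the way toward the student
```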
# Introduction
Deep learning has seen tremendous success in areas such as image and speech recognition. In order to learn useful abstractions, deep learning models require a large number of parameters, thus making them prone to over-fitting (Figure 1a). Moreover, adding high-quality labels to training data manually is often expensive. Therefore, it is desirable to use regularization methods that exploit unlabeled data effectively to reduce over-fitting in semi-supervised learning.
When a percept is changed slightly, a human typically still considers it to be the same object. Correspondingly, a classification model should favor functions that give consistent output for similar data points. One approach for achieving this is to add noise to the input of the model. To enable the model to learn more abstract invariances, the noise may be added to intermediate representations, an insight that has motivated many regularization techniques, such as Dropout [28]. Rather than minimizing the classification cost at the zero-dimensional data points of the input space, the regularized model minimizes the cost on a manifold around each data point, thus pushing decision boundaries away from the labeled data points (Figure 1b).
Since the classification cost is undefined for unlabeled examples, the noise regularization by itself does not aid in semi-supervised learning. To overcome this, the Π model [21] evaluates each data point with and without noise, and then applies a consistency cost between the two predictions. In this case, the model assumes a dual role as a teacher and a student. As a student, it learns as before; as a teacher, it generates targets, which are then used by itself as a student for learning. Since the model itself generates targets, they may very well be incorrect. If too much weight is given to the generated targets, the cost of inconsistency outweighs that of misclassification, preventing the learning of new information.
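A minimal sketch of such a consistency cost is given below: the same input is evaluated twice under independent noise, once by the student and once by the teacher, and the squared difference between the two predictions is penalized. In the Π model the teacher is the model itself, while in Mean Teacher it is the weight-averaged copy; the Gaussian input noise, the mean-squared-error distance, and the names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_cost(student_model, teacher_model, x, noise_std=0.15):
    student_logits = student_model(x + noise_std * torch.randn_like(x))
    with torch.no_grad():                         # teacher targets are not backpropagated
        teacher_logits = teacher_model(x + noise_std * torch.randn_like(x))
    return F.mse_loss(torch.softmax(student_logits, dim=-1),
                      torch.softmax(teacher_logits, dim=-1))
```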
(a) (b) (c) (d) (e) | 1703.01780#3 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
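For concreteness, the Π model described in the chunk above evaluates every input under two independent draws of noise and penalizes the difference between the two predictions, on top of the usual classification cost for the labeled examples. A minimal PyTorch-style sketch of those two costs; the Gaussian input-noise magnitude, the consistency weight, and the reliance on dropout inside `model` for additional noise are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def pi_model_costs(model, x, y, labeled_mask, consistency_weight=1.0):
    """Classification + consistency cost of the Pi model for one minibatch.

    x: inputs; y: integer labels (ignored where labeled_mask is False);
    labeled_mask: boolean tensor marking the labeled examples
    (assumes the batch contains at least one labeled example).
    """
    # Two stochastic forward passes: input noise here, plus any dropout
    # inside the model, makes the two predictions differ for the same input.
    logits_a = model(x + 0.15 * torch.randn_like(x))
    logits_b = model(x + 0.15 * torch.randn_like(x))

    # Classification cost only on the labeled examples.
    class_cost = F.cross_entropy(logits_a[labeled_mask], y[labeled_mask])

    # Consistency cost between the two noisy predictions (MSE over probabilities).
    consistency_cost = F.mse_loss(F.softmax(logits_a, dim=1),
                                  F.softmax(logits_b, dim=1))
    return class_cost + consistency_weight * consistency_cost
```

In the Π model both forward passes belong to the same network, so the consistency gradient flows through both branches; the Mean Teacher method replaces one branch with a separate, slowly updated teacher.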
1703.01780 | 4 | (a) (b) (c) (d) (e)
Figure 1: A sketch of a binary classification task with two labeled examples (large blue dots) and one unlabeled example, demonstrating how the choice of the unlabeled target (black circle) affects the fitted function (gray curve). (a) A model with no regularization is free to fit any function that predicts the labeled training examples well. (b) A model trained with noisy labeled data (small dots) learns to give consistent predictions around labeled data points. (c) Consistency to noise around unlabeled examples provides additional smoothing. For the clarity of illustration, the teacher model (gray curve) is first fitted to the labeled examples, and then left unchanged during the training of the student model. Also for clarity, we will omit the small dots in figures d and e. (d) Noise on the teacher model reduces the bias of the targets without additional training. The expected direction of stochastic gradient descent is towards the mean (large blue circle) of individual noisy targets (small blue circles). (e) An ensemble of models gives an even better expected target. Both Temporal Ensembling and the Mean Teacher method use this approach. | 1703.01780#4 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 5 | information. In effect, the model suffers from confirmation bias (Figure 1c), a hazard that can be mitigated by improving the quality of targets.
There are at least two ways to improve the target quality. One approach is to choose the perturbation of the representations carefully instead of merely applying additive or multiplicative noise. Another approach is to choose the teacher model carefully instead of merely replicating the student model. Concurrently with our research, Miyato et al. [16] have taken the first approach and shown that Virtual Adversarial Training can yield impressive results. We take the second approach and will show that it too provides significant benefits. To our understanding, these two approaches are compatible, and their combination may produce even better outcomes. However, the analysis of their combined effects is outside the scope of this paper. | 1703.01780#5 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 6 | Our goal, then, is to form a better teacher model from the student model without additional training. As the first step, consider that the softmax output of a model does not usually provide accurate predictions outside training data. This can be partly alleviated by adding noise to the model at inference time [4], and consequently a noisy teacher can yield more accurate targets (Figure 1d). This approach was used in Pseudo-Ensemble Agreement [2] and has lately been shown to work well on semi-supervised image classification [13, 23]. Laine & Aila [13] named the method the Π model; we will use this name for it and their version of it as the basis of our experiments. | 1703.01780#6 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 7 | The Π model can be further improved by Temporal Ensembling [13], which maintains an exponential moving average (EMA) prediction for each of the training examples. At each training step, all the EMA predictions of the examples in that minibatch are updated based on the new predictions. Consequently, the EMA prediction of each example is formed by an ensemble of the model's current version and those earlier versions that evaluated the same example. This ensembling improves the quality of the predictions, and using them as the teacher predictions improves results. However, since each target is updated only once per epoch, the learned information is incorporated into the training process at a slow pace. The larger the dataset, the longer the span of the updates, and in the case of on-line learning, it is unclear how Temporal Ensembling can be used at all. (One could evaluate all the targets periodically more than once per epoch, but keeping the evaluation span constant would require O(n²) evaluations per epoch where n is the number of training examples.)
# 2 Mean Teacher
To overcome the limitations of Temporal Ensembling, we propose averaging model weights instead of predictions. Since the teacher model is an average of consecutive student models, we call this the Mean Teacher method (Figure 2). Averaging model weights over training steps tends to produce a
2 | 1703.01780#7 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
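Temporal Ensembling, as recapped above, keeps one exponentially decayed average of the predictions for every training example and uses it as the consistency target, so each target is refreshed only when its example is visited, i.e. about once per epoch. A small sketch of that bookkeeping; the decay value and the bias correction follow Laine & Aila's description, but treat the exact constants as assumptions:

```python
import numpy as np

class TemporalEnsemblingTargets:
    """Per-example EMA of predictions (one row per training example)."""

    def __init__(self, num_examples, num_classes, alpha=0.6):
        self.alpha = alpha
        self.ensemble = np.zeros((num_examples, num_classes))  # accumulated predictions
        self.epoch = 0

    def targets(self, indices):
        # Bias-corrected consistency targets for the examples in a minibatch.
        correction = 1.0 - self.alpha ** max(self.epoch, 1)
        return self.ensemble[indices] / correction

    def update(self, indices, predictions):
        # Each row is touched once per epoch, when its example is evaluated.
        self.ensemble[indices] = (self.alpha * self.ensemble[indices]
                                  + (1.0 - self.alpha) * predictions)

    def end_epoch(self):
        self.epoch += 1
```

The buffer grows with the dataset and the information in it moves only at epoch granularity, which is exactly the limitation that motivates averaging weights instead of predictions.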
1703.01780 | 8 | 2
Figure 2: The Mean Teacher method. The figure depicts a training batch with a single labeled example. Both the student and the teacher model evaluate the input applying noise (η, η′) within their computation. The softmax output of the student model is compared with the one-hot label using classification cost and with the teacher output using consistency cost. After the weights of the student model have been updated with gradient descent, the teacher model weights are updated as an exponential moving average of the student weights. Both model outputs can be used for prediction, but at the end of the training the teacher prediction is more likely to be correct. A training step with an unlabeled example would be similar, except no classification cost would be applied. | 1703.01780#8 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
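The training step depicted in Figure 2 above can be condensed into a few lines: the student and the teacher see the same input under independent noise, the student is updated by gradient descent on the classification and consistency costs, and the teacher is then updated as an exponential moving average of the student weights. A hedged PyTorch sketch; the optimizer, the input-noise model, the cost weights, and the decay value are placeholders rather than the paper's exact training configuration:

```python
import torch
import torch.nn.functional as F

def mean_teacher_step(student, teacher, optimizer, x, y, labeled_mask,
                      consistency_weight=1.0, ema_decay=0.999):
    # Independent noise for the two models (input Gaussian noise here;
    # dropout inside the networks adds further independent noise).
    student_logits = student(x + 0.15 * torch.randn_like(x))
    with torch.no_grad():  # the teacher is never trained by backprop
        teacher_logits = teacher(x + 0.15 * torch.randn_like(x))

    # Classification cost on labeled examples, consistency cost on all examples
    # (assumes the batch contains at least one labeled example).
    class_cost = F.cross_entropy(student_logits[labeled_mask], y[labeled_mask])
    consistency_cost = F.mse_loss(F.softmax(student_logits, dim=1),
                                  F.softmax(teacher_logits, dim=1))
    loss = class_cost + consistency_weight * consistency_cost

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Teacher update: exponential moving average of the student weights.
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)

    return loss.item()
```

At evaluation time either network can be used, but, as the caption notes, the teacher (EMA) weights are the ones more likely to give correct predictions.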
1703.01780 | 9 | more accurate model than using the final weights directly [19]. We can take advantage of this during training to construct better targets. Instead of sharing the weights with the student model, the teacher model uses the EMA weights of the student model. Now it can aggregate information after every step instead of every epoch. In addition, since the weight averages improve all layer outputs, not just the top output, the target model has better intermediate representations. These aspects lead to two practical advantages over Temporal Ensembling: First, the more accurate target labels lead to a faster feedback loop between the student and the teacher models, resulting in better test accuracy. Second, the approach scales to large datasets and on-line learning.
More formally, we define the consistency cost J as the expected distance between the prediction of the student model (with weights θ and noise η) and the prediction of the teacher model (with weights θ′ and noise η′).
J(θ) = E_{x,η′,η} [ ||f(x, θ′, η′) − f(x, θ, η)||² ]
The difference between the Π model, Temporal Ensembling, and Mean Teacher is how the teacher predictions are generated. Whereas the Π model uses θ′ = θ, and Temporal Ensembling approximates f(x, θ′, η′) with a weighted average of successive predictions, we define θ′_t at training step t as the EMA of successive θ weights: | 1703.01780#9 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 10 | θ′_t = α θ′_{t−1} + (1 − α) θ_t
where α is a smoothing coefficient hyperparameter. An additional difference between the three algorithms is that the Π model applies training to θ′, whereas Temporal Ensembling and Mean Teacher treat it as a constant with regard to optimization. We can approximate the consistency cost function J by sampling noise η, η′ at each training step with stochastic gradient descent. Following Laine & Aila [13], we use mean squared error (MSE) as the consistency cost in most of our experiments.
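The update above is a plain exponential moving average. The snippet below only spells out the recursion θ′_t = α θ′_{t−1} + (1 − α) θ_t on arrays and the rough averaging horizon of about 1/(1 − α) student checkpoints that a given decay implies; the α values are examples, not prescriptions:

```python
import numpy as np

def ema_update(teacher_weights, student_weights, alpha):
    """One Mean Teacher weight update, applied tensor by tensor."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_weights, student_weights)]

# Intuition: with decay alpha the teacher is roughly an average of the
# last ~1 / (1 - alpha) versions of the student.
for alpha in (0.99, 0.999):
    print(alpha, "->", int(round(1.0 / (1.0 - alpha))), "steps of memory")
```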
Table 1: Error rate percentage on SVHN over 10 runs (4 runs when using all labels). We use exponential moving average weights in the evaluation of all our models. All the methods use a similar 13-layer ConvNet architecture. See Table 5 in the Appendix for results without input augmentation. | 1703.01780#10 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 11 | Error rate (%) on SVHN; columns: 250 labels, 500 labels, 1000 labels, 73257 labels (all columns use the full 73257 training images; – means not reported):
GAN [25]: – , 18.44 ± 4.8 , 8.11 ± 1.3 , –
Π model [13]: – , 6.65 ± 0.53 , 4.82 ± 0.17 , 2.54 ± 0.04
Temporal Ensembling [13]: – , 5.12 ± 0.13 , 4.42 ± 0.16 , 2.74 ± 0.06
VAT+EntMin [16]: – , – , 3.86 , –
Supervised-only: 27.77 ± 3.18 , 16.88 ± 1.30 , 12.32 ± 0.95 , 2.75 ± 0.10
Π model: 9.69 ± 0.92 , 6.83 ± 0.66 , 4.95 ± 0.26 , 2.50 ± 0.07
Mean Teacher: 4.35 ± 0.50 , 4.18 ± 0.27 , 3.95 ± 0.19 , 2.50 ± 0.05
Table 2: Error rate percentage on CIFAR-10 over 10 runs (4 runs when using all labels). | 1703.01780#11 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 12 | Table 2: Error rate percentage on CIFAR-10 over 10 runs (4 runs when using all labels).
Error rate (%) on CIFAR-10; columns: 1000 labels, 2000 labels, 4000 labels, 50000 labels (all columns use the full 50000 training images; – means not reported):
GAN [25]: – , – , 18.63 ± 2.32 , –
Π model [13]: – , – , 12.36 ± 0.31 , 5.56 ± 0.10
Temporal Ensembling [13]: – , – , 12.16 ± 0.31 , 5.60 ± 0.10
VAT+EntMin [16]: – , – , 10.55 , –
Supervised-only: 46.43 ± 1.21 , 33.94 ± 0.73 , 20.66 ± 0.57 , 5.82 ± 0.15
Π model: 27.36 ± 1.20 , 18.02 ± 0.60 , 13.20 ± 0.27 , 6.06 ± 0.11
Mean Teacher: 21.55 ± 1.48 , 15.73 ± 0.31 , 12.31 ± 0.28 , 5.94 ± 0.15
# 3 Experiments | 1703.01780#12 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 13 | # 3 Experiments
To test our hypotheses, we first replicated the Π model [13] in TensorFlow [1] as our baseline. We then modified the baseline model to use weight-averaged consistency targets. The model architecture is a 13-layer convolutional neural network (ConvNet) with three types of noise: random translations and horizontal flips of the input images, Gaussian noise on the input layer, and dropout applied within the network. We use mean squared error as the consistency cost and ramp up its weight from 0 to its final value during the first 80 epochs. The details of the model and the training procedure are described in Appendix B.1.
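The consistency weight ramp-up mentioned above is a simple schedule over the first 80 epochs. A sketch is below; the sigmoid-shaped ramp exp(−5(1 − x)²) is the one popularized by Laine & Aila and is used here only as an illustrative assumption; the schedule actually used is specified in Appendix B.1:

```python
import math

def consistency_weight(epoch, max_weight, rampup_epochs=80):
    """Ramp the consistency cost weight from 0 to max_weight."""
    if epoch >= rampup_epochs:
        return max_weight
    x = max(0.0, float(epoch)) / rampup_epochs           # progress in [0, 1]
    return max_weight * math.exp(-5.0 * (1.0 - x) ** 2)  # illustrative ramp shape
```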
# 3.1 Comparison to other methods on SVHN and CIFAR-10 | 1703.01780#13 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 14 | # 3.1 Comparison to other methods on SVHN and CIFAR-10
We ran experiments using the Street View House Numbers (SVHN) and CIFAR-10 benchmarks [17]. Both datasets contain 32x32 pixel RGB images belonging to ten different classes. In SVHN, each example is a close-up of a house number, and the class represents the identity of the digit at the center of the image. In CIFAR-10, each example is a natural image belonging to a class such as horses, cats, cars and airplanes. SVHN consists of 73257 training samples and 26032 test samples. CIFAR-10 consists of 50000 training samples and 10000 test samples.
Tables 1 and 2 compare the results against recent state-of-the-art methods. All the methods in the comparison use a similar 13-layer ConvNet architecture. Mean Teacher improves test accuracy over the Π model and Temporal Ensembling on semi-supervised SVHN tasks. Mean Teacher also improves results on CIFAR-10 over our baseline Π model. | 1703.01780#14 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 15 | The recently published version of Virtual Adversarial Training by Miyato et al. [16] performs even better than Mean Teacher on the 1000-label SVHN and the 4000-label CIFAR-10. As discussed in the introduction, VAT and Mean Teacher are complementary approaches. Their combination may yield better accuracy than either of them alone, but that investigation is beyond the scope of this paper.
Table 3: Error percentage over 10 runs on SVHN with extra unlabeled training data.
Πmodel (ours) Mean Teacher 500 labels 73257 images 6.83 ± 0.66 4.18 ± 0.27 4.18 ± 0.27 4.18 ± 0.27 500 labels 173257 images 4.49 ± 0.27 3.02 ± 0.16 3.02 ± 0.16 3.02 ± 0.16 500 labels 573257 images 3.26 ± 0.14 2.46 ± 0.06 2.46 ± 0.06 2.46 ± 0.06 | 1703.01780#15 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 16 | [Figure 3 plots. Panel titles: 73257 images and labels; 73257 images and 500 labels; 573257 images and 500 labels. Top row: classification cost (log scale) for the Π model and Mean Teacher (student), on the test set and on training data. Bottom row: classification error for the Π model, Π model (EMA), Mean Teacher (student), and Mean Teacher (teacher). Horizontal axes: training steps from 0k to 100k.]
Figure 3: Smoothed classification cost (top) and classification error (bottom) of Mean Teacher and our baseline Π model on SVHN over the first 100000 training steps. In the upper row, the training classification costs are measured using only labeled data.
# 3.2 SVHN with extra unlabeled data | 1703.01780#16 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 17 | # 3.2 SVHN with extra unlabeled data
Above, we suggested that Mean Teacher scales well to large datasets and on-line learning. In addition, the SVHN and CIFAR-10 results indicate that it uses unlabeled examples efficiently. Therefore, we wanted to test whether we have reached the limits of our approach.
Besides the primary training data, SVHN also includes an extra dataset of 531131 examples. We picked 500 samples from the primary training set as our labeled training examples. We used the rest of the primary training set together with the extra training set as unlabeled examples. We ran experiments with Mean Teacher and our baseline Π model, and used either 0, 100000 or 500000 extra examples. Table 3 shows the results.
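The split described in the paragraph above (500 labeled examples drawn from the primary training set, with the remaining primary examples and a slice of the extra set used without labels) can be expressed over example indices. A small sketch, in which the seed, the plain random (rather than class-balanced) sampling, and the amount of extra data actually used are assumptions:

```python
import numpy as np

def make_svhn_split(num_primary=73257, num_extra_used=500000,
                    num_labeled=500, seed=0):
    rng = np.random.default_rng(seed)
    primary = np.arange(num_primary)
    labeled_idx = rng.choice(primary, size=num_labeled, replace=False)
    # Remaining primary examples plus the chosen slice of the extra set
    # are used as unlabeled data (indices into a combined dataset).
    unlabeled_primary = np.setdiff1d(primary, labeled_idx)
    unlabeled_extra = num_primary + np.arange(num_extra_used)
    return labeled_idx, np.concatenate([unlabeled_primary, unlabeled_extra])
```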
# 3.3 Analysis of the training curves
The training curves in Figure 3 help us understand the effects of using Mean Teacher. As expected, the EMA-weighted models (blue and dark gray curves in the bottom row) give more accurate predictions than the bare student models (orange and light gray) after an initial period. | 1703.01780#17 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 18 | Using the EMA-weighted model as the teacher improves results in the semi-supervised settings. There appears to be a virtuous feedback cycle of the teacher (blue curve) improving the student (orange) via the consistency cost, and the student improving the teacher via exponential moving averaging. If this feedback cycle is detached, the learning is slower, and the model starts to overfit earlier (dark gray and light gray).
Mean Teacher helps when labels are scarce. When using 500 labels (middle column) Mean Teacher learns faster, and continues training after the Π model stops improving. On the other hand, in the all-labeled case (left column), Mean Teacher and the Π model behave virtually identically.
5 | 1703.01780#18 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 19 | 5
[Figure 4 plots. Panels (a)–(f) show validation error (roughly 2% to 15%) as individual hyperparameters are varied: input augmentation, input noise, and dropout on the student and teacher sides; EMA decay; consistency cost weight; dual-output difference cost; and the consistency cost function with ramp-up on or off.]
Figure 4: Validation error on 250-label SVHN over four runs per hyperparameter setting and their means. In each experiment, we varied one hyperparameter, and used the evaluation run hyperparameters of Table 1 for the rest. The hyperparameter settings used in the evaluation runs are marked in bold. See the text for details. | 1703.01780#19 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 20 | Mean Teacher uses unlabeled training data more efficiently than the Π model, as seen in the middle column. On the other hand, with 500k extra unlabeled examples (right column), the Π model keeps improving for longer. Mean Teacher learns faster, and eventually converges to a better result, but the sheer amount of data appears to offset the Π model's worse predictions.
# 3.4 Ablation experiments
To assess the importance of various aspects of the model, we ran experiments on SVHN with 250 labels, varying one or a few hyperparameters at a time while keeping the others fixed.
Removal of noise (Figures 4(a) and 4(b)). In the introduction and Figure 1, we presented the hypothesis that the Π model produces better predictions by adding noise to the model on both sides. But after the addition of Mean Teacher, is noise still needed? Yes. We can see that either input augmentation or dropout is necessary for passable performance. On the other hand, input noise does not help when augmentation is in use. Dropout on the teacher side provides only a marginal benefit over just having it on the student side, at least when input augmentation is in use. | 1703.01780#20 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 21 | Sensitivity to EMA decay and consistency weight (Figures 4(c) and 4(d)). The essential hyperparameters of the Mean Teacher algorithm are the consistency cost weight and the EMA decay α. How sensitive is the algorithm to their values? We can see that in each case the good values span roughly an order of magnitude and outside these ranges the performance degrades quickly. Note that EMA decay α = 0 makes the model a variation of the Π model, although a somewhat inefficient one because the gradients are propagated through only the student path. Note also that in the evaluation runs we used EMA decay α = 0.99 during the ramp-up phase, and α = 0.999 for the rest of the training. We chose this strategy because the student improves quickly early in the training, and thus the teacher should forget the old, inaccurate student weights quickly. Later the student improvement slows, and the teacher benefits from a longer memory. | 1703.01780#21 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
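The two-phase decay described in the chunk above is a one-line schedule; a sketch, assuming the phase boundary coincides with the consistency ramp-up period:

```python
def ema_decay_schedule(global_step, rampup_steps):
    """EMA decay for the teacher: forget quickly while the student is still
    improving fast, then switch to a longer memory."""
    return 0.99 if global_step < rampup_steps else 0.999
```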
1703.01780 | 22 | Decoupling classification and consistency (Figure 4(e)). Consistency to the teacher predictions may not necessarily be a good proxy for the classification task, especially early in the training. So far our model has strongly coupled these two tasks by using the same output for both. How would decoupling the tasks change the performance of the algorithm? To investigate, we changed the model to have two top layers and produce two outputs. We then trained one of the outputs for classification and the other for consistency. We also added a mean squared error cost between the output logits, and then varied the weight of this cost, allowing us to control the strength of the coupling. Looking at the results (reported using the EMA version of the classification output), we can see that the strongly coupled version performs well and the too loosely coupled versions do not. On the other hand, a moderate decoupling seems to have the benefit of making the consistency ramp-up redundant.
Table 4: Error rate percentage of ResNet Mean Teacher compared to the state of the art. We report the test results from 10 runs on CIFAR-10 and validation results from 2 runs on ImageNet. | 1703.01780#22 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
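The decoupling experiment described in the chunk above gives the network two top layers: one output is trained for classification and the other against the teacher for consistency, with an MSE cost between the two sets of logits controlling how strongly the tasks stay coupled. A hedged PyTorch sketch of such a head; the backbone, feature size, and head shapes are placeholders:

```python
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadedClassifier(nn.Module):
    """Shared backbone with separate classification and consistency outputs."""

    def __init__(self, backbone, feature_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.class_head = nn.Linear(feature_dim, num_classes)        # trained with labels
        self.consistency_head = nn.Linear(feature_dim, num_classes)  # trained against the teacher

    def forward(self, x):
        h = self.backbone(x)
        return self.class_head(h), self.consistency_head(h)

def coupling_cost(class_logits, consistency_logits, weight):
    # MSE between the two outputs; the weight controls the strength of the
    # coupling that Figure 4(e) varies.
    return weight * F.mse_loss(class_logits, consistency_logits)
```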
1703.01780 | 23 | Error rate (%); columns: CIFAR-10 with 4000 labels, ImageNet 2012 with 10% of the labels (– means not reported):
State of the art: 10.55 [16] , 35.24 ± 0.90 [20]
ConvNet Mean Teacher: 12.31 ± 0.28 , –
ResNet Mean Teacher: 6.28 ± 0.15 , 9.11 ± 0.12
State of the art using all labels: 2.86 [5] , 3.79 [10]
Changing from MSE to KL-divergence (Figure 4(f)). Following Laine & Aila [13], we use mean squared error (MSE) as our consistency cost function, but KL-divergence would seem a more natural choice. Which one works better? We ran experiments with instances of a cost function family ranging from MSE (τ = 0 in the figure) to KL-divergence (τ = 1), and found out that in this setting MSE performs better than the other cost functions. See Appendix C for the details of the cost function family and for our intuition about why MSE performs so well.
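For reference, the two endpoints of that comparison can be written directly on the softmax outputs. The snippet below shows plain MSE and KL-divergence consistency costs between student and teacher predictions; the τ-parameterized family that interpolates between them is defined in Appendix C and is not reproduced here:

```python
import torch.nn.functional as F

def mse_consistency(student_logits, teacher_logits):
    return F.mse_loss(F.softmax(student_logits, dim=1),
                      F.softmax(teacher_logits, dim=1))

def kl_consistency(student_logits, teacher_logits):
    # KL(teacher || student), averaged over the batch.
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1),
                    reduction="batchmean")
```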
# 3.5 Mean Teacher with residual networks on CIFAR-10 and ImageNet | 1703.01780#23 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
In the experiments above, we used a traditional 13-layer convolutional architecture (ConvNet), which has the benefit of making comparisons to earlier work easy. In order to explore the effect of the model architecture, we ran experiments using a 12-block (26-layer) Residual Network [8] (ResNet) with Shake-Shake regularization [5] on CIFAR-10. The details of the model and the training procedure are described in Appendix B.2. As shown in Table 4, the results improve remarkably with the better network architecture.
To test whether the method scales to more natural images, we ran experiments on the ImageNet 2012 dataset [22] using 10% of the labels. We used a 50-block (152-layer) ResNeXt architecture [33], and saw a clear improvement over the state of the art. As the test set is not publicly available, we measured the results using the validation set.
# 4 Related work
Noise regularization of neural networks was proposed by Sietsma & Dow [26]. More recently, several types of perturbations have been shown to regularize intermediate representations effectively in deep learning. Adversarial Training [6] changes the input slightly to give predictions that are as different as possible from the original predictions. Dropout [28] zeroes random dimensions of layer outputs. Dropconnect [31] generalizes Dropout by zeroing individual weights instead of activations. Stochastic Depth [11] drops entire layers of residual networks, and Swapout [27] generalizes Dropout and Stochastic Depth. Shake-shake regularization [5] duplicates residual paths and samples a linear combination of their outputs independently during forward and backward passes.
Several semi-supervised methods are based on training the model predictions to be consistent under perturbation. The Denoising Source Separation framework (DSS) [29] uses denoising of latent variables to learn their likelihood estimate. The Γ variant of Ladder Network [21] implements DSS with a deep learning model for classification tasks. It produces noisy student predictions and clean teacher predictions, and applies a denoising layer to predict teacher predictions from the student predictions. The Π model [13] improves the Γ model by removing the explicit denoising layer and applying noise also to the teacher predictions. Similar methods had been proposed earlier for linear models [30] and deep learning [2]. Virtual Adversarial Training [16] is similar to the Π model but uses adversarial perturbation instead of independent noise.
The idea of a teacher model training a student is related to model compression [3] and distillation [9]. The knowledge of a complicated model can be transferred to a simpler model by training the simpler model with the softmax outputs of the complicated model. The softmax outputs contain more information about the task than the one-hot outputs, and the requirement of representing this
knowledge regularizes the simpler model. Besides its use in model compression, distillation can be used to harden trained models against adversarial attacks [18]. The difference between distillation and consistency regularization is that distillation is performed after training whereas consistency regularization is performed at training time.
Consistency regularization can be seen as a form of label propagation [34]. Training samples that resemble each other are more likely to belong to the same class. Label propagation takes advantage of this assumption by pushing label information from each example to examples that are near it according to some metric. Label propagation can also be applied to deep learning models [32]. However, ordinary label propagation requires a predefined distance metric in the input space. In contrast, consistency targets employ a learned distance metric implied by the abstract representations of the model. As the model learns new features, the distance metric changes to accommodate these features. Therefore, consistency targets guide learning in two ways. On the one hand, they spread the labels according to the current distance metric; on the other hand, they help the network learn a better distance metric.
# 5 Conclusion
Temporal Ensembling, Virtual Adversarial Training and other forms of consistency regularization have recently shown their strength in semi-supervised learning. In this paper, we propose Mean Teacher, a method that averages model weights to form a target-generating teacher model. Unlike Temporal Ensembling, Mean Teacher works with large datasets and on-line learning. Our experiments suggest that it improves the speed of learning and the classification accuracy of the trained network. In addition, it scales well to state-of-the-art architectures and large image sizes.
The success of consistency regularization depends on the quality of teacher-generated targets. If the targets can be improved, they should be. Mean Teacher and Virtual Adversarial Training represent two ways of exploiting this principle. Their combination may yield even better targets. There are probably additional methods to be uncovered that improve targets and trained models even further.
# Acknowledgements
We thank Samuli Laine and Timo Aila for fruitful discussions about their work, Phil Bachman, Colin Raffel, and Thomas Robert for noticing errors in the previous versions of this paper, and everyone at The Curious AI Company for their help, encouragement, and ideas.
# References
[1] Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015.
[2] Bachman, Philip, Alsharif, Ouais, and Precup, Doina. Learning with Pseudo-Ensembles. arXiv:1412.4864 [cs, stat], December 2014. arXiv: 1412.4864.
[3] Buciluǎ, Cristian, Caruana, Rich, and Niculescu-Mizil, Alexandru. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535–541. ACM, 2006.
[4] Gal, Yarin and Ghahramani, Zoubin. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1050–1059, 2016.
[5] Gastaldi, Xavier. Shake-Shake regularization. arXiv:1705.07485 [cs], May 2017. arXiv: 1705.07485.
[6] Goodfellow, Ian J., Shlens, Jonathon, and Szegedy, Christian. Explaining and Harnessing Adversarial Examples. December 2014. arXiv: 1412.6572.
[7] Guo, Chuan, Pleiss, Geoff, Sun, Yu, and Weinberger, Kilian Q. On Calibration of Modern Neural Networks. arXiv:1706.04599 [cs], June 2017. arXiv: 1706.04599.
[8] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs], December 2015. arXiv: 1512.03385.
[9] Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the Knowledge in a Neural Network. arXiv:1503.02531 [cs, stat], March 2015. arXiv: 1503.02531.
[10] Hu, Jie, Shen, Li, and Sun, Gang. Squeeze-and-Excitation Networks. arXiv:1709.01507 [cs], September 2017. arXiv: 1709.01507.
[11] Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep Networks with Stochastic Depth. arXiv:1603.09382 [cs], March 2016. arXiv: 1603.09382.
[12] Kingma, Diederik and Ba, Jimmy. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], December 2014. arXiv: 1412.6980.
[13] Laine, Samuli and Aila, Timo. Temporal Ensembling for Semi-Supervised Learning. arXiv:1610.02242 [cs], October 2016. arXiv: 1610.02242.
[14] Loshchilov, Ilya and Hutter, Frank. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv:1608.03983 [cs, math], August 2016. arXiv: 1608.03983.
[15] Maas, Andrew L., Hannun, Awni Y., and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
[16] Miyato, Takeru, Maeda, Shin-ichi, Koyama, Masanori, and Ishii, Shin. Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning. arXiv:1704.03976 [cs, stat], April 2017. arXiv: 1704.03976.
[17] Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[18] Papernot, Nicolas, McDaniel, Patrick, Wu, Xi, Jha, Somesh, and Swami, Ananthram. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. arXiv:1511.04508 [cs, stat], November 2015. arXiv: 1511.04508.
[19] Polyak, B. T. and Juditsky, A. B. Acceleration of Stochastic Approximation by Averaging. SIAM J. Control Optim., 30(4):838–855, July 1992. ISSN 0363-0129. doi: 10.1137/0330046.
[20] Pu, Yunchen, Gan, Zhe, Henao, Ricardo, Yuan, Xin, Li, Chunyuan, Stevens, Andrew, and Carin, Lawrence. Variational Autoencoder for Deep Learning of Images, Labels and Captions. arXiv:1609.08976 [cs, stat], September 2016. arXiv: 1609.08976.
[21] Rasmus, Antti, Berglund, Mathias, Honkala, Mikko, Valpola, Harri, and Raiko, Tapani. Semi-supervised Learning with Ladder Networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28, pp. 3546–3554. Curran Associates, Inc., 2015.
[22] Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs], September 2014. arXiv: 1409.0575.
[23] Sajjadi, Mehdi, Javanmardi, Mehran, and Tasdizen, Tolga. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 1163–1171. Curran Associates, Inc., 2016.
[24] Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–901, 2016.
[25] Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2226–2234, 2016.
[26] Sietsma, Jocelyn and Dow, Robert JF. Creating artificial neural networks that generalize. Neural Networks, 4(1):67–79, 1991.
[27] Singh, Saurabh, Hoiem, Derek, and Forsyth, David. Swapout: Learning an ensemble of deep architectures. arXiv:1605.06465 [cs], May 2016. arXiv: 1605.06465.
[28] Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014. ISSN 1532-4435.
[29] Särelä, Jaakko and Valpola, Harri. Denoising Source Separation. Journal of Machine Learning Research, 6(Mar):233–272, 2005. ISSN 1533-7928.
[30] Wager, Stefan, Wang, Sida, and Liang, Percy. Dropout Training as Adaptive Regularization. arXiv:1307.1493 [cs, stat], July 2013. arXiv: 1307.1493.
[31] Wan, Li, Zeiler, Matthew, Zhang, Sixin, Le Cun, Yann, and Fergus, Rob. Regularization of Neural Networks using DropConnect. pp. 1058–1066, 2013.
[32] Weston, Jason, Ratle, Frédéric, Mobahi, Hossein, and Collobert, Ronan. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012.
[33] Xie, Saining, Girshick, Ross, Dollár, Piotr, Tu, Zhuowen, and He, Kaiming. Aggregated Residual Transformations for Deep Neural Networks. arXiv:1611.05431 [cs], November 2016. arXiv: 1611.05431.
[34] Zhu, Xiaojin and Ghahramani, Zoubin. Learning from labeled and unlabeled data with label propagation. 2002.
# Appendix
# A Results without input augmentation
See Table 5 for the results without input augmentation.
Table 5: Error rate percentage on SVHN and CIFAR-10 over 10 runs, including the results without input augmentation. We use exponential moving average weights in the evaluation of all our models. All the comparison methods use a 13-layer ConvNet architecture similar to ours and augmentation similar to ours, except GAN, which does not use augmentation.
SVHN:

| | 250 labels | 500 labels | 1000 labels | all labels |
| --- | --- | --- | --- | --- |
| GAN | | 18.44 ± 4.8 | 8.11 ± 1.3 | |
| Π model | | 6.65 ± 0.53 | 4.82 ± 0.17 | 2.54 ± 0.04 |
| Temporal Ensembling | | 5.12 ± 0.13 | 4.42 ± 0.16 | 2.74 ± 0.06 |
| VAT+EntMin | | | 3.86 | |
| Supervised-only^e | 27.77 ± 3.18 | 16.88 ± 1.30 | 12.32 ± 0.95 | 2.75 ± 0.10 |
| Π model | 9.69 ± 0.92 | 6.83 ± 0.66 | 4.95 ± 0.26 | 2.50 ± 0.07 |
| Mean Teacher | 4.35 ± 0.50 | 4.18 ± 0.27 | 3.95 ± 0.19 | 2.50 ± 0.05 |
| Without augmentation: Supervised-only^e | 36.26 ± 3.83 | 19.68 ± 1.03 | 14.15 ± 0.87 | 3.04 ± 0.04 |
| Without augmentation: Π model | 10.36 ± 0.94 | 7.01 ± 0.29 | 5.73 ± 0.16 | 2.75 ± 0.08 |
| Without augmentation: Mean Teacher | 5.85 ± 0.62 | 5.45 ± 0.14 | 5.21 ± 0.21 | 2.77 ± 0.09 |
CIFAR-10:

| | 1000 labels | 2000 labels | 4000 labels | all labels^a |
| --- | --- | --- | --- | --- |
| GAN^b | | | 18.63 ± 2.32 | |
| Π model^c | | | 12.36 ± 0.31 | 5.56 ± 0.10 |
| Temporal Ensembling^c | | | 12.16 ± 0.31 | 5.60 ± 0.10 |
| VAT+EntMin^d | | | 10.55 | |
| Supervised-only^e | 46.43 ± 1.21 | 33.94 ± 0.73 | 20.66 ± 0.57 | 5.82 ± 0.15 |
| Π model | 27.36 ± 1.20 | 18.02 ± 0.60 | 13.20 ± 0.27 | 6.06 ± 0.11 |
| Mean Teacher | 21.55 ± 1.48 | 15.73 ± 0.31 | 12.31 ± 0.28 | 5.94 ± 0.15 |
| Mean Teacher ResNet | 10.08 ± 0.41 | | 6.28 ± 0.15 | |
| Without augmentation: Supervised-only^e | | | | |
^a 4 runs. ^e Only labeled examples and only classification cost.
# B Experimental setup
Source code for the experiments is available at https://github.com/CuriousAI/mean-teacher.
# B.1 Convolutional network models
We replicated the Π model of Laine & Aila [13] in TensorFlow [1], and added support for Mean Teacher training. We modified the model slightly to match the requirements of the experiments, as described in subsections B.1.1 and B.1.2. The difference between the original Π model described by Laine & Aila [13] and our baseline Π model thus depends on the experiment.
Table 6: The convolutional network architecture we used in the experiments.
| Layer | Details |
| --- | --- |
| Input | |
| Translation | |
| Horizontal flip^a | Randomly, p = 0.5 |
| Gaussian noise | σ = 0.15 |
| Convolutional | 128 filters, 3 × 3, same padding |
| Convolutional | 128 filters, 3 × 3, same padding |
| Convolutional | 128 filters, 3 × 3, same padding |
| Pooling | Maxpool 2 × 2 |
| Dropout | p = 0.5 |
| Convolutional | 256 filters, 3 × 3, same padding |
| Convolutional | 256 filters, 3 × 3, same padding |
| Convolutional | 256 filters, 3 × 3, same padding |
| Pooling | Maxpool 2 × 2 |
| Dropout | p = 0.5 |
| Convolutional | 512 filters, 3 × 3, valid padding |
| Convolutional | 256 filters, 1 × 1, same padding |
| Convolutional | 128 filters, 1 × 1, same padding |
| Pooling | Average pool (6 × 6 → 1 × 1 pixels) |
| Softmax | Fully connected 128 → 10 |
^a Not applied on SVHN experiments.
The difference between our baseline Π model and our Mean Teacher model is whether the teacher weights are identical to the student weights or an EMA of the student weights. In addition, the Π models (both the original and ours) backpropagate gradients to both sides of the model whereas Mean Teacher applies them only to the student side.
Table 6 describes the architecture of the convolutional network. We applied mean-only batch normalization and weight normalization [24] on convolutional and softmax layers. We used Leaky ReLU [15] with α = 0.1 as the nonlinearity on each of the convolutional layers.
We used cross-entropy between the student softmax output and the one-hot label as the classification cost, and the mean square error between the student and teacher softmax outputs as the consistency cost. The total cost was the weighted sum of these costs, where the weight of the classification cost was the expected number of labeled examples per minibatch, subject to the ramp-ups described below.
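A minimal sketch of that weighting, assuming the two costs have already been computed; the names are illustrative and this is not the released implementation.

```python
def total_cost(classification_cost, consistency_cost,
               expected_labeled_per_batch, consistency_weight):
    # Weighted sum of the two costs: the classification cost is scaled by the
    # expected number of labeled examples per minibatch, and the consistency
    # cost by its (ramped-up) coefficient.
    return (expected_labeled_per_batch * classification_cost
            + consistency_weight * consistency_cost)
```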
We trained the network with minibatches of size 100. We used the Adam Optimizer [12] for training with learning rate 0.003 and parameters β1 = 0.9, β2 = 0.999, and ε = 10^−8. In our baseline Π model we applied gradients through both the teacher and student sides of the network. In the Mean Teacher model, the teacher model parameters were updated after each training step using an EMA with α = 0.999. These hyperparameters were subject to the ramp-ups and ramp-downs described below.
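The EMA update itself reduces to a few lines. The sketch below assumes parameters are stored in name-indexed dictionaries of arrays; it is not the paper's TensorFlow implementation.

```python
def ema_update(teacher_params, student_params, alpha=0.999):
    # theta_teacher <- alpha * theta_teacher + (1 - alpha) * theta_student,
    # applied to every parameter after each training step.
    for name, student_value in student_params.items():
        teacher_params[name] = alpha * teacher_params[name] + (1.0 - alpha) * student_value
```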
We applied a ramp-up period of 40000 training steps at the beginning of training. The consistency cost coefficient and the learning rate were ramped up from 0 to their maximum values, using a sigmoid-shaped function e^(-5(1-x)^2).
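A sketch of that ramp-up schedule, under the assumption that x is the fraction of the ramp-up period completed:

```python
import math

def sigmoid_rampup(step, rampup_steps=40000):
    # Returns a multiplier following e^(-5(1-x)^2), where x grows linearly
    # from 0 to 1 over the first rampup_steps training steps.
    x = min(1.0, max(0.0, step / rampup_steps))
    return math.exp(-5.0 * (1.0 - x) ** 2)
```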
We used different training settings in different experiments. In the CIFAR-10 experiment, we matched the settings of Laine & Aila [13] as closely as possible. In the SVHN experiments, we diverged from Laine & Aila [13] to accommodate the sparsity of labeled data. Table 7 summarizes the differences between our experiments.
# B.1.1 ConvNet on CIFAR-10
We normalized the input images with ZCA based on training set statistics.
For sampling minibatches, the labeled and unlabeled examples were treated equally, and thus the number of labeled examples varied from minibatch to minibatch.
We applied a ramp-down for the last 25000 training steps. The learning rate coefficient was ramped down to 0 from its maximum value. Adam β1 was ramped down to 0.5 from its maximum value. The ramp-downs were performed using the sigmoid-shaped function 1 - e^(-12.5x^2), where x ∈ [0, 1]. These ramp-downs did not improve the results, but were used to stay as close as possible to the settings of Laine & Aila [13].
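Read as a multiplier on the ramped-down coefficients, one way to implement this schedule is the following sketch; our interpretation is that x measures progress into the final 25000 steps, so the remaining multiplier is e^(-12.5x^2).

```python
import math

def sigmoid_rampdown(step, total_steps, rampdown_steps=25000):
    # The ramped-down fraction follows 1 - e^(-12.5 x^2) with x in [0, 1],
    # so the coefficient that remains is e^(-12.5 x^2).
    steps_into_rampdown = max(0, step - (total_steps - rampdown_steps))
    x = min(1.0, steps_into_rampdown / rampdown_steps)
    return math.exp(-12.5 * x ** 2)
```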
# B.1.2 ConvNet on SVHN
We normalized the input images to have zero mean and unit variance.
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 46 | # B.1.2 ConvNet on SVHN
We normalized the input images to have zero mean and unit variance.
When doing semi-supervised training, we used 1 labeled example and 99 unlabeled examples in each mini-batch. This was important to speed up training when using extra unlabeled data. After all labeled examples had been used, they were shuffled and reused. Similarly, after all unlabeled examples had been used, they were shuffled and reused.
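A minimal sketch of this two-stream minibatch sampling, with each pool reshuffled independently once it is exhausted; the generator structure and names are ours.

```python
import random

def cycle_shuffled(indices):
    """Yield indices forever, reshuffling the pool each time it is exhausted."""
    pool = list(indices)
    while True:
        random.shuffle(pool)
        for i in pool:
            yield i

def minibatches(labeled_idx, unlabeled_idx, n_labeled=1, n_unlabeled=99):
    """Yield minibatches that mix a fixed number of labeled and unlabeled examples."""
    labeled_stream = cycle_shuffled(labeled_idx)
    unlabeled_stream = cycle_shuffled(unlabeled_idx)
    while True:
        batch = [next(labeled_stream) for _ in range(n_labeled)]
        batch += [next(unlabeled_stream) for _ in range(n_unlabeled)]
        yield batch
```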
We applied different values for Adam β2 and EMA decay rate during the ramp-up period and the rest of the training. Both of the values were 0.99 during the first 40000 steps, and 0.999 afterwards. This helped the 250-label case converge reliably.
We trained the network for 180000 steps when not using extra unlabeled examples, for 400000 steps when using 100k extra unlabeled examples, and for 600000 steps when using 500k extra unlabeled examples.
# B.1.3 The baseline ConvNet models
For training the supervised-only and Π model baselines we used the same hyperparameters as for training the Mean Teacher, except we stopped training earlier to prevent over-fitting. For supervised-only runs we did not include any unlabeled examples and did not apply the consistency cost. | 1703.01780#46 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 47 | We trained the supervised-only model on CIFAR-10 for 7500 steps when using 1000 images, for 15000 steps when using 2000 images, for 30000 steps when using 4000 images and for 150000 steps when using all images. We trained it on SVHN for 40000 steps when using 250, 500 or 1000 labels, and for 180000 steps when using all labels.
We trained the Π model on CIFAR-10 for 60000 steps when using 1000 labels, for 100000 steps when using 2000 labels, and for 180000 steps when using 4000 labels or all labels. We trained it on SVHN for 100000 steps when using 250 labels, and for 180000 steps when using 500, 1000, or all labels.
# B.2 Residual network models
We implemented our residual network experiments in PyTorch1. We used different architectures for our CIFAR-10 and ImageNet experiments.
# B.2.1 ResNet on CIFAR-10
For CIFAR-10, we replicated the 26-2x96d Shake-Shake regularized architecture described in [5], and consisting of 4+4+4 residual blocks. | 1703.01780#47 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 48 | For CIFAR-10, we replicated the 26-2x96d Shake-Shake regularized architecture described in [5], and consisting of 4+4+4 residual blocks.
We trained the network on 4 GPUs using minibatches of 512 images, 124 of which were labeled. We sampled the images in the same way as described in the SVHN experiments above. We augmented the input images with 4x4 random translations (reflecting the pixels at borders when necessary) and random horizontal flips. (Note that following [5] we used a larger translation size than on our earlier experiments.) We normalized the images to have channel-wise zero mean and unit variance over training data.
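A rough NumPy sketch of this augmentation pipeline (random translation with border reflection, random horizontal flip, channel-wise normalization); the helper names are ours, and this is an illustration rather than the authors' implementation.

```python
import numpy as np

def augment(image, max_shift=4, rng=np.random):
    """image: (H, W, C) float array. Apply a random translation of up to max_shift
    pixels (reflecting pixels at the borders) and a random horizontal flip."""
    padded = np.pad(image, ((max_shift, max_shift), (max_shift, max_shift), (0, 0)),
                    mode="reflect")
    dy = rng.randint(0, 2 * max_shift + 1)
    dx = rng.randint(0, 2 * max_shift + 1)
    h, w, _ = image.shape
    out = padded[dy:dy + h, dx:dx + w]
    if rng.rand() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    return out

def normalize(image, channel_mean, channel_std):
    """Channel-wise zero mean, unit variance using training-set statistics."""
    return (image - channel_mean) / channel_std
```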
We trained the network using stochastic gradient descent with initial learning rate 0.2 and Nesterov momentum 0.9. We trained for 180 epochs (when training with 1000 labels) or 300 epochs (when training with 4000 labels), decaying the learning rate with cosine annealing [14] so that it would
# 1https://github.com/pytorch/pytorch
Table 7: Differences in training settings between the ConvNet experiments
# semi-supervised CIFAR-10 | 1703.01780#48 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 49 | # 1https://github.com/pytorch/pytorch
Table 7: Differences in training settings between the ConvNet experiments
# semi-supervised CIFAR-10
Aspect (one value per experiment column, CIFAR-10 last):
image pre-processing: zero mean, unit variance / zero mean, unit variance / ZCA
image augmentation: translation / translation / translation + horizontal flip
number of labeled examples per minibatch: 1 / 100 / varying
training steps: 180000-600000 / 180000 / 150000
Adam β2 during and after ramp-up: 0.99, 0.999 / 0.99, 0.999 / 0.999, 0.999
EMA decay rate during and after ramp-up: 0.99, 0.999 / 0.99, 0.999 / 0.999, 0.999
Ramp-downs: No / No / Yes
have reached zero after 210 epochs (when 1000 labels) or 350 epochs (when 4000 labels). We define epoch as one pass through all the unlabeled examples; each labeled example was included many times in one such epoch. | 1703.01780#49 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 50 | We used a total cost function consisting of classiï¬cation cost and three other costs: We used the dual output trick described in subsection 3.4 and Figure 4(e) with MSE cost between logits with coefï¬cient 0.01. This simpliï¬ed other hyperparameter choices and improved the results. We used MSE consistency cost with coefï¬cient ramping up from 0 to 100.0 during the ï¬rst 5 epochs, using the same sigmoid ramp-up shape as in the experiments above. We also used an L2 weight decay with coefï¬cient 2e-4. We used EMA decay value 0.97 (when 1000 labels) or 0.99 (when 4000 labels).
# B.2.2 ResNet on ImageNet
On our ImageNet evaluation runs, we used a 152-layer ResNeXt architecture [33] consisting of 3+8+36+3 residual blocks, with 32 groups of 4 channels on the first block. | 1703.01780#50 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 51 | We trained the network on 10 GPUs using minibatches of 400 images, 200 of which were labeled. We sampled the images in the same way as described in the SVHN experiments above. Following [10], we randomly augmented images using a 10 degree rotation, a crop with aspect ratio between 3/4 and 4/3 resized to 224x224 pixels, a random horizontal ï¬ip and a color jitter. We then normalized images to have channel-wise zero mean and unit variance over training data.
We trained the network using stochastic gradient descent with maximum learning rate 0.25 and Nesterov momentum 0.9. We ramped up the learning rate linearly during the first two epochs from 0.1 to 0.25. We trained for 60 epochs, decaying the learning rate with cosine annealing so that it would have reached zero after 75 epochs. | 1703.01780#51 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 52 | We used a total cost function consisting of classiï¬cation cost and three other costs: We used the dual output trick described in subsection 3.4 and Figure 4(e) with MSE cost between logits with coefï¬cient 0.01. We used a KL-divergence consistency cost with coefï¬cient ramping up from 0 to 10.0 during the ï¬rst 5 epochs, using the same sigmoid ramp-up shape as in the experiments above. We also used an L2 weight decay with coefï¬cient 5e-5. We used EMA decay value 0.9997.
Figure 5: Copy of Figure 4(f) in the main text. Validation error on 250-label SVHN over four runs and their mean, when varying the consistency cost shape hyperparameter τ between mean squared error (τ = 0) and KL-divergence (τ = 1).
# B.3 Use of training, validation and test data | 1703.01780#52 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 53 | # B.3 Use of training, validation and test data
In the development phase of our work with the CIFAR-10 and SVHN datasets, we separated 10% of training data into a validation set. We removed randomly most of the labels from the remaining training data, retaining an equal number of labels from each class. We used a different set of labels for each of the evaluation runs. We retained labels in the validation set to enable exploration of the results. In the final evaluation phase we used the entire training set, including the validation set but with labels removed.
In a real-world use case we would not possess a large fully-labeled validation set. However, this setup is useful in a research setting, since it enables a more thorough analysis of the results. To the best of our knowledge, this is the common practice when carrying out research on semi-supervised learning. By retaining the hyperparameters from previous work where possible we decreased the chance of over-fitting our results to validation labels.
In the ImageNet experiments we removed randomly most of the labels from the training set, retaining an equal number of labels from each class. For validation we used the given validation set without modifications. We used a different set of training labels for each of the evaluation runs and evaluated the results against the validation set.
# C Varying between mean squared error and KL-divergence | 1703.01780#53 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 54 | # C Varying between mean squared error and KL-divergence
As mentioned in subsection 3.4, we ran an experiment varying the consistency cost function between MSE and KL-divergence (reproduced in Figure 5). The exact consistency function we used was
$$C_\tau(p, q) = Z_\tau \, D_{\mathrm{KL}}(p_\tau \,\|\, q_\tau), \quad \text{where } Z_\tau = \frac{2}{N\tau^2}, \quad p_\tau = \tau p + \frac{1-\tau}{N}, \quad q_\tau = \tau q + \frac{1-\tau}{N},$$
τ ∈ (0, 1] and N is the number of classes. Taking the Taylor expansion we get
$$D_{\mathrm{KL}}(p_\tau \,\|\, q_\tau) = \sum_i \frac{\tau^2 N}{2} (p_i - q_i)^2 + O(N^2 \tau^3),$$
where the zeroth- and first-order terms vanish. Consequently,
$$C_\tau(p, q) \to \sum_i (p_i - q_i)^2 \ \text{ when } \tau \to 0, \qquad C_\tau(p, q) = \frac{2}{N} D_{\mathrm{KL}}(p \,\|\, q) \ \text{ when } \tau = 1.$$
The results in Figure 5 show that MSE performs better than KL-divergence or C_τ with any τ. We also tried other consistency cost weights with KL-divergence and did not reach the accuracy of MSE.
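For reference, a direct NumPy transcription of the C_τ cost defined above; this is our own sketch (it assumes strictly positive probability vectors), not the code used in the experiments.

```python
import numpy as np

def consistency_cost(p, q, tau):
    """C_tau(p, q) = Z_tau * KL(p_tau || q_tau) for probability vectors p, q.
    tau -> 0 recovers the MSE cost sum_i (p_i - q_i)^2; tau = 1 gives (2/N) KL(p || q)."""
    n = p.shape[-1]
    z = 2.0 / (n * tau ** 2)
    p_t = tau * p + (1.0 - tau) / n          # soften predictions toward uniform
    q_t = tau * q + (1.0 - tau) / n
    kl = np.sum(p_t * np.log(p_t / q_t), axis=-1)
    return z * kl
```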
15 | 1703.01780#54 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01780 | 55 | 15
The exact reason why MSE performs better than KL-divergence remains unclear, but the form of C_τ may help explain it. Modern neural network architectures tend to produce accurate but overly confident predictions [7]. We can assume that the true labels are accurate, but we should discount the confidence of the teacher predictions. We can do that by having τ = 1 for the classification cost and τ < 1 for the consistency cost. Then p_τ and q_τ discount the confidence of the approximations while Z_τ keeps gradients large enough to provide a useful training signal. However, we did not perform experiments to validate this explanation.
16 | 1703.01780#55 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art
results in several semi-supervised learning benchmarks. It maintains an
exponential moving average of label predictions on each training example, and
penalizes predictions that are inconsistent with this target. However, because
the targets change only once per epoch, Temporal Ensembling becomes unwieldy
when learning large datasets. To overcome this problem, we propose Mean
Teacher, a method that averages model weights instead of label predictions. As
an additional benefit, Mean Teacher improves test accuracy and enables training
with fewer labels than Temporal Ensembling. Without changing the network
architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250
labels, outperforming Temporal Ensembling trained with 1000 labels. We also
show that a good network architecture is crucial to performance. Combining Mean
Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with
4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels
from 35.24% to 9.11%. | http://arxiv.org/pdf/1703.01780 | Antti Tarvainen, Harri Valpola | cs.NE, cs.LG, stat.ML | In this version: Corrected hyperparameters of the 4000-label CIFAR-10
ResNet experiment. Changed Antti's contact info, Advances in Neural
Information Processing Systems 30 (NIPS 2017) pre-proceedings | null | cs.NE | 20170306 | 20180416 | [
{
"id": "1706.04599"
},
{
"id": "1705.07485"
},
{
"id": "1503.02531"
},
{
"id": "1609.08976"
},
{
"id": "1511.04508"
},
{
"id": "1610.02242"
},
{
"id": "1605.06465"
},
{
"id": "1608.03983"
},
{
"id": "1611.05431"
},
{
"id": "1704.03976"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1709.01507"
}
] |
1703.01041 | 1 | Abstract Neural networks have proven effective at solv- ing difï¬cult problems but designing their archi- tectures can be challenging, even for image clas- siï¬cation problems alone. Our goal is to min- imize human participation, so we employ evo- lutionary algorithms to discover such networks automatically. Despite signiï¬cant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Speciï¬- cally, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, start- ing from trivial initial conditions and reaching accuracies of 94.6% (95.6% for ensemble) and 77.0%, respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participa- tion is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeata- bility of results, the variability in the outcomes and the computational requirements.
# 1. Introduction | 1703.01041#1 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 2 | # 1. Introduction
Neural networks can successfully perform difficult tasks where large amounts of training data are available (He et al., 2015; Weyand et al., 2016; Silver et al., 2016; Wu et al., 2016). Discovering neural network architectures, however, remains a laborious task. Even within the specific problem of image classification, the state of the art was attained through many years of focused investigation by hundreds of researchers (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); Szegedy et al. (2015); He et al. (2016); Huang et al. (2016a), among many others). | 1703.01041#2 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 3 | It is therefore not surprising that in recent years, tech- niques to automatically discover these architectures have been gaining popularity (Bergstra & Bengio, 2012; Snoek et al., 2012; Han et al., 2015; Baker et al., 2016; Zoph & Le, 2016). One of the earliest such âneuro-discoveryâ methods was neuro-evolution (Miller et al., 1989; Stanley & Miikkulainen, 2002; Stanley, 2007; Bayer et al., 2009; Stanley et al., 2009; Breuel & Shafait, 2010; Pugh & Stan- ley, 2013; Kim & Rigazio, 2015; Zaremba, 2015; Fernando et al., 2016; Morse & Stanley, 2016). Despite the promis- ing results, the deep learning community generally per- ceives evolutionary algorithms to be incapable of match- ing the accuracies of hand-designed models (Verbancsics & Harguess, 2013; Baker et al., 2016; Zoph & Le, 2016). In this paper, we show that it is possible to evolve such com- petitive models today, given enough computational power. | 1703.01041#3 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 4 | We used slightly-modiï¬ed known evolutionary algorithms and scaled up the computation to unprecedented levels, as far as we know. This, together with a set of novel and intuitive mutation operators, allowed us to reach compet- itive accuracies on the CIFAR-10 dataset. This dataset was chosen because it requires large networks to reach high accuracies, thus presenting a computational challenge. We also took a small ï¬rst step toward generalization and evolved networks on the CIFAR-100 dataset. In transi- tioning from CIFAR-10 to CIFAR-100, we did not mod- ify any aspect or parameter of our algorithm. Our typical neuro-evolution outcome on CIFAR-10 had a test accuracy with µ = 94.1%, Ï = 0.4% @ 9 à 1019 FLOPs, and our top model (by validation accuracy) had a test accuracy of 94.6% @ 4Ã1020 FLOPs. Ensembling the validation-top 2 models from each population reaches a test accuracy of 95.6%, at no additional training cost. On CIFAR-100, our single experiment resulted in a test accuracy of 77.0% @ 2 à 1020 FLOPs. As far as we know, these are the most accurate results obtained on these datasets by automated discovery methods that start from trivial initial conditions. | 1703.01041#4 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 5 | 1Google Brain, Mountain View, California, USA 2Google Re- search, Mountain View, California, USA. Correspondence to: Es- teban Real <[email protected]>.
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
Throughout this study, we placed special emphasis on the simplicity of the algorithm. In particular, it is a "one-shot" technique, producing a fully trained neural network requiring no post-processing. It also has few impactful meta-parameters (i.e. parameters not optimized by the algorithm). Starting out with poor-performing models with
Table 1. Comparison with single-model hand-designed architectures. The "C10+" and "C100+" columns indicate the test accuracy on the data-augmented CIFAR-10 and CIFAR-100 datasets, respectively. The "Reachable?" column denotes whether the given hand-designed model lies within our search space. An entry of "–" indicates that no value was reported. The † indicates a result reported by Huang et al. (2016b) instead of the original author. Much of this table was based on that presented in Huang et al. (2016a). | 1703.01041#5 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 6 | STUDY PARAMS. C10+ C100+ REACHABLE? MAXOUT (GOODFELLOW ET AL., 2013) NETWORK IN NETWORK (LIN ET AL., 2013) ALL-CNN (SPRINGENBERG ET AL., 2014) DEEPLY SUPERVISED (LEE ET AL., 2015) HIGHWAY (SRIVASTAVA ET AL., 2015) RESNET (HE ET AL., 2016) EVOLUTION (OURS) WIDE RESNET 28-10 (ZAGORUYKO & KOMODAKIS, 2016) WIDE RESNET 40-10+D/O (ZAGORUYKO & KOMODAKIS, 2016) DENSENET (HUANG ET AL., 2016A) 61.4% 90.7% â 91.2% 66.3% 1.3 M 92.8% 65.4% 92.0% 2.3 M 92.3% 67.6% 1.7 M 93.4% 72.8%â 5.4 M 94.6% 40.4 M 36.5 M 96.0% 50.7 M 96.2% 25.6 M 96.7% â â â 77.0% 80.0% 81.7% 82.8% NO NO YES NO NO YES N/A YES NO NO | 1703.01041#6 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 7 | no convolutions, the algorithm must evolve complex con- volutional neural networks while navigating a fairly unre- stricted search space: no ï¬xed depth, arbitrary skip con- nections, and numerical parameters that have few restric- tions on the values they can take. We also paid close atten- tion to result reporting. Namely, we present the variabil- ity in our results in addition to the top value, we account for researcher degrees of freedom (Simmons et al., 2011), we study the dependence on the meta-parameters, and we disclose the amount of computation necessary to reach the main results. We are hopeful that our explicit discussion of computation cost could spark more study of efï¬cient model search and training. Studying model performance normal- ized by computational investment allows consideration of economic concepts like opportunity cost.
# 2. Related Work
2015; Fernando et al., 2016). For example, the CPPN (Stanley, 2007; Stanley et al., 2009) allows for the evolution of repeating features at different scales. Also, Kim & Rigazio (2015) use an indirect encoding to improve the convolution filters in an initially highly-optimized fixed architecture. | 1703.01041#7 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 8 | Research on weight evolution is still ongoing (Morse & Stanley, 2016) but the broader machine learning commu- nity defaults to back-propagation for optimizing neural net- work weights (Rumelhart et al., 1988). Back-propagation and evolution can be combined as in Stanley et al. (2009), where only the structure is evolved. Their algorithm fol- lows an alternation of architectural mutations and weight back-propagation. Similarly, Breuel & Shafait (2010) use this approach for hyper-parameter search. Fernando et al. (2016) also use back-propagation, allowing the trained weights to be inherited through the structural modiï¬ca- tions. | 1703.01041#8 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 9 | Neuro-evolution dates back many years (Miller et al., 1989), originally being used only to evolve the weights of a ï¬xed architecture. Stanley & Miikkulainen (2002) showed that it was advantageous to simultaneously evolve the architecture using the NEAT algorithm. NEAT has three kinds of mutations: (i) modify a weight, (ii) add a connection between existing nodes, or (iii) insert a node while splitting an existing connection. It also has a mech- anism for recombining two models into one and a strategy to promote diversity known as ï¬tness sharing (Goldberg et al., 1987). Evolutionary algorithms represent the models using an encoding that is convenient for their purposeâ analogous to natureâs DNA. NEAT uses a direct encoding: every node and every connection is stored in the DNA. The alternative paradigm, indirect encoding, has been the sub- ject of much neuro-evolution research (Gruau, 1993; Stan- ley et al., 2009; Pugh & Stanley, 2013; Kim & Rigazio, | 1703.01041#9 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 10 | The above studies create neural networks that are small in comparison to the typical modern architectures used for im- age classiï¬cation (He et al., 2016; Huang et al., 2016a). Their focus is on the encoding or the efï¬ciency of the evo- lutionary process, but not on the scale. When it comes to images, some neuro-evolution results reach the computa- tional scale required to succeed on the MNIST dataset (Le- Cun et al., 1998). Yet, modern classiï¬ers are often tested on realistic images, such as those in the CIFAR datasets (Krizhevsky & Hinton, 2009), which are much more chal- lenging. These datasets require large models to achieve high accuracy.
Non-evolutionary neuro-discovery methods have been more successful at tackling realistic image data. Snoek et al. (2012) used Bayesian optimization to tune 9 hyper-parameters for a fixed-depth architecture, reach- | 1703.01041#10 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | http://arxiv.org/pdf/1703.01041 | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin | cs.NE, cs.AI, cs.CV, cs.DC, I.2.6; I.5.1; I.5.2 | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | cs.NE | 20170303 | 20170611 | [
{
"id": "1502.03167"
},
{
"id": "1605.07146"
},
{
"id": "1611.01578"
},
{
"id": "1503.01824"
},
{
"id": "1611.02167"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
}
] |
1703.01041 | 11 | Table 2. Comparison with automatically discovered architectures. The âC10+â and âC100+â contain the test accuracy on the data- augmented CIFAR-10 and CIFAR-100 datasets, respectively. An entry of âââ indicates that the information was not reported or is not known to us. For Zoph & Le (2016), we quote the result with the most similar search space to ours, as well as their best result. Please refer to Table 1 for hand-designed results, including the state of the art. âDiscrete params.â means that the parameters can be picked from a handful of values only (e.g. strides â {1, 2, 4}). | 1703.01041#11 | Large-Scale Evolution of Image Classifiers | Neural networks have proven effective at solving difficult problems but
1703.01041 | 12 |
| STUDY | STARTING POINT | CONSTRAINTS | POST-PROCESSING | PARAMS. | C10+ | C100+ |
| --- | --- | --- | --- | --- | --- | --- |
| BAYESIAN (SNOEK ET AL., 2012) | 3 LAYERS | FIXED ARCHITECTURE, NO SKIPS | NONE | - | 90.5% | - |
| Q-LEARNING (BAKER ET AL., 2016) | - | DISCRETE PARAMS., MAX. NUM. LAYERS, NO SKIPS | TUNE, RETRAIN | 11.2 M | 93.1% | 72.9% |
| RL (ZOPH & LE, 2016) | 20 LAYERS, 50% SKIPS | DISCRETE PARAMS., EXACTLY 20 LAYERS | SMALL GRID SEARCH, RETRAIN | 2.5 M | 94.0% | - |
| RL (ZOPH & LE, 2016) | 39 LAYERS, 2 POOL LAYERS AT 13 AND 26, 50% SKIPS | DISCRETE PARAMS., EXACTLY 39 LAYERS, 2 POOL LAYERS AT 13 AND 26 | ADD MORE FILTERS, SMALL GRID SEARCH, RETRAIN | 37.0 M | 96.4% | - |
| EVOLUTION (OURS) | SINGLE LAYER, ZERO CONVS. | POWER-OF-2 STRIDES | NONE | 5.4 M (40.4 M ENSEMB.) | 94.6% (95.6% ENSEMB.) | 77.0% |
1703.01041 | 13 | ing a new state of the art at the time. Zoph & Le (2016) used reinforcement learning on a deeper fixed-length architecture. In their approach, a neural network (the "discoverer") constructs a convolutional neural network (the "discovered") one layer at a time. In addition to tuning layer parameters, they add and remove skip connections. This, together with some manual post-processing, gets them very close to the (current) state of the art. (Additionally, they surpassed the state of the art on a sequence-to-sequence problem.) Baker et al. (2016) use Q-learning to also discover a network one layer at a time, but in their approach, the number of layers is decided by the discoverer. This is a desirable feature, as it would allow a system to construct shallow or deep solutions, as may be the requirements of the dataset at hand. Different datasets would not require specially tuning the algorithm. Comparisons among these methods are difficult because they explore very different search spaces and have very different initial conditions (Table 2).
1703.01041 | 14 | Tangentially, there has also been neuro-evolution work on LSTM structure (Bayer et al., 2009; Zaremba, 2015), but this is beyond the scope of this paper. Also related to this work is that of Saxena & Verbeek (2016), who embed convolutions with different parameters into a species of "super-network" with many parallel paths. Their algorithm then selects and ensembles paths in the super-network. Finally, canonical approaches to hyper-parameter search are grid search (used in Zagoruyko & Komodakis (2016), for example) and random search, the latter being the better of the two (Bergstra & Bengio, 2012).
1703.01041 | 15 | Our approach builds on previous work, with some important differences. We explore large model-architecture search spaces starting with basic initial conditions to avoid priming the system with information about known good strategies for the specific dataset at hand. Our encoding is different from the neuro-evolution methods mentioned above: we use a simplified graph as our DNA, which is transformed to a full neural network graph for training and evaluation (Section 3). Some of the mutations acting on this DNA are reminiscent of NEAT. However, instead of single nodes, one mutation can insert whole layers, i.e. tens to hundreds of nodes at a time. We also allow for these layers to be removed, so that the evolutionary process can simplify an architecture in addition to complexifying it. Layer parameters are also mutable, but we do not prescribe a small set of possible values to choose from, to allow for a larger search space. We do not use fitness sharing. We report additional results using recombination,
1703.01041 | 16 | but for the most part, we used mutation only. On the other hand, we do use back-propagation to optimize the weights, which can be inherited across mutations. Together with a learning-rate mutation, this allows the exploration of the space of learning-rate schedules, yielding fully trained models at the end of the evolutionary process (Section 3). Tables 1 and 2 compare our approach with hand-designed architectures and with other neuro-discovery techniques, respectively.
1703.01041 | 18 | To automatically search for high-performing neural network architectures, we evolve a population of models. Each model (or individual) is a trained architecture. The model's accuracy on a separate validation dataset is a measure of the individual's quality or fitness. During each evolutionary step, a computer (a worker) chooses two individuals at random from this population and compares their fitnesses. The worst of the pair is immediately removed from the population: it is killed. The best of the pair is selected to be a parent, that is, to undergo reproduction. By this we mean that the worker creates a copy of the parent and modifies this copy by applying a mutation, as described below. We will refer to this modified copy as the child. After the worker creates the child, it trains this child, evaluates it on the validation set, and puts it back into the population. The child then becomes alive, i.e. free to act as a parent. Our scheme, therefore, uses repeated pairwise competitions of random individuals, which makes it an example of tournament selection (Goldberg & Deb, 1991).
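As a concrete illustration of this scheme, here is a minimal Python sketch of one worker step; the population container and the `mutate`, `train`, and `evaluate` helpers are hypothetical stand-ins, not the authors' code.

```python
import copy
import random

def worker_step(population, mutate, train, evaluate):
    """One evolutionary step: a pairwise tournament with immediate replacement.

    `population` is a list of (dna, fitness) pairs; `mutate`, `train` and
    `evaluate` are assumed helpers standing in for the real system.
    """
    # Choose two individuals at random and compare their fitnesses.
    i, j = random.sample(range(len(population)), 2)
    worse, better = (i, j) if population[i][1] < population[j][1] else (j, i)

    # The worse of the pair is killed; the better one becomes a parent.
    parent_dna, _ = population[better]
    del population[worse]

    # Reproduction: copy the parent, mutate the copy, train and validate it.
    child_dna = mutate(copy.deepcopy(parent_dna))
    train(child_dna)
    child_fitness = evaluate(child_dna)            # validation accuracy
    population.append((child_dna, child_fitness))  # the child becomes alive
```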
1703.01041 | 20 | lutional network, two of the dimensions of the tensor represent the spatial coordinates of the image and the third is a number of channels. Activation functions are applied at the vertices and can be either (i) batch-normalization (Ioffe & Szegedy, 2015) with rectified linear units (ReLUs) or (ii) plain linear units. The graph's edges represent identity connections or convolutions and contain the mutable numerical parameters defining the convolution's properties. When multiple edges are incident on a vertex, their spatial scales or numbers of channels may not coincide. However, the vertex must have a single size and number of channels for its activations. The inconsistent inputs must be resolved. Resolution is done by choosing one of the incoming edges as the primary one. We pick this primary edge to be the one that is not a skip connection. The activations coming from the non-primary edges are reshaped through zeroth-order interpolation in the case of the size and through truncation/padding in the case of the number of channels, as in He et al. (2016). In addition to the graph, the learning-rate value is also stored in the DNA.
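For illustration only, the encoding described above could be written down with a few Python dataclasses; the field names here are assumptions of this sketch, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Vertex:
    # Activations are rank-3 tensors: height x width x channels.
    use_batch_norm_relu: bool = True   # else plain linear units

@dataclass
class Edge:
    # An edge is either an identity/skip connection or a convolution
    # carrying the mutable numerical parameters.
    from_vertex: int
    to_vertex: int
    is_convolution: bool = True
    filter_height: int = 3
    filter_width: int = 3
    stride_log2: float = 0.0         # strides are powers of 2
    channels: Optional[int] = None   # None means "same as input"
    is_primary: bool = True          # non-skip edge used to resolve input shapes

@dataclass
class DNA:
    vertices: List[Vertex] = field(default_factory=list)
    edges: List[Edge] = field(default_factory=list)
    learning_rate: float = 0.1       # also stored in the DNA
```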
1703.01041 | 21 | Using this strategy to search large spaces of complex image models requires considerable computation. To achieve scale, we developed a massively-parallel, lock-free infrastructure. Many workers operate asynchronously on different computers. They do not communicate directly with each other. Instead, they use a shared file-system, where the population is stored. The file-system contains directories that represent the individuals. Operations on these individuals, such as the killing of one, are represented as atomic renames on the directory (see footnote 2). Occasionally, a worker may concurrently modify the individual another worker is operating on. In this case, the affected worker simply gives up and tries again. The population size is 1000 individuals, unless otherwise stated. The number of workers is always 1/4 of the population size. To allow for long run-times with a limited amount of space, dead individuals' directories are frequently garbage-collected.
A child is similar but not identical to the parent because of the action of a mutation. In each reproduction event, the worker picks a mutation at random from a predetermined set. The set contains the following mutations:
• ALTER-LEARNING-RATE (sampling details below).
• IDENTITY (effectively means "keep training").
• RESET-WEIGHTS (sampled as in He et al. (2015), for example).
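To illustrate the rename-based coordination described above, here is a rough sketch; the directory layout and state names are assumptions, not the authors' infrastructure, and atomicity in practice depends on the shared file-system.

```python
import os

# Hypothetical layout: one directory per individual, with its state encoded in
# the directory name, e.g. "0001234_alive" or "0001234_dead".
POPULATION_DIR = "/shared/population"

def try_transition(individual_id: str, old_state: str, new_state: str) -> bool:
    """Atomically move an individual between states via a directory rename.

    If another worker renamed the directory first, os.rename raises and we
    simply give up, letting the caller pick a different individual.
    """
    src = os.path.join(POPULATION_DIR, f"{individual_id}_{old_state}")
    dst = os.path.join(POPULATION_DIR, f"{individual_id}_{new_state}")
    try:
        os.rename(src, dst)   # a single rename acts as the lock-free "commit"
        return True
    except OSError:
        return False          # lost the race; try again with another individual
```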
1703.01041 | 22 | • INSERT-CONVOLUTION (inserts a convolution at a random location in the "convolutional backbone", as in Figure 1. The inserted convolution has 3 × 3 filters, strides of 1 or 2 at random, number of channels same as input. May apply batch-normalization and ReLU activation or none at random).
• REMOVE-CONVOLUTION.
• ALTER-STRIDE (only powers of 2 are allowed).
• ALTER-NUMBER-OF-CHANNELS (of random conv.).
• FILTER-SIZE (horizontal or vertical at random, on random convolution, odd values only).
• INSERT-ONE-TO-ONE (inserts a one-to-one/identity connection, analogous to the insert-convolution mutation).
• ADD-SKIP (identity between random layers).
• REMOVE-SKIP (removes random skip).
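Putting the full list together, the worker's random pick over this predetermined set could look like the sketch below; a uniform choice is shown, since the probabilities were not tuned, and the string names are just labels for this sketch.

```python
import random

# The predetermined mutation set; names follow the list above.
MUTATIONS = [
    "ALTER-LEARNING-RATE", "IDENTITY", "RESET-WEIGHTS",
    "INSERT-CONVOLUTION", "REMOVE-CONVOLUTION", "ALTER-STRIDE",
    "ALTER-NUMBER-OF-CHANNELS", "FILTER-SIZE", "INSERT-ONE-TO-ONE",
    "ADD-SKIP", "REMOVE-SKIP",
]

def pick_mutation():
    """Each reproduction event applies one mutation chosen at random."""
    return random.choice(MUTATIONS)
```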
1703.01041 | 23 | These specific mutations were chosen for their similarity to the actions that a human designer may take when improving an architecture. This may clear the way for hybrid evolutionary-hand-design methods in the future. The probabilities for the mutations were not tuned in any way.
# 3.2. Encoding and Mutations
Individual architectures are encoded as a graph that we refer to as the DNA. In this graph, the vertices represent rank-3 tensors or activations. As is standard for a convo-
Footnote 2: The use of the file-name string to contain key information about the individual was inspired by Breuel & Shafait (2010), and it speeds up disk access enormously. In our case, the file name contains the state of the individual (alive, dead, training, etc.).
A mutation that acts on a numerical parameter chooses the new value at random around the existing value. All sampling is from uniform distributions. For example, a mutation acting on a convolution with 10 output channels will
1703.01041 | 24 | result in a convolution having between 5 and 20 output channels (that is, half to twice the original value). All values within the range are possible. As a result, the models are not constrained to a number of filters that is known to work well. The same is true for all other parameters, yielding a "dense" search space. In the case of the strides, this applies to the log-base-2 of the value, to allow for activation shapes to match more easily (see footnote 3). In principle, there is also no upper limit to any of the parameters. All model depths are attainable, for example. Up to hardware constraints, the search space is unbounded. The dense and unbounded nature of the parameters results in the exploration of a truly large set of possible architectures.
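A small illustrative sketch of this numerical-mutation rule follows; it shows only the basic half-to-twice sampling, with the stride and integer details noted in comments (names are assumptions of the sketch).

```python
import random

def mutate_numerical(value: float) -> float:
    """Pick the new value uniformly between half and twice the current one.

    In the paper, strides apply this rule to their log-base-2, and integer
    parameters are stored as floats so that small mutations can accumulate
    (see footnote 3); this sketch shows only the basic rule.
    """
    return random.uniform(0.5 * value, 2.0 * value)

# Example: a convolution with 10 output channels mutates to somewhere in [5, 20].
new_channels = round(mutate_numerical(10.0))
```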
1703.01041 | 25 | # 3.3. Initial Conditions
Every evolution experiment begins with a population of simple individuals, all with a learning rate of 0.1. They are all very bad performers. Each initial individual constitutes just a single-layer model with no convolutions. This conscious choice of poor initial conditions forces evolution to make the discoveries by itself. The experimenter contributes mostly through the choice of mutations that demarcate a search space. Altogether, the use of poor initial conditions and a large search space limits the experimenter's impact. In other words, it prevents the experimenter from "rigging" the experiment to succeed.
# 3.5. Computation cost
To estimate computation costs, we identified the basic TensorFlow (TF) operations used by our model training and validation, like convolutions, generic matrix multiplications, etc. For each of these TF operations, we estimated the theoretical number of floating-point operations (FLOPs) required. This resulted in a map from TF operation to FLOPs, which is valid for all our experiments.
1703.01041 | 26 | For each individual within an evolution experiment, we compute the total FLOPs incurred by the TF operations in its architecture over one batch of examples, both during its training (Ft FLOPs) and during its validation (Fv FLOPs). Then we assign to the individual the cost Ft·Nt + Fv·Nv, where Nt and Nv are the number of training and validation batches, respectively. The cost of the experiment is then the sum of the costs of all its individuals.
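A toy sketch of this cost bookkeeping is shown below; the per-operation FLOP map and the operation names are placeholders, not measured values.

```python
# Hypothetical per-op FLOP estimates; in the paper this map is derived from the
# TensorFlow operations used during training and validation.
FLOPS_PER_OP = {"conv2d_3x3_64ch": 2.4e8, "matmul_fc": 1.3e6}

def individual_cost(train_ops, valid_ops, num_train_batches, num_valid_batches):
    """Cost = Ft*Nt + Fv*Nv: per-batch FLOPs times the number of batches."""
    f_t = sum(FLOPS_PER_OP[op] for op in train_ops)   # FLOPs per training batch
    f_v = sum(FLOPS_PER_OP[op] for op in valid_ops)   # FLOPs per validation batch
    return f_t * num_train_batches + f_v * num_valid_batches

def experiment_cost(individuals):
    """Total cost: sum over all individuals, each given as a
    (train_ops, valid_ops, num_train_batches, num_valid_batches) tuple."""
    return sum(individual_cost(*ind) for ind in individuals)
```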
We intend our FLOPs measurement as a coarse estimate only. We do not take into account input/output, data preprocessing, TF graph building or memory-copying operations. Some of these unaccounted operations take place once per training run or once per step and some have a component that is constant in the model size (such as disk-access latency or input data cropping). We therefore expect the estimate to be more useful for large architectures (for example, those with many convolutions).
1703.01041 | 27 | # 3.4. Training and Validation
Training and validation is done on the CIFAR-10 dataset. This dataset consists of 50,000 training examples and 10,000 test examples, all of which are 32 × 32 color images labeled with 1 of 10 common object classes (Krizhevsky & Hinton, 2009). 5,000 of the training examples are held out in a validation set. The remaining 45,000 examples constitute our actual training set. The training set is augmented as in He et al. (2016). The CIFAR-100 dataset has the same number of dimensions, colors and examples as CIFAR-10, but uses 100 classes, making it much more challenging.
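For illustration, the described split (45,000 training / 5,000 validation images out of CIFAR-10's 50,000 training examples) could be set up as in this generic sketch; it is not the authors' pipeline and omits the data augmentation.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# 50,000 CIFAR-10 training images: hold out 5,000 for validation.
full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
train_set, valid_set = random_split(full_train, [45_000, 5_000],
                                    generator=torch.Generator().manual_seed(0))
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())
```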
1703.01041 | 28 | Training is done with TensorFlow (Abadi et al., 2016), using SGD with a momentum of 0.9 (Sutskever et al., 2013), a batch size of 50, and a weight decay of 0.0001. Each training runs for 25,600 steps, a value chosen to be brief enough so that each individual could be trained in a few seconds to a few hours, depending on model size. The loss function is the cross-entropy. Once training is complete, a single evaluation on the validation set provides the accuracy to use as the individual's fitness. Ensembling was done by majority voting during the testing evaluation. The models used in the ensemble were selected by validation accuracy.
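The reported hyper-parameters map onto a standard training loop; the sketch below uses PyTorch rather than the authors' TensorFlow code, and `model` and `train_loader` (built with batch size 50) are assumed to exist.

```python
import torch
import torch.nn as nn

def train_individual(model, train_loader, lr, steps=25_600, device="cpu"):
    """Train one individual with the reported settings: SGD, momentum 0.9,
    weight decay 1e-4, cross-entropy loss, for 25,600 steps."""
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=0.9, weight_decay=1e-4)
    criterion = nn.CrossEntropyLoss()
    step = 0
    while step < steps:
        for images, labels in train_loader:   # batches of 50 CIFAR images
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
            step += 1
            if step >= steps:
                break
    return model
```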
Footnote 3: For integer DNA parameters, we actually store and mutate a floating-point value. This allows multiple small mutations to have a cumulative effect in spite of integer round-off.
1703.01041 | 29 | # 3.6. Weight Inheritance
We need architectures that are trained to completion within an evolution experiment. If this does not happen, we are forced to retrain the best model at the end, possibly having to explore its hyper-parameters. Such extra exploration tends to depend on the details of the model being retrained. On the other hand, 25,600 steps are not enough to fully train each individual. Training a large model to completion is prohibitively slow for evolution. To resolve this dilemma, we allow the children to inherit the parents' weights whenever possible. Namely, if a layer has matching shapes, the weights are preserved. Consequently, some mutations preserve all the weights (like the identity or learning-rate mutations), some preserve none (the weight-resetting mutation), and most preserve some but not all. An example of the latter is the filter-size mutation: only the filters of the convolution being mutated will be discarded.
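A minimal sketch of shape-matched weight inheritance; representing the weights as a name-to-array dictionary is an assumption of the sketch.

```python
import numpy as np

def inherit_weights(child_weights, parent_weights):
    """Copy the parent's weights into the child wherever layer names and shapes
    match; everything else keeps its fresh random initialization."""
    for name, child_w in child_weights.items():
        parent_w = parent_weights.get(name)
        if parent_w is not None and parent_w.shape == child_w.shape:
            child_weights[name] = parent_w.copy()  # preserved across the mutation
        # else: e.g. a filter-size mutation changed this shape, so it is re-learned
    return child_weights
```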
# 3.7. Reporting Methodology