# Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

...to run in a particular direction or at a particular velocity. In the goal velocity experiments, the reward is...

Reusing knowledge from past tasks may be a crucial ingredient in making high-capacity scalable models, such as deep neural networks, amenable to fast training with small datasets. We believe that this work is one step toward a simple and general-purpose meta-learning technique that can be applied to any problem and any model. Further research in this area can make multitask initialization a standard ingredient in deep learning and reinforcement learning.
# Acknowledgements

The authors would like to thank Xi Chen and Trevor Darrell for helpful discussions, Yan Duan and Alex Lee for technical advice, Nikhil Mishra, Haoran Tang, and Greg Kahn for feedback on an early draft of the paper, and the anonymous reviewers for their comments. This work was supported in part by an ONR PECASE award and an NSF GRFP award.

# References

Ha, David, Dai, Andrew, and Le, Quoc V. Hypernetworks. International Conference on Learning Representations (ICLR), 2017.

Hochreiter, Sepp, Younger, A Steven, and Conwell, Peter R. Learning to learn using gradient descent. In
International Conference on Artificial Neural Networks. Springer, 2001.

Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. In Neural Information Processing Systems (NIPS), 2016.

Husken, Michael and Goerick, Christian. Fast learning for problem classes using knowledge based network initialization. In Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, volume 6, pp. 619–
624. IEEE, 2000.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning (ICML), 2015.

Kaiser, Lukasz, Nachum, Ofir, Roy, Aurko, and Bengio, Samy. Learning to remember rare events. International Conference on Learning Representations (ICLR), 2017.

Bengio, Samy, Bengio, Yoshua, Cloutier, Jocelyn, and Gecsei, Jan. On the optimization of a synaptic learning rule. In Optimality in Artificial and Biological Neural Networks, pp. 6–
8, 1992.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.

Bengio, Yoshua, Bengio, Samy, and Cloutier, Jocelyn. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.
Donahue, Jeff, Jia, Yangqing, Vinyals, Oriol, Hoffman, Judy, Zhang, Ning, Tzeng, Eric, and Darrell, Trevor. DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.

Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka, et al. Overcoming catastrophic forgetting in neural networks. arXiv preprint arXiv:1612.00796, 2016.

Koch, Gregory.
Siamese neural networks for one-shot image recognition. ICML Deep Learning Workshop, 2015.

Duan, Yan, Chen, Xi, Houthooft, Rein, Schulman, John, and Abbeel, Pieter. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning (ICML), 2016a.

Krähenbühl, Philipp, Doersch, Carl, Donahue, Jeff, and Darrell, Trevor. Data-dependent initializations of convolutional neural networks. International Conference on Learning Representations (ICLR), 2016.

Duan, Yan, Schulman, John, Chen, Xi, Bartlett, Peter L, Sutskever, Ilya, and Abbeel, Pieter. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016b.

Lake, Brenden M, Salakhutdinov, Ruslan, Gross, Jason, and Tenenbaum, Joshua B. One shot learning of simple visual concepts. In Conference of the Cognitive Science Society (CogSci), 2011.
Edwards, Harrison and Storkey, Amos. Towards a neural statistician. International Conference on Learning Representations (ICLR), 2017.

Li, Ke and Malik, Jitendra. Learning to optimize. International Conference on Learning Representations (ICLR), 2017.

Goodfellow, Ian J, Shlens, Jonathon, and Szegedy, Christian. Explaining and harnessing adversarial examples. International Conference on Learning Representations (ICLR), 2015.

Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning (ICML), 2015.
Munkhdalai, Tsendsuren and Yu, Hong. Meta networks. International Conference on Machine Learning (ICML), 2017.

Snell, Jake, Swersky, Kevin, and Zemel, Richard S. Prototypical networks for few-shot learning. arXiv preprint arXiv:1703.05175, 2017.

Naik, Devang K and Mammone, RJ. Meta-neural networks that learn by learning. In International Joint Conference on Neural Networks (IJCNN), 1992.
Thrun, Sebastian and Pratt, Lorien. Learning to learn. Springer Science & Business Media, 1998.

Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Actor-mimic: Deep multitask and transfer reinforcement learning. International Conference on Learning Representations (ICLR), 2016.

Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems (IROS), 2012.

Ravi, Sachin and Larochelle, Hugo. Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), 2017.

Vinyals, Oriol, Blundell, Charles, Lillicrap, Tim, Wierstra, Daan, et al. Matching networks for one shot learning. In Neural Information Processing Systems (NIPS), 2016.
Rei, Marek. Online representation learning in recurrent neural language models. arXiv preprint arXiv:1508.03854, 2015.

Wang, Jane X, Kurth-Nelson, Zeb, Tirumala, Dhruva, Soyer, Hubert, Leibo, Joel Z, Munos, Remi, Blundell, Charles, Kumaran, Dharshan, and Botvinick, Matt. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.

Rezende, Danilo Jimenez, Mohamed, Shakir, Danihelka, Ivo, Gregor, Karol, and Wierstra, Daan. One-shot generalization in deep generative models. International Conference on Machine Learning (ICML), 2016.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Neural Information Processing Systems (NIPS), 2016.

Santoro, Adam, Bartunov, Sergey, Botvinick, Matthew, Wierstra, Daan, and Lillicrap, Timothy. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning (ICML), 2016.

Saxe, Andrew, McClelland, James, and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. International Conference on Learning Representations (ICLR), 2014.

Schmidhuber, Jurgen. Evolutionary principles in self-referential learning. (On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.

Schmidhuber, Jürgen.
Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 1992.

Schulman, John, Levine, Sergey, Abbeel, Pieter, Jordan, Michael I, and Moritz, Philipp. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015.

Shyam, Pranav, Gupta, Shubham, and Dukkipati, Ambedkar. Attentive recurrent comparators.
International Conference on Machine Learning (ICML), 2017.

# A. Additional Experiment Details

In this section, we provide additional details of the experimental set-up and hyperparameters.

# A.1. Classification

For N-way, K-shot classification, each gradient is computed using a batch size of NK examples. For Omniglot, the 5-way convolutional and non-convolutional MAML models were each trained with 1 gradient step with step size α = 0.4 and a meta batch-size of 32 tasks. The network was evaluated using 3 gradient steps with the same step size α = 0.4. The 20-way convolutional MAML model was trained and evaluated with 5 gradient steps with step size α = 0.1. During training, the meta batch-size was set to 16 tasks. For MiniImagenet, both models were trained using 5 gradient steps of size α = 0.01, and evaluated using 10 gradient steps at test time. Following Ravi & Larochelle (2017), 15 examples per class were used for evaluating the post-update meta-gradient. We used a meta batch-size of 4 and 2 tasks for 1-shot and 5-shot training respectively. All models were trained for 60000 iterations on a single NVIDIA Pascal Titan X GPU.

# C.1. Multi-task baselines

The pretraining baseline in the main text trained a single network on all tasks, which we referred to as "pretraining on all tasks"
. To evaluate the model, as with MAML, we fine-tuned this model on each test task using K examples. In the domains that we study, different tasks involve different output values for the same input. As a result, by pre-training on all tasks, the model would learn to output the average output for a particular input value. In some instances, this model may learn very little about the actual domain, and instead learn about the range of the output space.

We experimented with a multi-task method to provide a point of comparison, where instead of averaging in the output space, we averaged in the parameter space. To achieve averaging in parameter space, we sequentially trained 500 separate models on 500 tasks drawn from p(T). Each model was initialized randomly and trained on a large amount of data from its assigned task. We then took the average parameter vector across models and fine-tuned on 5 datapoints with a tuned step size. All of our experiments for this method were on the sinusoid task because of computational requirements. The error of the individual regressors was low: less than 0.02 on their respective sine waves.
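As a toy illustration of the parameter-averaging procedure described above, the following sketch substitutes a handful of 2-parameter linear sine regressors for the 500 separately trained networks; the fine-tuning step size (0.05) and step count (50) are our assumptions, not the paper's tuned values.

```python
# Toy version of the parameter-space averaging baseline (our sketch, not
# the paper's code): fit many regressors to their own sine tasks, average
# the parameter vectors, then fine-tune the average on K=5 points.
import numpy as np

rng = np.random.default_rng(1)

def features(x):
    # y = theta[0]*sin(x) + theta[1]*cos(x) exactly realizes amp*sin(x+phase)
    return np.stack([np.sin(x), np.cos(x)], axis=1)

def fit(x, y):
    # exact least-squares fit of one regressor to its own task
    return np.linalg.lstsq(features(x), y, rcond=None)[0]

# sequentially train separate models on tasks drawn from p(T)
thetas = []
for _ in range(50):
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0.0, np.pi)
    x = rng.uniform(-5, 5, 200)
    thetas.append(fit(x, amp * np.sin(x + phase)))
theta_avg = np.mean(thetas, axis=0)      # average in parameter space

# fine-tune the averaged parameter vector on K=5 points of a new task
amp, phase = 2.0, 0.3
x5 = rng.uniform(-5, 5, 5)
y5 = amp * np.sin(x5 + phase)
X5 = features(x5)
theta = theta_avg.copy()
for _ in range(50):
    # gradient step on the 5-point mean squared error
    theta -= 0.05 * 2.0 * X5.T @ (X5 @ theta - y5) / 5
```

Because the tasks are noiseless and the toy model is well-specified, each individual regressor fits its own sine wave almost exactly, mirroring the low per-task error reported above.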
# A.2. Reinforcement Learning

In all reinforcement learning experiments, the MAML policy was trained using a single gradient step with α = 0.1. During evaluation, we found that halving the learning rate after the first gradient step produced superior performance. Thus, the step size during adaptation was set to α = 0.1 for the first step, and α = 0.05 for all future steps. The step sizes for the baseline methods were manually tuned for each domain. In the 2D navigation task, we used a meta batch size of 20; in the locomotion problems, we used a meta batch size of 40 tasks. The MAML models were trained for up to 500 meta-iterations, and the model with the best average return during training was used for evaluation. For the ant goal velocity task, we added a positive reward bonus at each timestep to prevent the ant from ending the episode.

We tried three variants of this set-up. During training of the individual regressors, we tried using one of the following: no regularization, standard ℓ2 weight decay, and ℓ2 weight regularization to the mean parameter vector thus far of the trained regressors. The latter two variants encourage the individual models to find parsimonious solutions. When using regularization, we set the magnitude of the regularization to be as high as possible without significantly deterring performance. In our results, we refer to this approach as "multi-task". As seen in the results in Table 2, we find averaging in the parameter space (multi-task) performed worse than averaging in the output space (pretraining on all tasks). This suggests that it is difficult to find parsimonious solutions to multiple tasks when training on tasks separately, and that MAML is learning a solution that is more sophisticated than the mean optimal parameter vector.
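For concreteness, the inner/outer step-size structure these baselines are compared against can be sketched on the sinusoid task. This is our own minimal construction (a 2-parameter linear model and a first-order approximation of the meta-gradient), not the paper's implementation, and the outer step size and iteration counts are illustrative assumptions.

```python
# Toy first-order MAML for sinusoid regression (our sketch, not the
# paper's code).  Model: y = theta[0]*sin(x) + theta[1]*cos(x), so each
# task amp*sin(x + phase) is exactly realizable.
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    return np.stack([np.sin(x), np.cos(x)], axis=1)

def sample_task():
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def grad(theta, X, y):
    # gradient of mean squared error for the linear model X @ theta
    return 2.0 * X.T @ (X @ theta - y) / len(y)

theta = np.zeros(2)
alpha, beta = 0.01, 0.001        # inner and meta step sizes (assumed)
meta_batch, K = 25, 10           # tasks per meta-update, shots per task
for _ in range(1000):
    meta_grad = np.zeros_like(theta)
    for _ in range(meta_batch):
        f = sample_task()
        xa, xb = rng.uniform(-5, 5, K), rng.uniform(-5, 5, K)
        Xa, Xb = features(xa), features(xb)
        theta_i = theta - alpha * grad(theta, Xa, f(xa))  # inner adaptation
        meta_grad += grad(theta_i, Xb, f(xb))             # first-order meta-grad
    theta -= beta * meta_grad / meta_batch
```

At test time, adaptation is the same inner update: one gradient step on K samples from the new task, starting from the meta-trained initialization.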
# B. Additional Sinusoid Results

In Figure 6, we show the full quantitative results of the MAML model trained on 10-shot learning and evaluated on 5-shot, 10-shot, and 20-shot. In Figure 7, we show the qualitative performance of MAML and the pretrained baseline on randomly sampled sinusoids.

# C. Additional Comparisons

In this section, we include more thorough evaluations of our approach, including additional multi-task baselines and a comparison representative of the approach of Rei (2015).

# C.2. Context vector adaptation

Rei (2015) developed a method which learns a context vector that can be adapted online, with an application to recurrent language models. The parameters in this context vector are learned and adapted in the same way as the parameters in the MAML model. To provide a comparison to using such a context vector for meta-learning problems, we concatenated a set of free parameters z to the input x, and only allowed the gradient steps to modify z, rather than modifying the model parameters θ, as in MAML. For image inputs, z was concatenated channel-wise with the input image.
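A minimal sketch of this comparison (our toy linear model, not Rei's recurrent architecture): the context vector z is concatenated to the input, and only z receives gradient updates at adaptation time, while the model parameters stay frozen. The dimensions and learning rate are assumptions.

```python
# Context-vector adaptation sketch: adapt z only, never the weights W.
import numpy as np

rng = np.random.default_rng(2)
d_x, d_z = 3, 4
W = rng.normal(size=(d_x + d_z,))      # model parameters: never adapted
z = np.zeros(d_z)                      # adaptable context vector

def predict(x, z):
    # the context vector is concatenated to the input, as in the text
    return np.concatenate([x, z]) @ W

# one adaptation step on a single (x, y) pair, w.r.t. z only
x, y = rng.normal(size=d_x), 1.5
err = predict(x, z) - y
z = z - 0.05 * 2.0 * err * W[d_x:]     # gradient of (pred - y)^2 w.r.t. z
```

The gradient touches only the slice of W that multiplies z, so the model weights are left untouched by adaptation, exactly the restriction this baseline imposes.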
[Figure 6 plots: test-time learning curves (mean squared error vs. number of gradient steps) for k = 5, 10, and 20; curves compare MAML (ours), the pretrained baseline (step size 0.01 or 0.02), and the oracle.]
Figure 6. Quantitative sinusoid regression results showing test-time learning curves with varying numbers of K test-time samples. Each gradient step is computed using the same K examples. Note that MAML continues to improve with additional gradient steps without overfitting to the extremely small dataset during meta-testing, and achieves a loss that is substantially lower than the baseline fine-tuning approach.

Table 2. Additional multi-task baselines on the sinusoid regression domain, showing 5-shot mean squared error. The results suggest that MAML is learning a solution more sophisticated than the mean optimal parameter vector.

| num. grad steps | 1 | 5 | 10 |
|---|---|---|---|
| multi-task, no reg | 4.19 | 3.85 | 3.69 |
| multi-task, l2 reg | 7.18 | 5.69 | 5.60 |
| multi-task, reg to mean θ | 2.91 | 2.72 | 2.71 |
| pretrain on all tasks | 2.41 | 2.23 | 2.19 |
| MAML (ours) | 0.67 | 0.38 | 0.35 |
We ran this method on Omniglot and two RL domains following the same experimental protocol. We report the results in Tables 3, 4, and 5. Learning an adaptable context vector performed well on the toy pointmass problem, but sub-par on more difficult problems, likely due to a less flexible meta-optimization.

Table 3. 5-way Omniglot classification.

| | 1-shot | 5-shot |
|---|---|---|
| context vector | 94.9 ± 0.9% | 97.7 ± 0.3% |
| MAML | 98.7 ± 0.4% | 99.9 ± 0.1% |

Table 4. 2D pointmass, average return.

| num. grad steps | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| context vector | -42.42 | -13.90 | -5.17 | -3.18 |
| MAML (ours) | -40.41 | -11.68 | -3.33 | -3.23 |

Table 5. Half-cheetah forward/backward, average return.
| num. grad steps | 0 | 1 | 2 |
|---|---|---|---|
| context vector | -40.49 | -44.08 | -38.27 |
| MAML (ours) | -50.69 | -42.50 | 315.65 |

[Figure 7 panels: MAML (K=5, K=10) and pretrained baselines (K=5, step size 0.01; K=10, step size 0.02).]
Figure 7. A random sample of qualitative results from the sinusoid regression task. (Legend: ground truth; points used for gradient; pre-update; 1 gradient step; 10 gradient steps.)
# What can you do with a rock? Affordance extraction via word embeddings

Nancy Fulda, Daniel Ricks, Ben Murdoch and David Wingate
{nfulda, daniel ricks, murdoch, wingated}@byu.edu
Brigham Young University

# Abstract

Autonomous agents must often detect affordances: the set of behaviors enabled by a situation. Affordance detection is particularly helpful in domains with large action spaces, allowing the agent to prune its search space by avoiding futile behaviors. This paper presents a method for affordance extraction via word embeddings trained on a Wikipedia corpus. The resulting word vectors are treated as a common knowledge database which can be queried using linear algebra. We apply this method to a reinforcement learning agent in a text-only environment and show that affordance-based action selection improves performance most of the time. Our method increases the computational complexity of each learning step but significantly reduces the total number of steps needed. In addition, the agent's action selections begin to resemble those a human would choose.

# 1 Introduction

The physical world is filled with constraints. You can open a door, but only if it isn't locked. You can douse a fire, but only if a fire is present. You can throw a rock or drop a rock or even, under certain circumstances, converse with a rock, but you cannot traverse it, enumerate it, or impeach it.
The term affordances [Gibson, 1977] refers to the subset of possible actions which are feasible in a given situation. Human beings detect these affordances automatically, often subconsciously, but it is not uncommon for autonomous learning agents to attempt impossible or even ridiculous actions, thus wasting effort on futile behaviors.

This paper presents a method for affordance extraction based on the copiously available linguistic information in online corpora. Word embeddings trained using Wikipedia articles are treated as a common sense knowledge base that encodes (among other things) object-specific affordances. Because knowledge is represented as vectors, the knowledge base can be queried using linear algebra. This somewhat counterintuitive notion, the idea that words can be manipulated mathematically, creates a theoretical bridge between the frustrating realities of real-world systems and the immense wealth of common sense knowledge implicitly encoded in online corpora.
We apply our technique to a text-based environment and show that a priori knowledge provided by affordance extraction greatly speeds learning. Specifically, we reduce the agent's search space by (a) identifying actions afforded by a given object; and (b) discriminating objects that can be grasped, lifted and manipulated from objects which can merely be observed. Because the agent explores only those actions which "make sense", it is able to discover valuable behaviors more quickly than a comparable agent using a brute force approach. Critically, the affordance agent is demonstrably able to eliminate extraneous actions without (in most cases) discarding beneficial ones.

# 2 Related Work
Our research relies heavily on word2vec [Mikolov et al., 2013a], an algorithm that encodes individual words based on the contexts in which they tend to appear. Earlier work has shown that word vectors trained using this method contain intriguing semantic properties, including structured representations of gender and geography [Mikolov et al., 2013b; Mikolov et al., 2013c]. The (by now) archetypal example of such properties is represented by the algebraic expression vector['king'] - vector['man'] + vector['woman'] = vector['queen'].

Researchers have leveraged these properties for diverse applications including sentence- and paragraph-level encoding [Kiros et al., 2015; Le and Mikolov, 2014], image categorization [Frome et al., 2013], bidirectional retrieval [Karpathy et al., 2014], semantic segmentation [Socher et al., 2011], biomedical document retrieval [Brokos et al., 2016], and the alignment of movie scripts to their corresponding source texts [Zhu et al., 2015]. Our work is most similar to [Zhu et al., 2014]; however, rather than using a Markov Logic Network to build an explicit knowledge base, we instead rely on the semantic structure implicitly encoded in skip-grams.

Affordance detection, a topic of rising importance in our increasingly technological society, has been attempted and/or accomplished using visual characteristics [Song et al., 2011; Song et al., 2015], haptic data [Navarro et al., 2012], visuomotor simulation [Schenck et al., 2012; Schenck et al., 2016], repeated real-world experimentation [Montesano et al., 2007; Stoytchev, 2008], and knowledge base representations [Zhu et al., 2014]. In 2001, [Laird and van Lent, 2001] identified text-based adventure games as a step toward general problem solving.
The same year at AAAI, Mark DePristo and Robert Zubek unveiled a hybrid system for text-based game play [Arkin, 1998], which operated on hand-crafted logic trees combined with a secondary sensory system used for goal selection. The handcrafted logic worked well, but goal selection broke down and became cluttered due to the scale of the environment. Perhaps most notably, in 2015 [Narasimhan et al., 2015] designed an agent which passed the text output of the game through an LSTM [Hochreiter and Schmidhuber, 1997] to find a state representation, then used a DQN [Mnih et al., 2015] to select a Q-valued action. This approach appeared to work well within a small discrete environment with reliable state-action pairs, but as the complexity and alphabet of the environment grew, the clarity of Q-values broke down and left them with a negative overall reward. Our work, in contrast, is able to find meaningful state-action pairs even in complex environments with many possible actions.

# 3 Wikipedia as a Common Sense Knowledge Base
| 1703.03429#3 | 1703.03429#5 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#5 | What can you do with a rock? Affordance extraction via word embeddings | This approach appeared to work well within a small discrete environment with reliable state action pairs, but as the complexity and alphabet of the environment grew, the clarity of Q-values broke down and left them with a negative overall reward. Our work, in contrast, is able to ï¬ nd meaningful state action pairs even in complex environments with many possible actions. # 3 Wikipedia as a Common Sense Knowledge Base Google â knowledge baseâ , and youâ ll get a list of hand-crafted systems, both commercial and academic, with strict con- straints on encoding methods. These highly-structured, often node-based solutions are successful at a wide variety of tasks including topic gisting [Liu and Singh, 2004], affordance de- tection [Zhu et al., 2014] and general reasoning [Russ et al., 2011]. Traditional knowledge bases are human-interpretable, closely tied to high-level human cognitive functions, and able to encode complex relationships compactly and effectively. It may seem strange, then, to treat Wikipedia as a knowl- edge base. When compared with curated solutions like Con- ceptNet [Liu and Singh, 2004], Cyc [Matuszek et al., 2006], and WordNet [Miller, 1995], its contents are largely unstruc- tured, polluted by irrelevant data, and prone to user error. When used as a training corpus for the word2vec algorithm, however, Wikipedia becomes more tractable. The word vec- tors create a compact representation of the knowledge base and, as observed by [Bolukbasi et al., 2016a] and [Bolukbasi et al., 2016b], can even encode relationships about which a human author is not consciously cognizant. Perhaps most notably, Wikipedia and other online corpora are constantly updated in response to new developments and new human in- sight; hence, they do not require explicit maintenance. 
However: in order to leverage the semantic structure im- plicitly encoded within Wikipedia, we must be able to in- terpret the resulting word vectors. | 1703.03429#4 | 1703.03429#6 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#6 | What can you do with a rock? Affordance extraction via word embeddings | Signiï¬ cant semantic re- lationships are not readily apparent from the raw word vec- tors or from their PCA reduction. In order to extract useful information, the database must be queried through a math- ematical process. For example, in Figure 1 a dot product is used to project gendered terms onto the space deï¬ ned by vector[â kingâ ] â vector[â queenâ ] and vector[â womanâ ] â vector[â manâ ]. In such a projection, the mathematical re- lationship between the words is readily apparent. Masculine actress 02 oman échoolil ._aranathether at Fie Ege ess or stewardess gather air oud evaiterctor @randfather 0.0 duke Prince echoolboy brother stallion foster Dy Be gouboy steward «ng dock emperor vector['woman'] - vector{'manâ ] oul 02 aman =015 0.10 -0.05 0.00 005 0.10 O15 0.20 vectorf'king'] - vector['queen'] Figure 1: Word vectors projected into the space deï¬ ned by vector[â kingâ ] â vector[â queenâ ] and vector[â womanâ ] â vector[â manâ ]. In this projection, masculine and feminine terms are linearly separable. and feminine terms become linearly separable, making it easy to distinguish instances of each group. These relationships can be leveraged to detect affordances, and thus reduce the agentâ s search space. In its most general interpretation, the adjective affordant describes the set of ac- tions which are physically possible under given conditions. In the following subsections, however, we use it in the more restricted sense of actions which seem reasonable. For ex- ample, it is physically possible to eat a pencil, but it does not â make senseâ to do so. # 3.1 Verb/Noun affordances So how do you teach an algorithm what â makes senseâ ? | 1703.03429#5 | 1703.03429#7 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#7 | What can you do with a rock? Affordance extraction via word embeddings | We address this challenge through an example-based query. First we provide a canonical set of verb/noun pairs which illus- trate the relationship we desire to extract from the knowl- edge base. Then we query the database using the analogy format presented by [Mikolov et al., 2013a]. Using their ter- minology, the analogy sing:song::[?]:[x] encodes the follow- ing question: If the affordant verb for â songâ is â singâ , then what is the affordant verb for [x]? In theory, a single canonical example is sufï¬ cient to per- form a query. However, experience has shown that results are better when multiple canonical values are averaged. More formally, let W be the set of all English-language word vectors in our agentâ s vocabulary. Further, let N = {ii,,...,71;},. NC W be the set of all nouns in W and let V = {t,..., 0}, VC W be the set of all verbs in W. Let C = {(v1, 71), ..., (Gm, Tim) } represent a set of canon- ical verb/noun pairs used by our algorithm. We use C to de- fine an affordance vector @ = 1/m >> ,(@;â 7i;), which can be thought of as the distance and direction within the embedding space which encodes affordant behavior. In our experiments we used the following verb/noun pairs as our canonical set: Our algorithm vanquish duel unsheath summon wield overpower cloak impale battle behead Co-occurrence Concept Net die have cut make ï¬ ght kill move use destroy be kill parry strike slash look cool cut harm fence thrust injure Figure 2: Verb associations for the noun â swordâ using three different methods: (1) Affordance detection using word vec- tors extracted from Wikipedia, as described in this section, (2) Strict co-occurrence counts using a Wikipedia corpus and a co-occurrence window of 9 words, (3) Results generated using ConceptNetâ s CapableOf relationship. 
[â sing songâ , â drink waterâ , â read bookâ , â eat foodâ , â wear coatâ , â drive carâ , â ride horseâ , â give giftâ , â attack enemyâ , â say wordâ , â open doorâ | 1703.03429#6 | 1703.03429#8 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#8 | What can you do with a rock? Affordance extraction via word embeddings | , 'climb tree', 'heal wound', 'cure disease', 'paint picture'] We describe a verb/noun pair (v, n) as affordant to the extent that n + a ≈ v. Therefore, a typical knowledge base query would return the n closest verbs {v_1, ..., v_n} to the point n + a. For example, using the canonical set listed above and a set of pre-trained word vectors, a query using n = vector[' | 1703.03429#7 | 1703.03429#9 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#9 | What can you do with a rock? Affordance extraction via word embeddings | sword'] returns the following: ['vanquish', 'duel', 'unsheathe', 'wield', 'summon', 'behead', 'battle', 'impale', 'overpower', 'cloak'] Intuitively, this query process produces verbs which answer the question, 'What should you do with an [x]?'. For example, when word vectors are trained on a Wikipedia corpus with part-of-speech tagging, the five most affordant verbs to the noun 'horse' are {'gallop', 'ride', 'race', 'horse', 'outrun'}, and the top five results for 'king' are {'dethrone', 'disobey', 'depose', 'reign', 'abdicate'}. | 1703.03429#8 | 1703.03429#10 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#10 | What can you do with a rock? Affordance extraction via word embeddings | The resulting lists are surprisingly logical, especially given the unstructured nature of the Wikipedia corpus from which the vector embeddings were extracted. Subjective examination suggests that affordances extracted using Wikipedia are at least as relevant as those produced by more traditional methods (see Figure 2). It is worth noting that our algorithm is not resilient to polysemy, and behaves unpredictably when multiple interpretations exist for a given word. For example, the verb 'eat' is highly affordant with respect to most food items, but the twelve most salient results for 'apple' are {'apple', 'package', 'program', 'release', 'sync', 'buy', 'outsell', 'download', 'install', 'reinstall', 'uninstall', 'reboot'}. In this case, 'Apple, the software company' is more strongly represented in the corpus than 'apple, the fruit'. | 1703.03429#9 | 1703.03429#11 | 1703.03429 | [
"1611.00274"
]
|
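The analogy query described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' code: the 2-dimensional vectors below are invented for the example (real Word2vec embeddings have hundreds of dimensions and would be loaded from a trained model), and the helper names `affordance_vector` and `affordant_verbs` are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def affordance_vector(emb, canonical_pairs):
    """a = (1/m) * sum_i (v_i - n_i) over canonical verb/noun pairs."""
    dims = len(next(iter(emb.values())))
    m = len(canonical_pairs)
    return [sum(emb[v][d] - emb[n][d] for v, n in canonical_pairs) / m
            for d in range(dims)]

def affordant_verbs(emb, verbs, noun, a, top_n=2):
    """Rank candidate verbs by similarity to the point noun + a."""
    target = [emb[noun][d] + a[d] for d in range(len(a))]
    return sorted(verbs, key=lambda v: cosine(emb[v], target), reverse=True)[:top_n]

# Toy embeddings (hypothetical values, for illustration only).
emb = {
    "song": [0.0, 1.0], "sing": [1.0, 1.0],
    "food": [0.0, 2.0], "eat":  [1.0, 2.0],
    "book": [0.0, 3.0], "read": [1.0, 3.1], "drive": [5.0, 5.0],
}
a = affordance_vector(emb, [("sing", "song"), ("eat", "food")])
ranked = affordant_verbs(emb, ["read", "drive", "sing"], "book", a)
```

With real embeddings, `emb` would be lookups into a trained Word2vec model and the ranking would run over the full verb vocabulary rather than three candidates.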
1703.03429#11 | What can you do with a rock? Affordance extraction via word embeddings | 3.2 Finding a verb that matches a given noun is useful. But an autonomous agent is often confronted with more than one object at a time. How should it determine which object to manipulate, or whether any of the objects are manipulable? Pencils, pillows, and coffee mugs are easy to grasp and lift, but the same cannot be said of shadows, boulders, or holograms. [Figure 3 scatter plot omitted: nouns plotted along vector['forest'] − vector['tree'] (x-axis) and vector['mountain'] − vector['pebble'] (y-axis).] Figure 3: Word vectors projected into the space defined by vector['forest'] − vector['tree'] and vector['mountain'] − vector['pebble']. Small, manipulable objects appear in the lower-left corner of the graph. Large, abstract, or background objects appear in the upper right. An object's manipulability can be roughly estimated by measuring its location along either of the defining axes. To identify affordant nouns - i.e. nouns that can be manipulated in a meaningful way - we again utilize analogies based on canonical examples. In this section, we describe a noun as affordant to the extent that it can be pushed, pulled, grasped, transported, or transformed. | 1703.03429#10 | 1703.03429#12 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#12 | What can you do with a rock? Affordance extraction via word embeddings | After all, it would not make much sense to lift a sunset or unlock a cliff. We begin by defining canonical affordance vectors a_x = n_x1 − n_x0 and a_y = n_y1 − n_y0 for each axis of the affordant vector space. Then, for each object o_i under consideration, a pair of projections p_xi = o_i · a_x and p_yi = o_i · a_y is computed. The results of such a projection can be seen in Figure 3. This query is distinct from those described in section 3.1 because, instead of using analogies to test the relationships between nouns and verbs, we are instead locating a noun on the spectrum defined by two other words. In our experiments, we used a single canonical vector, vector[' | 1703.03429#11 | 1703.03429#13 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#13 | What can you do with a rock? Affordance extraction via word embeddings | forest'] − vector['tree'], to distinguish between nouns of different classes. Potentially affordant nouns were projected onto this line of manipulability, with the word whose projection lay closest to 'tree' being selected for further experimentation. Critical to this approach is the insight that canonical word vectors are most effective when they are thought of as exemplars rather than as descriptors. For example, vector['forest'] − vector['tree'] and vector['building'] − vector['brick'] function reasonably well as projections for identifying manipulable items. vector['big'] − vector['small' | 1703.03429#12 | 1703.03429#14 | 1703.03429 | [
"1611.00274"
]
|
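A minimal sketch of this projection-based noun selection follows. The function name `most_manipulable` and all numeric vector values are assumptions made for illustration; real embeddings would come from a trained model.

```python
def most_manipulable(emb, nouns, exemplar_small="tree", exemplar_large="forest"):
    """Project each noun onto the line vector[large] - vector[small];
    the noun whose projection lies closest to the 'small' end wins."""
    axis = [l - s for l, s in zip(emb[exemplar_large], emb[exemplar_small])]
    def proj(word):
        return sum(a * b for a, b in zip(emb[word], axis))
    return min(nouns, key=proj)

# Toy embeddings (hypothetical values, for illustration only).
emb = {
    "tree": [1.0, 0.0], "forest": [5.0, 0.0],   # canonical exemplars
    "pencil": [1.2, 0.3], "mountain": [4.8, 0.1], "shadow": [3.5, -0.2],
}
picked = most_manipulable(emb, ["pencil", "mountain", "shadow"])
```

A lower projection value places the noun nearer the 'tree' (manipulable) end of the axis, matching the lower-left region of Figure 3.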
1703.03429#14 | What can you do with a rock? Affordance extraction via word embeddings | ], on the other hand, is utterly ineffective. Algorithm 1 Noun Selection With Affordance Detection 1: state = game response to last command 2: manipulable_nouns ← {} 3: for each word w ∈ state do 4: if w is a noun then 5: 6: 7: 8: 9: end for 10: noun = a randomly selected noun from manipulable_nouns Algorithm 2 Verb Selection With Analogy Reduction 1: navigation_verbs = ['north', 'south', 'east', 'west', 'northeast', 'southeast', 'southwest', 'northwest', 'up', 'down', 'enter'] 2: manipulation_verbs = a list of 1000 most common verbs 3: essential_manipulation_verbs = ['get', 'drop', 'push', 'pull', 'open', 'close'] 4: affordant_verbs = verbs returned by Word2vec that match noun 5: affordant_verbs = affordant_verbs ∩ manipulation_verbs 6: final_verbs = navigation_verbs ∪ affordant_verbs ∪ essential_manipulation_verbs 7: verb = a randomly selected verb from final_verbs | 1703.03429#13 | 1703.03429#15 | 1703.03429 | [
"1611.00274"
]
|
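Algorithm 2 translates almost directly into Python. The function name `select_verb` and the seeded RNG are choices made for this sketch; in practice the affordant-verb list would come from the Word2vec query of Section 3.1.

```python
import random

NAVIGATION_VERBS = ["north", "south", "east", "west", "northeast", "southeast",
                    "southwest", "northwest", "up", "down", "enter"]
ESSENTIAL_VERBS = ["get", "drop", "push", "pull", "open", "close"]

def select_verb(affordant_verbs, manipulation_verbs, rng=random):
    """Algorithm 2: keep only affordant verbs that are also common
    manipulation verbs, then sample from the combined candidate set."""
    manip = set(manipulation_verbs)
    affordant = [v for v in affordant_verbs if v in manip]
    final_verbs = NAVIGATION_VERBS + affordant + ESSENTIAL_VERBS
    return rng.choice(final_verbs)

rng = random.Random(0)
# 'unsheathe' is filtered out because it is not in the manipulation list.
verb = select_verb(["wield", "unsheathe"], ["wield", "take", "run"], rng)
```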
1703.03429#15 | What can you do with a rock? Affordance extraction via word embeddings | 4 Test Environment: A World Made of Words In this paper, we test our ideas in the challenging world of text-based adventure gaming. Text-based adventure games offer an unrestricted, free-form interface: the player is presented with a block of text describing a situation, and must respond with a written phrase. Typical actions include commands such as: 'examine wallet', 'eat apple', or 'light campfire with matches'. The game engine parses this response and produces a new block of text. The resulting interactions, although syntactically simple, provide a fertile research environment for natural language processing and human/computer interaction. Game players must identify objects that are manipulable and apply appropriate actions to those objects in order to make progress. In these games, the learning agent faces a frustrating dichotomy: its action set must be large enough to accommodate any situation it encounters, and yet each additional action increases the size of its search space. A brute force approach to such scenarios is frequently futile, and yet factorization, function approximation, and other search space reduction techniques bring the risk of data loss. We desire an agent that is able to clearly perceive all its options, and yet applies only that subset which is likely to produce results. In other words, we want an agent that explores the game world the same way a human does: by trying only those actions that 'make sense'. In the following sections, we show that affordance-based action selection provides a meaningful first step towards this goal. 4.1 Learning algorithm Our agent utilizes a variant of Q-learning [Watkins and Dayan, 1992], a reinforcement learning algorithm which attempts to maximize expected discounted reward. Q-values are updated according to the equation ΔQ(s, a) = α(R(s, a) + γ max_a' Q(s', a') − Q(s, a)) (1) where Q(s, a) is the expected reward for performing action a in observed state s, α is the learning rate, γ is the discount Figure 4: Sample text from the adventure game Zork. Player responses follow a single angle bracket. | 1703.03429#14 | 1703.03429#16 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#16 | What can you do with a rock? Affordance extraction via word embeddings | factor, and s' is the new state observation after performing action a. Because our test environments are typically deterministic with a high percentage of consumable rewards, we modify this algorithm slightly, setting α = 1 and constraining Q-value updates such that Q'(s, a) = max(Q(s, a), Q(s, a) + ΔQ(s, a)) (2) This adaptation encourages the agent to retain behaviors that have produced a reward at least once, even if the reward fails to manifest on subsequent attempts. The goal is to prevent the agent from 'unlearning' behaviors that are no longer effective during the current training epoch, but which will be essential in order to score points during the next round of play. The agent's state representation is encoded as a hash of the text provided by the game engine. Actions are comprised of verb/object pairs: a = v + ' ' + o, v ∈ V, o ∈ O (3) where V is the set of all English-language verbs and O is the set of all English-language nouns. | 1703.03429#15 | 1703.03429#17 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#17 | What can you do with a rock? Affordance extraction via word embeddings | To enable the agent to distinguish between state transitions and merely informational feedback, the agent executes a 'look' command every second iteration and assumes that the resulting game text represents its new state. Some games append a summary of actions taken and points earned in response to each 'look' command. To prevent this from obfuscating the state space, we stripped all numerals from the game text prior to hashing. Given that the English language contains at least 20,000 verbs and 100,000 nouns in active use, a naive application of Q-learning is intractable. Some form of action-space reduction must be used. For our baseline comparison, we use an agent with a vocabulary consisting of the 1000 most common verbs in Wikipedia, an 11-word navigation list and a 6-word essential manipulation list as depicted in Algorithm 2. The navigation list contains words which, by convention, are used to navigate through text-based games. The essential manipulation list contains words which, again by convention, are generally applicable to all in-game objects. | 1703.03429#16 | 1703.03429#18 | 1703.03429 | [
"1611.00274"
]
|
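Equations (1) and (2) amount to a Q-update that can only increase stored values. The sketch below is illustrative: the function name, the dictionary-of-tuples Q table, and the γ value are assumptions, while α = 1 follows the text.

```python
def q_update(Q, s, a, reward, s_next, actions, alpha=1.0, gamma=0.5):
    """Eq. (1) with the Eq. (2) constraint: Q-values may only increase,
    so behaviors that produced a reward once are never unlearned."""
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in actions), default=0.0)
    old = Q.get((s, a), 0.0)
    delta = alpha * (reward + gamma * best_next - old)
    Q[(s, a)] = max(old, old + delta)
    return Q[(s, a)]

Q = {}
actions = ["open door", "eat food"]
q_update(Q, "hall", "open door", 5.0, "room", actions)  # reward seen once
q_update(Q, "hall", "open door", 0.0, "room", actions)  # reward gone: value kept
```

On the second call the vanilla update (1) would lower the value back toward zero; the max in (2) keeps it at its peak.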
1703.03429#18 | What can you do with a rock? Affordance extraction via word embeddings | The baseline agent does not use a fixed noun vocabulary. Instead, it extracts nouns from the game text using part-of-speech tags. To facilitate game interactions, the baseline agent augments its noun list using adjectives that precede them. For example, if the game text consisted of 'You see a red pill and a blue pill', then the agent's noun list for that [Figure 5 plot omitted: per-game learning curves, grouped into superior, comparable, and inferior performance; game panels include detective, cavetrip, curses, mansion, break-in, omniquest, zenon, parallel, reverb, spirit, ztuu, candy, zork1, and tryst205; legend: baseline agent, verb space reduction, object space reduction, verb and object reduction.] Figure 5: | 1703.03429#17 | 1703.03429#19 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#19 | What can you do with a rock? Affordance extraction via word embeddings | Learning trajectories for sixteen Z-machine games. Agents played each game 1000 times, with 1000 game steps during each trial. No agent received any reward on the remaining 32 games. 10 data runs were averaged to create this plot. state would be ['pill', 'red pill', 'blue pill']. (And its next action is hopefully 'swallow red pill'). In Sections 5.1 and 5.2 the baseline agent is contrasted with an agent using affordance extraction to reduce its manipulation list from 1000 verbs to a mere 30 verbs for each state, and to reduce its object list to a maximum of 15 nouns per state. We compare our approach to other search space reduction techniques and show that the a priori knowledge provided by affordance extraction enables the agent to achieve results which cannot be paralleled through brute force methods. All agents used epsilon-greedy exploration with a decaying epsilon. The purpose of our research was to test the value of affordance-based search space reduction. Therefore, we did not add augmentations to address some of the more challenging aspects of text-based adventure games. | 1703.03429#18 | 1703.03429#20 | 1703.03429 | [
"1611.00274"
]
|
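Epsilon-greedy exploration with a decaying epsilon can be sketched as below. The exact schedule (start value, decay rate, floor) is not given in the text, so the numbers here are assumptions, and the function names are illustrative.

```python
import random

def epsilon_greedy(Q, state, actions, epsilon, rng=random):
    """With probability epsilon explore randomly; otherwise exploit
    the action with the highest known Q-value for this state."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def decayed_epsilon(epoch, start=1.0, decay=0.995, floor=0.05):
    """Multiplicative per-epoch decay, clipped at a floor (assumed schedule)."""
    return max(floor, start * decay ** epoch)

Q = {("hall", "open door"): 1.0}
# epsilon = 0 forces the greedy branch, so the choice is deterministic here.
chosen = epsilon_greedy(Q, "hall", ["open door", "eat food"], epsilon=0.0)
```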
1703.03429#20 | What can you do with a rock? Affordance extraction via word embeddings | Specifically, the agent maintained no representation of items carried in inventory or of the game score achieved thus far. The agent was also not given the ability to construct prepositional commands such as 'put book on shelf' or 'slay dragon with sword'. Our affordance-based search space reduction algorithms enabled the agent to score points on 16/50 games, with a peak performance (expressed as a percentage of maximum game score) of 23.40% for verb space reduction, 4.33% for object space reduction, and 31.45% when both methods were combined. The baseline agent (see Sec. 4.1) scored points on 12/50 games, with a peak performance of 4.45%. (Peak performance is defined as the maximum score achieved over all epochs, a metric that expresses the agent's ability to comb through the search space and discover areas of high reward.) Two games experienced termination errors and were excluded from our subsequent analysis; however, our reduction methods outperformed the baseline in both peak performance and average reward in the discarded partial results. Figures 5 and 7 show the performance of our reduction techniques when compared to the baseline. Affordance-based search space reduction improved overall performance on 12/16 games, and decreased performance on only 1 game. 5 Results We tested our agent on a suite of 50 text-based adventure games compatible with Infocom's Z-machine. These games represent a wide variety of situations, ranging from business scenarios like 'Detective' to complex fictional worlds like 'Zork: The Underground Empire'. Significantly, the games provide little or no information about the agent's goals, or actions that might provide reward. During training, the agent interacted with the game engine for 1000 epochs, with 1000 training steps in each epoch.
On each game step, the agent received a positive reward corresponding to the change in game score. At the end of each epoch the game was restarted and the game score reset, but the agent retained its learned Q-values. | 1703.03429#19 | 1703.03429#21 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#21 | What can you do with a rock? Affordance extraction via word embeddings | Examination of the 32 games in which no agent scored points (and which are correspondingly not depicted in Figures 5 and 7) revealed three prevalent failure modes: (1) The game required prepositional commands such as 'look at machine' or 'give dagger to wizard', (2) The game provided points only after an unusually complex sequence of events, (3) The game required the user to infer the proper term for manipulable objects. (For example, the game might describe 'something shiny' at the bottom of a lake, but required the agent to 'get shiny object' | 1703.03429#20 | 1703.03429#22 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#22 | What can you do with a rock? Affordance extraction via word embeddings | .) Our test framework was not designed to address these issues, and hence did not score points on those games. A fourth failure mode (4) might be the absence of a game-critical verb within the 1000-word manipulation list. However, this did not occur in our coarse examination of games that failed. Affordant selection / Random selection: decorate glass, open window, add table, generate quantity, ring window, weld glass, travel passage, climb staircase, jump table. Figure 6: Sample exploration actions produced by a Q-learner with and without affordance detection. The random agent used nouns extracted from game text and a verb list comprising the 200 most common verbs in Wikipedia. | 1703.03429#21 | 1703.03429#23 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#23 | What can you do with a rock? Affordance extraction via word embeddings | 5.1 Alternate reduction methods We compared our affordance-based reduction technique with four other approaches that seemed intuitively applicable to the test domain. Results are shown in Figure 7. Intrinsic rewards: This approach guides the agent's exploration of the search space by allotting a small reward each time a new state is attained. We call these awards intrinsic because they are tied to the agent's assessment of its progress rather than to external events. Random reduction: When applying search space reductions one must always ask: 'Did improvements result from my specific choice of reduced space, or would any reduction be equally effective?' We address this question by randomly selecting 30 manipulation verbs to use during each epoch. ConceptNet reduction: In this approach we used ConceptNet's CapableOf relation to obtain a list of verbs relevant to the current object. We then reduced the agent's manipulation list to include only words that were also in ConceptNet's word list (effectively taking the intersection of the two lists). Co-occurrence reduction: In this method, we populated a co-occurrence dictionary using the 1000 most common verbs and 30,000 most common nouns in Wikipedia. The dictionary tracked the number of times each verb/noun pair occurred within a 9-word radius. | 1703.03429#22 | 1703.03429#24 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#24 | What can you do with a rock? Affordance extraction via word embeddings | During game play, the agent's manipulation list was reduced to include only words which exceeded a low threshold (co-occurrences > 3). Figure 7 shows the performance of these four algorithms, along with a baseline learner using a 1000-word manipulation list. Affordance-based verb selection improved performance in most games, but the other reduction techniques fell prey to a classic danger: they pruned precisely those actions which were essential to obtain reward. # 5.2 Fixed-length vocabularies vs. Free-form learning | 1703.03429#23 | 1703.03429#25 | 1703.03429 | [
"1611.00274"
]
|
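The co-occurrence baseline can be sketched as follows. The function names and the toy corpus are illustrative assumptions; the window radius of 9 and the count threshold of 3 come from the text.

```python
from collections import Counter

def cooccurrence_counts(corpus_tokens, verbs, nouns, radius=9):
    """Count verb/noun pairs appearing within `radius` words of each other."""
    verbs, nouns = set(verbs), set(nouns)
    counts = Counter()
    for i, tok in enumerate(corpus_tokens):
        if tok in verbs:
            window = corpus_tokens[max(0, i - radius): i + radius + 1]
            for other in window:
                if other in nouns:
                    counts[(tok, other)] += 1
    return counts

def reduced_verbs(counts, noun, verbs, threshold=3):
    """Keep only verbs whose co-occurrence count with `noun` exceeds the threshold."""
    return [v for v in verbs if counts[(v, noun)] > threshold]

tokens = ("ride a horse then ride the horse and ride that horse "
          "ride my horse paint a picture").split()
counts = cooccurrence_counts(tokens, ["ride", "paint"], ["horse", "picture"])
kept = reduced_verbs(counts, "horse", ["ride", "paint"])
```

Here 'ride' clears the threshold for 'horse' while 'paint' does not, so the manipulation list for 'horse' shrinks accordingly.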
1703.03429#25 | What can you do with a rock? Affordance extraction via word embeddings | An interesting question arises from our research. What if, rather than beginning with a 1000-word vocabulary, the agent was free to search the entire English-language verb space? A traditional learning agent could not do this: the space of possible verbs is too large. However, the Wikipedia knowledge base opens new opportunities. Using the action selec- [Figure 7 bar chart omitted: normalized average score (0.0 to 1.0) for the games candy, detective, omniquest, and zenon under each reduction technique.] Figure 7: Five verb space reduction techniques compared over 100 exploration epochs. Average of 5 data runs. Results were normalized for each game based on the maximum reward achieved by any agent. | 1703.03429#24 | 1703.03429#26 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#26 | What can you do with a rock? Affordance extraction via word embeddings | tion mechanism described in Section 4.1, we allowed the agent to construct its own manipulation list for each state (see Section 3.1). The top 15 responses were unioned with the agent's navigation and essential manipulation lists, with actions selected randomly from that set. A sampling of the agent's behavior is displayed in Figure 6, along with comparable action selections from the baseline agent described in Section 4.1. The free-form learner is able to produce actions that seem, not only reasonable, but also rather inventive when considered in the context of the game environment. We believe that further research in this direction may enable the development of one-shot learning for text-based adventure games. 6 Conclusion The common sense knowledge implicitly encoded within Wikipedia opens new opportunities for autonomous agents. In this paper we have shown that previously intractable search spaces can be efficiently navigated when word embeddings are used to identify context-dependent affordances. In the domain of text-based adventure games, this approach is superior to several other intuitive methods. Our initial experiments have been restricted to text-based environments, but the underlying principles apply to any domain in which mappings can be formed between words and objects. Steady advances in object recognition and semantic segmentation, combined with improved precision in robotic systems, suggests that our methods are applicable to systems including self-driving cars, domestic robots, and UAVs. 7 Acknowledgements Our experiments were run using Autoplay: a learning environment for interactive fiction (https://github.com/danielricks/autoplay). We thank Nvidia, the Center for Unmanned Aircraft Systems, and Analog Devices, Inc. for their generous support. # References [Arkin, 1998] Ronald C. Arkin. Behavior-Based Robotics. MIT Press, 1998.
[Bolukbasi et al., 2016a] Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. | 1703.03429#25 | 1703.03429#27 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#27 | What can you do with a rock? Affordance extraction via word embeddings | Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, NIPS, pages 4349–4357. Curran Associates, Inc., 2016. [Bolukbasi et al., 2016b] Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. Quantifying and reducing stereotypes in word embeddings. CoRR, abs/1606.06121, 2016. | 1703.03429#26 | 1703.03429#28 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#28 | What can you do with a rock? Affordance extraction via word embeddings | Prodromos Malakasiotis, and Ion Androutsopoulos. Using centroids of word embeddings and word mover's distance for biomedical document retrieval in question answering. CoRR, abs/1608.03905, 2016. [Frome et al., 2013] Andrea Frome, Greg S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, and Tomas Mikolov. Devise: A deep visual-semantic embedding model. In NIPS, 2013. [Gibson, 1977] James J. Gibson. The theory of affordances. In Robert Shaw and John Bransford, editors, Perceiving, Acting, and Knowing. 1977. [Hochreiter and Schmidhuber, 1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. | 1703.03429#27 | 1703.03429#29 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#29 | What can you do with a rock? Affordance extraction via word embeddings | and Li Fei-Fei. Deep fragment embeddings for bidirectional image sentence mapping. In arXiv:1406.5679, 2014. [Kiros et al., 2015] Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Skip-thought vectors. CoRR, abs/1506.06726, 2015. [Laird and van Lent, 2001] John E. Laird and Michael van Lent. Human-level AI's killer application: Interactive computer games. AI Magazine, 22(2):15–26, 2001. [Le and Mikolov, 2014] Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. CoRR, abs/1405.4053, 2014. [Liu and Singh, 2004] H. Liu and P. Singh. | 1703.03429#28 | 1703.03429#30 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#30 | What can you do with a rock? Affordance extraction via word embeddings | Conceptnet – a practical commonsense reasoning tool-kit. BT Technology Journal, 22(4):211–226, 2004. [Matuszek et al., 2006] Cynthia Matuszek, John Cabral, Michael Witbrock, and John Deoliveira. An introduction to the syntax and content of cyc. In Proceedings of the 2006 AAAI Spring Symposium on Formalizing and Compiling Background Knowledge and Its Applications to Knowledge Representation and Question Answering, pages 44–49, 2006. [Mikolov et al., 2013a] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. | 1703.03429#29 | 1703.03429#31 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#31 | What can you do with a rock? Affordance extraction via word embeddings | Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013. [Mikolov et al., 2013b] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, NIPS, pages 3111–3119. Curran Associates, Inc., 2013. [Mikolov et al., 2013c] Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. Association for Computational Linguistics, May 2013. [Miller, 1995] George A. Miller. | 1703.03429#30 | 1703.03429#32 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#32 | What can you do with a rock? Affordance extraction via word embeddings | Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41, November 1995. [Mnih et al., 2015] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. [Montesano et al., 2007] L. Montesano, M. Lopes, A. Bernardino, and J. Santos-Victor. | 1703.03429#31 | 1703.03429#33 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#33 | What can you do with a rock? Affordance extraction via word embeddings | Modeling affordances using bayesian networks. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 4102–4107, Oct 2007. [Narasimhan et al., 2015] Karthik Narasimhan, Tejas D. Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. CoRR, abs/1506.08941, 2015. [Navarro et al., 2012] Stefan Escaida Navarro, Nicolas Gorges, Heinz Wörn, Julian Schill, Tamim Asfour, and Rüdiger Dillmann. Haptic object recognition for multi-fingered robot hands. In 2012 IEEE Haptics Symposium (HAPTICS), pages 497–502. IEEE, 2012. [Russ et al., 2011] Thomas A Russ, Cartic Ramakrishnan, Eduard H Hovy, Mihail Bota, and Gully APC Burns. Knowledge engineering tools for reasoning with scientific observations and interpretations: a neural connectivity use case. BMC bioinformatics, 12(1):351, 2011. [Schenck et al., 2012] Wolfram Schenck, Hendrik Hasenbein, and Ralf Möller. Detecting affordances by mental imagery. In Alessandro G. Di Nuovo, Vivian M. de la Cruz, and Davide Marocco, editors, Proceedings of the SAB Workshop on 'Artificial Mental Imagery', pages 15–18, Odense (Danmark), 2012. [Schenck et al., 2016] Wolfram Schenck, Hendrik Hasenbein, and Ralf Möller. Detecting affordances by visuomotor simulation. arXiv preprint arXiv:1611.00274, 2016. [Socher et al., 2011] Richard Socher, Cliff C. Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and natural language with recursive neural networks. ICML, pages 129–136, 2011. | 1703.03429#32 | 1703.03429#34 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#34 | What can you do with a rock? Affordance extraction via word embeddings | [Song et al., 2011] Hyun Oh Song, Mario Fritz, Chunhui Gu, and Trevor Darrell. Visual grasp affordances from appearance-based cues. In ICCV Workshops, pages 998–1005. IEEE, 2011. [Song et al., 2015] Hyun Oh Song, Mario Fritz, Daniel Goehring, and Trevor Darrell. Learning to detect visual grasp affordance. In IEEE Transactions on Automation Science and Engineering (TASE), 2015. [Stoytchev, 2008] Alexander Stoytchev. Learning the Affordances of Tools Using a Behavior-Grounded Approach, pages 140–158. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008. [Watkins and Dayan, 1992] Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992. [Zhu et al., 2014] Yuke Zhu, Alireza Fathi, and Li Fei-Fei. Reasoning about object affordances in a knowledge base representation. In ECCV, 2014. [Zhu et al., 2015] Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. | 1703.03429#33 | 1703.03429#35 | 1703.03429 | [
"1611.00274"
]
|
1703.03429#35 | What can you do with a rock? Affordance extraction via word embeddings | Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. CoRR, abs/1506.06724, 2015. | 1703.03429#34 | 1703.03429 | [
"1611.00274"
]
|
|
1703.01780#0 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | arXiv:1703.01780v6 [cs.NE] 16 Apr 2018 # Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results Antti Tarvainen The Curious AI Company and Aalto University [email protected] Harri Valpola The Curious AI Company [email protected] # Abstract | 1703.01780#1 | 1703.01780 | [
"1706.04599"
]
|
|
1703.01780#1 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%. | 1703.01780#0 | 1703.01780#2 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#2 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | # Introduction Deep learning has seen tremendous success in areas such as image and speech recognition. In order to learn useful abstractions, deep learning models require a large number of parameters, thus making them prone to over-fitting (Figure 1a). Moreover, adding high-quality labels to training data manually is often expensive. Therefore, it is desirable to use regularization methods that exploit unlabeled data effectively to reduce over-fitting in semi-supervised learning. When a percept is changed slightly, a human typically still considers it to be the same object. | 1703.01780#1 | 1703.01780#3 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#3 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Correspondingly, a classification model should favor functions that give consistent output for similar data points. One approach for achieving this is to add noise to the input of the model. To enable the model to learn more abstract invariances, the noise may be added to intermediate representations, an insight that has motivated many regularization techniques, such as Dropout [28]. Rather than minimizing the classification cost at the zero-dimensional data points of the input space, the regularized model minimizes the cost on a manifold around each data point, thus pushing decision boundaries away from the labeled data points (Figure 1b). Since the classification cost is undefi | 1703.01780#2 | 1703.01780#4 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#4 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | ned for unlabeled examples, the noise regularization by itself does not aid in semi-supervised learning. To overcome this, the Γ model [21] evaluates each data point with and without noise, and then applies a consistency cost between the two predictions. In this case, the model assumes a dual role as a teacher and a student. As a student, it learns as before; as a teacher, it generates targets, which are then used by itself as a student for learning. Since the model itself generates targets, they may very well be incorrect. If too much weight is given to the generated targets, the cost of inconsistency outweighs that of misclassification, preventing the learning of new | 1703.01780#3 | 1703.01780#5 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#5 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Figure 1: A sketch of a binary classification task with two labeled examples (large blue dots) and one unlabeled example, demonstrating how the choice of the unlabeled target (black circle) affects the fitted function (gray curve). (a) A model with no regularization is free to fit any function that predicts the labeled training examples well. (b) A model trained with noisy labeled data (small dots) learns to give consistent predictions around labeled data points. (c) Consistency to noise around unlabeled examples provides additional smoothing. For the clarity of illustration, the teacher model (gray curve) is first fitted to the labeled examples, and then left unchanged during the training of the student model. Also for clarity, we will omit the small dots in figures d and e. (d) Noise on the teacher model reduces the bias of the targets without additional training. The expected direction of stochastic gradient descent is towards the mean (large blue circle) of individual noisy targets (small blue circles). (e) An ensemble of models gives an even better expected target. Both Temporal Ensembling and the Mean Teacher method use this approach. | 1703.01780#4 | 1703.01780#6 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#6 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | information. In effect, the model suffers from confirmation bias (Figure 1c), a hazard that can be mitigated by improving the quality of targets. There are at least two ways to improve the target quality. One approach is to choose the perturbation of the representations carefully instead of merely applying additive or multiplicative noise. Another approach is to choose the teacher model carefully instead of merely replicating the student model. Concurrently to our research, Miyato et al. [16] have taken the first approach and shown that Virtual Adversarial Training can yield impressive results. We take the second approach and will show that it too provides significant benefits. To our understanding, these two approaches are compatible, and their combination may produce even better outcomes. However, the analysis of their combined effects is outside the scope of this paper. Our goal, then, is to form a better teacher model from the student model without additional training. | 1703.01780#5 | 1703.01780#7 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#7 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | As the first step, consider that the softmax output of a model does not usually provide accurate predictions outside training data. This can be partly alleviated by adding noise to the model at inference time [4], and consequently a noisy teacher can yield more accurate targets (Figure 1d). This approach was used in Pseudo-Ensemble Agreement [2] and has lately been shown to work well on semi-supervised image classification [13, 23]. Laine & Aila [13] named the method the Π model; we will use this name for it and their version of it as the basis of our experiments. The Π model can be further improved by Temporal Ensembling [13], which maintains an exponential moving average (EMA) prediction for each of the training examples. At each training step, all the EMA predictions of the examples in that minibatch are updated based on the new predictions. Consequently, the EMA prediction of each example is formed by an ensemble of the model's current version and those earlier versions that evaluated the same example. This ensembling improves the quality of the predictions, and using them as the teacher predictions improves results. However, since each target is updated only once per epoch, the learned information is incorporated into the training process at a slow pace. The larger the dataset, the longer the span of the updates, and in the case of on-line learning, it is unclear how Temporal Ensembling can be used at all. (One could evaluate all the targets periodically more than once per epoch, but keeping the evaluation span constant would require O(n²) evaluations per epoch where n is the number of training examples.) | 1703.01780#6 | 1703.01780#8 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#8 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | # 2 Mean Teacher To overcome the limitations of Temporal Ensembling, we propose averaging model weights instead of predictions. Since the teacher model is an average of consecutive student models, we call this the Mean Teacher method (Figure 2). Averaging model weights over training steps tends to produce a more accurate model than using the final weights directly [19]. We can take advantage of this during training to construct better targets. Instead of sharing the weights with the student model, the teacher model uses the EMA weights of the student model. Now it can aggregate information after every step instead of every epoch. In addition, since the weight averages improve all layer outputs, not just the top output, the target model has better intermediate representations. Figure 2: The Mean Teacher method. The figure depicts a training batch with a single labeled example. Both the student and the teacher model evaluate the input applying noise (η, η′) within their computation. The softmax output of the student model is compared with the one-hot label using classification cost and with the teacher output using consistency cost. After the weights of the student model have been updated with gradient descent, the teacher model weights are updated as an exponential moving average of the student weights. Both model outputs can be used for prediction, but at the end of the training the teacher prediction is more likely to be correct. A training step with an unlabeled example would be similar, except no classification cost would be applied.
These aspects lead to two practical advantages over Temporal Ensembling: First, the more accurate target labels lead to a faster feedback loop between the student and the teacher models, resulting in better test accuracy. Second, the approach scales to large datasets and on-line learning. More formally, we define the consistency cost J as the expected distance between the prediction of the student model (with weights θ and noise η) and the prediction of the teacher model (with weights θ′ | 1703.01780#7 | 1703.01780#9 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#9 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | and noise η′): J(θ) = E_{x,η,η′} [ ‖f(x, θ′, η′) − f(x, θ, η)‖² ] The difference between the Π model, Temporal Ensembling, and Mean Teacher is how the teacher predictions are generated. Whereas the Π model uses θ′ = θ, and Temporal Ensembling approximates f(x, θ′, η′) with a weighted average of successive predictions, we define θ′ at training step t as the EMA of successive θ weights: θ′_t = α θ′_{t−1} + (1 − α) θ_t, where α is a smoothing coefficient hyperparameter. An additional difference between the three algorithms is that the Π model applies training to θ′ whereas Temporal Ensembling and Mean Teacher treat it as a constant with regards to optimization. We can approximate the consistency cost function J by sampling noise η, η′ at each training step with stochastic gradient descent. Following Laine & Aila [13], we use mean squared error (MSE) as the consistency cost in most of our experiments. | 1703.01780#8 | 1703.01780#10 | 1703.01780 | [
"1706.04599"
]
|
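The consistency cost and the EMA update defined in the chunk above can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' code: the function names and the batch-mean reduction are our own choices.

```python
import numpy as np

def consistency_cost(student_pred, teacher_pred):
    # J(theta) = || f(x, theta', eta') - f(x, theta, eta) ||^2,
    # computed per example and averaged over the batch.
    return np.mean(np.sum((student_pred - teacher_pred) ** 2, axis=-1))

def ema_update(teacher_w, student_w, alpha=0.999):
    # theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t
    # The teacher weights are treated as a constant for optimization;
    # only the student is trained by gradient descent.
    return alpha * teacher_w + (1 - alpha) * student_w
```

At each training step the student takes a gradient step on the total cost, and the teacher weights are then refreshed with `ema_update`, so the teacher aggregates information after every step rather than every epoch.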
1703.01780#10 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Table 1: Error rate percentage on SVHN over 10 runs (4 runs when using all labels). We use exponential moving average weights in the evaluation of all our models. All the methods use a similar 13-layer ConvNet architecture. See Table 5 in the Appendix for results without input augmentation. (All columns use the full 73257-image training set.)
                          250 labels     500 labels     1000 labels    73257 labels
GAN [25]                  –              18.44 ± 4.8    8.11 ± 1.3     –
Π model [13]              –              6.65 ± 0.53    4.82 ± 0.17    2.54 ± 0.04
Temporal Ensembling [13]  –              5.12 ± 0.13    4.42 ± 0.16    2.74 ± 0.06
VAT+EntMin [16]           –              –              3.86           –
Supervised-only           27.77 ± 3.18   16.88 ± 1.30   12.32 ± 0.95   2.75 ± 0.10
Π model (ours)            9.69 ± 0.92    6.83 ± 0.66    4.95 ± 0.26    2.50 ± 0.07
Mean Teacher              4.35 ± 0.50    4.18 ± 0.27    3.95 ± 0.19    2.50 ± 0.05
Table 2: Error rate percentage on CIFAR-10 over 10 runs (4 runs when using all labels). | 1703.01780#9 | 1703.01780#11 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#11 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | (All columns use the full 50000-image training set.)
                          1000 labels    2000 labels    4000 labels    50000 labels
GAN [25]                  –              –              18.63 ± 2.32   –
Π model [13]              –              –              12.36 ± 0.31   5.56 ± 0.10
Temporal Ensembling [13]  –              –              12.16 ± 0.31   5.60 ± 0.10
VAT+EntMin [16]           –              –              10.55          –
Supervised-only           46.43 ± 1.21   33.94 ± 0.73   20.66 ± 0.57   5.82 ± 0.15
Π model (ours)            27.36 ± 1.20   18.02 ± 0.60   13.20 ± 0.27   6.06 ± 0.11
Mean Teacher              21.55 ± 1.48   15.73 ± 0.31   12.31 ± 0.28   5.94 ± 0.15
# 3 Experiments To test our hypotheses, we first replicated the Π model [13] in TensorFlow [1] as our baseline. We then modified the baseline model to use weight-averaged consistency targets. The model architecture is a 13-layer convolutional neural network (ConvNet) with three types of noise: random translations and horizontal flips of the input images, Gaussian noise on the input layer, and dropout applied within the network. We use mean squared error as the consistency cost and ramp up its weight from 0 to its final value during the first 80 epochs. The details of the model and the training procedure are described in Appendix B.1. # 3.1 Comparison to other methods on SVHN and CIFAR-10 We ran experiments using the Street View House Numbers (SVHN) and CIFAR-10 benchmarks [17]. | 1703.01780#10 | 1703.01780#12 | 1703.01780 | [
"1706.04599"
]
|
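The consistency-weight ramp-up mentioned above can be sketched as a schedule function. The sigmoid-shaped curve exp(-5(1 - t)²) is the ramp-up popularized by Laine & Aila; whether this exact shape and the maximum weight used here match the authors' Appendix B.1 is an assumption, and the function name is ours.

```python
import math

def consistency_weight(epoch, ramp_up_epochs=80, max_weight=100.0):
    # Ramp the consistency cost weight from 0 to max_weight during the
    # first ramp_up_epochs epochs. The exp(-5 * (1 - t)^2) shape is the
    # common sigmoid ramp-up (assumed here); max_weight is illustrative.
    if epoch >= ramp_up_epochs:
        return max_weight
    t = epoch / ramp_up_epochs
    return max_weight * math.exp(-5.0 * (1.0 - t) ** 2)
```

The schedule starts near zero, so early in training the (still unreliable) teacher targets barely influence the student, and it saturates at the full weight once the ramp-up period ends.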
1703.01780#12 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Both datasets contain 32x32 pixel RGB images belonging to ten different classes. In SVHN, each example is a close-up of a house number, and the class represents the identity of the digit at the center of the image. In CIFAR-10, each example is a natural image belonging to a class such as horses, cats, cars and airplanes. SVHN consists of 73257 training samples and 26032 test samples. CIFAR-10 consists of 50000 training samples and 10000 test samples. Tables 1 and 2 compare the results against recent state-of-the-art methods. All the methods in the comparison use a similar 13-layer ConvNet architecture. Mean Teacher improves test accuracy over the Π model and Temporal Ensembling on semi-supervised SVHN tasks. Mean Teacher also improves results on CIFAR-10 over our baseline Π model. The recently published version of Virtual Adversarial Training by Miyato et al. [16] performs even better than Mean Teacher on the 1000-label SVHN and the 4000-label CIFAR-10. As discussed in the introduction, VAT and Mean Teacher are complementary approaches. Their combination may yield better accuracy than either of them alone, but that investigation is beyond the scope of this paper. | 1703.01780#11 | 1703.01780#13 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#13 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Table 3: Error percentage over 10 runs on SVHN with extra unlabeled training data.
                  500 labels,     500 labels,     500 labels,
                  73257 images    173257 images   573257 images
Π model (ours)    6.83 ± 0.66     4.49 ± 0.27     3.26 ± 0.14
Mean Teacher      4.18 ± 0.27     3.02 ± 0.16     2.46 ± 0.06
[Figure 3 panel titles: 73257 images and labels; 73257 images and 500 labels; 573257 images and 500 labels] | 1703.01780#12 | 1703.01780#14 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#14 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | [Figure 3 plot area: log-scale classification cost (top) and classification error (bottom) over 0k–100k training steps; legend entries: Π model (test set), Mean Teacher (student, test set), Π model (training), Mean Teacher (student, training), Π model (EMA), Mean Teacher (student), Mean Teacher (teacher)] | 1703.01780#13 | 1703.01780#15 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#15 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Figure 3: Smoothed classification cost (top) and classification error (bottom) of Mean Teacher and our baseline Π model on SVHN over the first 100000 training steps. In the upper row, the training classification costs are measured using only labeled data. # 3.2 SVHN with extra unlabeled data Above, we suggested that Mean Teacher scales well to large datasets and on-line learning. In addition, the SVHN and CIFAR-10 results indicate that it uses unlabeled examples effi | 1703.01780#14 | 1703.01780#16 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#16 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | ciently. Therefore, we wanted to test whether we have reached the limits of our approach. Besides the primary training data, SVHN also includes an extra dataset of 531131 examples. We picked 500 samples from the primary training set as our labeled training examples. We used the rest of the primary training set together with the extra training set as unlabeled examples. We ran experiments with Mean Teacher and our baseline Π model, and used either 0, 100000 or 500000 extra examples. Table 3 shows the results. # 3.3 Analysis of the training curves The training curves in Figure 3 help us understand the effects of using Mean Teacher. As expected, the EMA-weighted models (blue and dark gray curves in the bottom row) give more accurate predictions than the bare student models (orange and light gray) after an initial period. Using the EMA-weighted model as the teacher improves results in the semi-supervised settings. There appears to be a virtuous feedback cycle of the teacher (blue curve) improving the student (orange) via the consistency cost, and the student improving the teacher via exponential moving averaging. If this feedback cycle is detached, the learning is slower, and the model starts to overfi | 1703.01780#15 | 1703.01780#17 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#17 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | t earlier (dark gray and light gray). Mean Teacher helps when labels are scarce. When using 500 labels (middle column) Mean Teacher learns faster, and continues training after the Π model stops improving. On the other hand, in the all-labeled case (left column), Mean Teacher and the Π model behave virtually identically. [Figure 4 panels: (a) input augmentation and noise, (b) dropout, (c) EMA decay, (d) consistency cost weight, (e) dual output and coupling with consistency ramp-up on/off, (f) consistency cost function] Figure 4: Validation error on 250-label SVHN over four runs per hyperparameter setting and their means. In each experiment, we varied one hyperparameter, and used the evaluation run hyperparameters of Table 1 for the rest. The hyperparameter settings used in the evaluation runs are marked in bold. | 1703.01780#16 | 1703.01780#18 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#18 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | See the text for details. Mean Teacher uses unlabeled training data more efficiently than the Π model, as seen in the middle column. On the other hand, with 500k extra unlabeled examples (right column), the Π model keeps improving for longer. Mean Teacher learns faster, and eventually converges to a better result, but the sheer amount of data appears to offset the Π model's worse predictions. # 3.4 Ablation experiments To assess the importance of various aspects of the model, we ran experiments on SVHN with 250 labels, varying one or a few hyperparameters at a time while keeping the others fi | 1703.01780#17 | 1703.01780#19 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#19 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | xed. Removal of noise (Figures 4(a) and 4(b)). In the introduction and Figure 1, we presented the hypothesis that the Π model produces better predictions by adding noise to the model on both sides. But after the addition of Mean Teacher, is noise still needed? Yes. We can see that either input augmentation or dropout is necessary for passable performance. On the other hand, input noise does not help when augmentation is in use. Dropout on the teacher side provides only a marginal benefit over just having it on the student side, at least when input augmentation is in use. Sensitivity to EMA decay and consistency weight (Figures 4(c) and 4(d)). The essential hyperparameters of the Mean Teacher algorithm are the consistency cost weight and the EMA decay α. How sensitive is the algorithm to their values? We can see that in each case the good values span roughly an order of magnitude and outside these ranges the performance degrades quickly. Note that EMA decay α = 0 makes the model a variation of the Π model, although a somewhat inefficient one because the gradients are propagated through only the student path. Note also that in the evaluation runs we used EMA decay α = 0.99 during the ramp-up phase, and α = 0.999 for the rest of the training. We chose this strategy because the student improves quickly early in the training, and thus the teacher should forget the old, inaccurate student weights quickly. Later the student improvement slows, and the teacher benefits from a longer memory. Decoupling classification and consistency (Figure 4(e)). Consistency with the teacher predictions may not necessarily be a good proxy for the classification task, especially early in the training. So far our model has strongly coupled these two tasks by using the same output for both.
How would decoupling the tasks change the performance of the algorithm? To investigate, we changed the model to have two top layers and produce two outputs. We then trained one of the outputs for classifi | 1703.01780#18 | 1703.01780#20 | 1703.01780 | [
"1706.04599"
]
|
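The two-phase EMA decay described in the ablation above (α = 0.99 during ramp-up, α = 0.999 afterwards) can be sketched as a step schedule. The helper names and the 1/(1 - α) horizon rule of thumb are ours, not the paper's.

```python
def ema_decay(step, ramp_up_steps, early_alpha=0.99, late_alpha=0.999):
    # Short teacher memory while the student improves quickly, so old,
    # inaccurate student weights are forgotten fast; longer memory once
    # student improvement slows. Values taken from the ablation text.
    return early_alpha if step < ramp_up_steps else late_alpha

def effective_horizon(alpha):
    # An EMA with decay alpha averages over roughly 1 / (1 - alpha)
    # recent steps, which is why 0.999 gives the teacher a longer memory.
    return 1.0 / (1.0 - alpha)
```

With these values the teacher averages over roughly 100 recent student checkpoints during ramp-up and roughly 1000 afterwards.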
1703.01780#20 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | cation and the other for consistency. We also added a mean squared error cost between the output logits, and then varied the weight of this cost, allowing us to control the strength of the coupling. Looking at the results (reported using the EMA version of the classification output), we can see that the strongly coupled version performs well and the too loosely coupled versions do not. On the other hand, a moderate decoupling seems to have the benefit of making the consistency ramp-up redundant. | 1703.01780#19 | 1703.01780#21 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#21 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Table 4: Error rate percentage of ResNet Mean Teacher compared to the state of the art. We report the test results from 10 runs on CIFAR-10 and validation results from 2 runs on ImageNet.
                                   CIFAR-10, 4000 labels   ImageNet 2012, 10% of the labels
State of the art                   10.55 [16]              35.24 ± 0.90 [20]
ConvNet Mean Teacher               12.31 ± 0.28            –
ResNet Mean Teacher                6.28 ± 0.15             9.11 ± 0.12
State of the art using all labels  2.86 [5]                3.79 [10] | 1703.01780#20 | 1703.01780#22 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#22 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Changing from MSE to KL-divergence (Figure 4(f)) Following Laine & Aila [13], we use mean squared error (MSE) as our consistency cost function, but KL-divergence would seem a more natural choice. Which one works better? We ran experiments with instances of a cost function family ranging from MSE (τ = 0 in the figure) to KL-divergence (τ = 1), and found out that in this setting MSE performs better than the other cost functions. See Appendix C for the details of the cost function family and for our intuition about why MSE performs so well. # 3.5 Mean Teacher with residual networks on CIFAR-10 and ImageNet In the experiments above, we used a traditional 13-layer convolutional architecture (ConvNet), which has the benefi | 1703.01780#21 | 1703.01780#23 | 1703.01780 | [
"1706.04599"
]
|
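The two endpoints of the cost-function family discussed above, MSE (τ = 0) and KL-divergence (τ = 1), can be sketched on softmax outputs. The interpolating family itself is defined in the paper's Appendix C and is not reproduced here; the KL direction (teacher relative to student) and the function names are assumptions of this sketch.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mse_consistency(student_logits, teacher_logits):
    # tau = 0 endpoint: mean squared error between class probabilities.
    p, q = softmax(student_logits), softmax(teacher_logits)
    return np.mean(np.sum((p - q) ** 2, axis=-1))

def kl_consistency(student_logits, teacher_logits):
    # tau = 1 endpoint: KL(teacher || student); direction assumed here.
    p, q = softmax(student_logits), softmax(teacher_logits)
    return np.mean(np.sum(q * (np.log(q) - np.log(p)), axis=-1))
```

Both costs vanish when the student and teacher agree; they differ in how strongly they penalize confident disagreement, which is one intuition for why the paper finds MSE the better-behaved choice in this setting.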
1703.01780#23 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | t of making comparisons to earlier work easy. In order to explore the effect of the model architecture, we ran experiments using a 12-block (26-layer) Residual Network [8] (ResNet) with Shake-Shake regularization [5] on CIFAR-10. The details of the model and the training procedure are described in Appendix B.2. As shown in Table 4, the results improve remarkably with the better network architecture. To test whether the method scales to more natural images, we ran experiments on the ImageNet 2012 dataset [22] using 10% of the labels. We used a 50-block (152-layer) ResNeXt architecture [33], and saw a clear improvement over the state of the art. As the test set is not publicly available, we measured the results using the validation set. # 4 Related work Noise regularization of neural networks was proposed by Sietsma & Dow [26]. More recently, several types of perturbations have been shown to regularize intermediate representations effectively in deep learning. Adversarial Training [6] changes the input slightly to give predictions that are as different as possible from the original predictions. Dropout [28] zeroes random dimensions of layer outputs. Dropconnect [31] generalizes Dropout by zeroing individual weights instead of activations. Stochastic Depth [11] drops entire layers of residual networks, and Swapout [27] generalizes Dropout and Stochastic Depth. Shake-shake regularization [5] duplicates residual paths and samples a linear combination of their outputs independently during forward and backward passes. Several semi-supervised methods are based on training the model predictions to be consistent under perturbation. The Denoising Source Separation framework (DSS) [29] uses denoising of latent variables to learn their likelihood estimate. | 1703.01780#22 | 1703.01780#24 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#24 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | The Γ variant of Ladder Network [21] implements DSS with a deep learning model for classification tasks. It produces noisy student predictions and clean teacher predictions, and applies a denoising layer to predict teacher predictions from the student predictions. The Π model [13] improves the Γ model by removing the explicit denoising layer and applying noise also to the teacher predictions. Similar methods had been proposed earlier for linear models [30] and deep learning [2]. Virtual Adversarial Training [16] is similar to the Π model but uses adversarial perturbation instead of independent noise. The idea of a teacher model training a student is related to model compression [3] and distillation [9]. The knowledge of a complicated model can be transferred to a simpler model by training the simpler model with the softmax outputs of the complicated model. The softmax outputs contain more information about the task than the one-hot outputs, and the requirement of representing this | 1703.01780#23 | 1703.01780#25 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#25 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | knowledge regularizes the simpler model. Besides its use in model compression, distillation can be used to harden trained models against adversarial attacks [18]. The difference between distillation and consistency regularization is that distillation is performed after training whereas consistency regularization is performed at training time. Consistency regularization can be seen as a form of label propagation [34]. Training samples that resemble each other are more likely to belong to the same class. Label propagation takes advantage of this assumption by pushing label information from each example to examples that are near it according to some metric. Label propagation can also be applied to deep learning models [32]. However, ordinary label propagation requires a predefined distance metric in the input space. In contrast, consistency targets employ a learned distance metric implied by the abstract representations of the model. As the model learns new features, the distance metric changes to accommodate these features. Therefore, consistency targets guide learning in two ways. On the one hand they spread the labels according to the current distance metric, and on the other hand, they help the network learn a better distance metric. | 1703.01780#24 | 1703.01780#26 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#26 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | # 5 Conclusion Temporal Ensembling, Virtual Adversarial Training and other forms of consistency regularization have recently shown their strength in semi-supervised learning. In this paper, we propose Mean Teacher, a method that averages model weights to form a target-generating teacher model. Unlike Temporal Ensembling, Mean Teacher works with large datasets and on-line learning. Our experiments suggest that it improves the speed of learning and the classification accuracy of the trained network. In addition, it scales well to state-of-the-art architectures and large image sizes. The success of consistency regularization depends on the quality of teacher-generated targets. If the targets can be improved, they should be. Mean Teacher and Virtual Adversarial Training represent two ways of exploiting this principle. Their combination may yield even better targets. There are probably additional methods to be uncovered that improve targets and trained models even further. | 1703.01780#25 | 1703.01780#27 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#27 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | # Acknowledgements We thank Samuli Laine and Timo Aila for fruitful discussions about their work, Phil Bachman, Colin Raffel, and Thomas Robert for noticing errors in the previous versions of this paper and everyone at The Curious AI Company for their help, encouragement, and ideas. # References [1] Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. | 1703.01780#26 | 1703.01780#28 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#28 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. [2] Bachman, Philip, Alsharif, Ouais, and Precup, Doina. Learning with Pseudo-Ensembles. arXiv:1412.4864 [cs, stat], December 2014. arXiv: 1412.4864. [3] Buciluǎ, Cristian, Caruana, Rich, and Niculescu-Mizil, Alexandru. | 1703.01780#27 | 1703.01780#29 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#29 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535â 541. ACM, 2006. [4] Gal, Yarin and Ghahramani, Zoubin. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1050â 1059, 2016. [5] Gastaldi, Xavier. Shake-Shake regularization. arXiv:1705.07485 [cs], May 2017. arXiv: 1705.07485. | 1703.01780#28 | 1703.01780#30 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#30 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | [6] Goodfellow, Ian J., Shlens, Jonathon, and Szegedy, Christian. Explaining and Harnessing Adversarial Examples. December 2014. arXiv: 1412.6572. [7] Guo, Chuan, Pleiss, Geoff, Sun, Yu, and Weinberger, Kilian Q. On Calibration of Modern Neural Networks. arXiv:1706.04599 [cs], June 2017. arXiv: 1706.04599. [8] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs], December 2015. arXiv: 1512.03385. [9] Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the Knowledge in a Neural Network. arXiv:1503.02531 [cs, stat], March 2015. arXiv: 1503.02531. [10] Hu, Jie, Shen, Li, and Sun, Gang. Squeeze-and-Excitation Networks. arXiv:1709.01507 [cs], September 2017. arXiv: 1709.01507. [11] Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep Networks with Stochastic Depth. arXiv:1603.09382 [cs], March 2016. arXiv: 1603.09382. [12] Kingma, Diederik and Ba, Jimmy. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], December 2014. arXiv: 1412.6980. [13] Laine, Samuli and Aila, Timo. Temporal Ensembling for Semi-Supervised Learning. arXiv:1610.02242 [cs], October 2016. arXiv: 1610.02242. [14] Loshchilov, Ilya and Hutter, Frank. | 1703.01780#29 | 1703.01780#31 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#31 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv:1608.03983 [cs, math], August 2016. arXiv: 1608.03983. [15] Maas, Andrew L., Hannun, Awni Y., and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013. [16] Miyato, Takeru, Maeda, Shin-ichi, Koyama, Masanori, and Ishii, Shin. Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning. arXiv:1704.03976 [cs, stat], April 2017. arXiv: 1704.03976. [17] Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. | 1703.01780#30 | 1703.01780#32 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#32 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | [18] Papernot, Nicolas, McDaniel, Patrick, Wu, Xi, Jha, Somesh, and Swami, Ananthram. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. arXiv:1511.04508 [cs, stat], November 2015. arXiv: 1511.04508. [19] Polyak, B. T. and Juditsky, A. B. Acceleration of Stochastic Approximation by Averaging. SIAM J. Control Optim., 30(4):838–855, July 1992. ISSN 0363-0129. doi: 10.1137/0330046. | 1703.01780#31 | 1703.01780#33 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#33 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | [20] Pu, Yunchen, Gan, Zhe, Henao, Ricardo, Yuan, Xin, Li, Chunyuan, Stevens, Andrew, and Carin, Lawrence. Variational Autoencoder for Deep Learning of Images, Labels and Captions. arXiv:1609.08976 [cs, stat], September 2016. arXiv: 1609.08976. [21] Rasmus, Antti, Berglund, Mathias, Honkala, Mikko, Valpola, Harri, and Raiko, Tapani. Semi-supervised Learning with Ladder Networks. | 1703.01780#32 | 1703.01780#34 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#34 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28, pp. 3546–3554. Curran Associates, Inc., 2015. [22] Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs], September 2014. arXiv: 1409.0575. [23] Sajjadi, Mehdi, Javanmardi, Mehran, and Tasdizen, Tolga. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 1163– | 1703.01780#33 | 1703.01780#35 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#35 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | 1171. Curran Associates, Inc., 2016. [24] Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–901, 2016. [25] Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. | 1703.01780#34 | 1703.01780#36 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#36 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Improved techniques for training gans. In Advances in Neural Information Processing Systems, pp. 2226–2234, 2016. [26] Sietsma, Jocelyn and Dow, Robert JF. Creating artificial neural networks that generalize. Neural networks, 4(1):67–79, 1991. [27] Singh, Saurabh, Hoiem, Derek, and Forsyth, David. Swapout: Learning an ensemble of deep architectures. arXiv:1605.06465 [cs], May 2016. arXiv: 1605.06465. [28] Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A Simple Way to Prevent Neural Networks from Overfi | 1703.01780#35 | 1703.01780#37 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#37 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | tting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014. ISSN 1532-4435. [29] Särelä, Jaakko and Valpola, Harri. Denoising Source Separation. Journal of Machine Learning Research, 6(Mar):233–272, 2005. ISSN 1533-7928. [30] Wager, Stefan, Wang, Sida, and Liang, Percy. Dropout Training as Adaptive Regularization. arXiv:1307.1493 [cs, stat], July 2013. arXiv: 1307.1493. [31] Wan, Li, Zeiler, Matthew, Zhang, Sixin, Le Cun, Yann, and Fergus, Rob. Regularization of Neural Networks using DropConnect. pp. 1058–1066, 2013. [32] Weston, Jason, Ratle, Frédéric, Mobahi, Hossein, and Collobert, Ronan. | 1703.01780#36 | 1703.01780#38 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#38 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012. [33] Xie, Saining, Girshick, Ross, Dollár, Piotr, Tu, Zhuowen, and He, Kaiming. Aggregated Residual Transformations for Deep Neural Networks. arXiv:1611.05431 [cs], November 2016. arXiv: 1611.05431. [34] Zhu, Xiaojin and Ghahramani, Zoubin. Learning from labeled and unlabeled data with label propagation. 2002. | 1703.01780#37 | 1703.01780#39 | 1703.01780 | [
"1706.04599"
]
|
1703.01780#39 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | # Appendix # A Results without input augmentation See table 5 for the results without input augmentation. Table 5: Error rate percentage on SVHN and CIFAR-10 over 10 runs, including the results without input augmentation. We use exponential moving average weights in the evaluation of all our models. All the comparison methods use a 13-layer ConvNet architecture similar to ours and augmentation similar to ours, except GAN, which does not use augmentation.
SVHN (error %, columns: 250 / 500 / 1000 / all (a) labels):
- GAN (b): 500 labels 18.44 ± 4.8; 1000 labels 8.11 ± 1.3
- Π model (c): 500 labels 6.65 ± 0.53; 1000 labels 4.82 ± 0.17; all labels 2.54 ± 0.04
- Temporal Ensembling (c): 500 labels 5.12 ± 0.13; 1000 labels 4.42 ± 0.16; all labels 2.74 ± 0.06
- VAT+EntMin (d): 1000 labels 3.86
- Supervised-only (e): 250 labels 27.77 ± 3.18; 500 labels 16.88 ± 1.30; 1000 labels 12.32 ± 0.95; all labels 2.75 ± 0.10
- Π model: 250 labels 9.69 ± 0.92; 500 labels 6.83 ± 0.66; 1000 labels 4.95 ± 0.26; all labels 2.50 ± 0.07
- Mean Teacher: 250 labels 4.35 ± 0.50; 500 labels 4.18 ± 0.27; 1000 labels 3.95 ± 0.19; all labels 2.50 ± 0.05
SVHN without augmentation:
- Supervised-only (e): 250 labels 36.26 ± 3.83; 500 labels 19.68 ± 1.03; 1000 labels 14.15 ± 0.87; all labels 3.04 ± 0.04
- Π model: 250 labels 10.36 ± 0.94; 500 labels 7.01 ± 0.29; 1000 labels 5.73 ± 0.16; all labels 2.75 ± 0.08
- Mean Teacher: 250 labels 5.85 ± 0.62; 500 labels 5.45 ± 0.14; 1000 labels 5.21 ± 0.21; all labels 2.77 ± 0.09
CIFAR-10 (error %, columns: 1000 / 2000 / 4000 / all (a) labels):
- GAN (b): 4000 labels 18.63 ± 2.32
- Π model (c): 4000 labels 12.36 ± 0.31; all labels 5.56 ± 0.10
- Temporal Ensembling (c): 4000 labels 12.16 ± 0.31; all labels 5.60 ± 0.10
- VAT+EntMin (d): 4000 labels 10.55
- Supervised-only (e): 1000 labels 46.43 ± 1.21; 2000 labels 33.94 ± 0.73; 4000 labels 20.66 ± 0.57; all labels 5.82 ± 0.15
- Π model: 1000 labels 27.36 ± 1.20; 2000 labels 18.02 ± 0.60; 4000 labels 13.20 ± 0.27; all labels 6.06 ± 0.11
- Mean Teacher: 1000 labels 21.55 ± 1.48; 2000 labels 15.73 ± 0.31; 4000 labels 12.31 ± 0.28; all labels 5.94 ± 0.15
- Mean Teacher ResNet: 1000 labels 10.08 ± 0.41; 4000 labels 6.28 ± 0.15
CIFAR-10 without augmentation:
- Supervised-only (e): 1000 labels 48.38 ± 1.07; 2000 labels 36.07 ± 0.90; 4000 labels 24.47 ± 0.50; all labels 7.43 ± 0.06
- Π model: 1000 labels 32.18 ± 1.33; 2000 labels 23.92 ± 1.07; 4000 labels 17.08 ± 0.32; all labels 7.00 ± 0.20
- Mean Teacher: 1000 labels 30.62 ± 1.13; 2000 labels 23.14 ± 0.46; 4000 labels 17.74 ± 0.30; all labels 7.21 ± 0.24 | 1703.01780#38 | 1703.01780#40 | 1703.01780 | [
"1706.04599"
]
|
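The error rates in the row above are reported as a mean ± standard deviation over 10 training runs. As an illustration of that aggregation, here is a minimal sketch; the per-run values below are made up for the example, not taken from the paper:

```python
import statistics

# Hypothetical error rates (%) from 10 independent training runs of one
# configuration; these numbers are illustrative only.
runs = [4.1, 4.4, 4.2, 4.5, 4.3, 4.0, 4.6, 4.2, 4.4, 4.3]

mean = statistics.mean(runs)  # arithmetic mean over runs
std = statistics.stdev(runs)  # sample standard deviation (n - 1 denominator)
print(f"{mean:.2f} ± {std:.2f}")  # -> 4.30 ± 0.18
```

Whether the paper uses the sample or population standard deviation is not stated in this chunk; the sample form above is the common default.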
1703.01780#40 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results | (a) 4 runs. (e) Only labeled examples and only classification cost. # B Experimental setup Source code for the experiments is available at https://github.com/CuriousAI/mean-teacher. # B.1 Convolutional network models We replicated the Π model of Laine & Aila [13] in TensorFlow [1], and added support for Mean Teacher training. We modified the model slightly to match the requirements of the experiments, as described in subsections B.1.1 and B.1.2. The difference between the original Π model described by Laine & Aila [13] and our baseline Π model thus depends on the experiment. The difference between | 1703.01780#39 | 1703.01780#41 | 1703.01780 | [
"1706.04599"
]
|
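The Mean Teacher training mentioned in the row above maintains a teacher model whose weights track an exponential moving average (EMA) of the student's weights. A minimal sketch of that update rule follows; plain Python lists stand in for network parameters, and the decay value is illustrative, not the paper's hyperparameter:

```python
def ema_update(teacher, student, decay=0.99):
    """Update teacher parameters in place as an EMA of student parameters:
    teacher <- decay * teacher + (1 - decay) * student."""
    for i, (t, s) in enumerate(zip(teacher, student)):
        teacher[i] = decay * t + (1.0 - decay) * s
    return teacher

# Toy example: after one update the teacher moves slightly toward the student.
teacher = [0.0, 1.0]
student = [1.0, 0.0]
ema_update(teacher, student, decay=0.9)
print(teacher)  # approximately [0.1, 0.9]
```

In a real implementation this update is applied to every trainable weight after each optimizer step, so the teacher lags behind and smooths the student's trajectory.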