While the network is no longer able to learn the optimal set of filters for each layer, it will learn the best set of filters given the constraints, and the resulting number of model parameters is drastically reduced.

4.3 HYPERLSTM FOR CHARACTER-LEVEL PENN TREEBANK LANGUAGE MODELLING

The HyperLSTM model is evaluated on the character-level prediction task on the Penn Treebank corpus (Marcus et al., 1993) using the train/validation/test split outlined in (Mikolov et al., 2012). As the dataset is quite small and prone to overfitting, we apply dropout on both input and output layers with a keep probability of 0.90. Unlike previous approaches (Graves, 2013; Ognawala & Bayer, 2014) that apply weight noise during training, we instead also apply dropout to the recurrent layer (Henaff et al., 2016) with the same dropout probability. We compare our model to the basic LSTM cell, stacked LSTM cells (Graves, 2013), and the LSTM with layer normalization applied. In addition, we also experimented with applying layer normalization to HyperLSTM. Using the setup in (Graves, 2013), we use networks with 1000 units and train the network to predict the next character. In this task, the HyperLSTM cell has 128 units and a signal size of 4. As the HyperLSTM cell has more trainable parameters compared to the basic LSTM cell, we also experimented with an LSTM cell with 1250 units. For more details regarding the experimental setup, please refer to Appendix A.3.3.

It is interesting to note that combining recurrent dropout with a basic LSTM cell achieves quite formidable performance. Our implementation of the Recurrent Dropout Basic LSTM cell reproduced results similar to those of (Semeniuta et al., 2016), who also experimented with different dropout settings. We also found that the Layer Norm LSTM performed quite well when combined with recurrent dropout, making it both a formidable baseline and an extension for HyperLSTM. In addition to outperforming both the larger and the deeper versions of the LSTM network, HyperLSTM also achieved performance similar to that of the Layer Norm LSTM.
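The results in Tables 3 and 4 below are reported in bits-per-character (BPC). As a reminder (this formula is our addition, not from the paper), BPC is the average negative base-2 log-probability assigned to the next character, i.e. the cross-entropy loss in nats divided by ln 2:

```latex
\mathrm{BPC} \;=\; -\frac{1}{T}\sum_{t=1}^{T} \log_2 p\!\left(x_t \mid x_{<t}\right)
\;=\; \frac{1}{\ln 2}\,\mathbb{E}\!\left[-\ln p\!\left(x_t \mid x_{<t}\right)\right]
```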
This suggests that, by dynamically adjusting the weight scaling vectors, the HyperLSTM cell has learned a policy of scaling inputs to the activation functions that is as efficient as the statistical moments-based strategy employed by Layer Norm, and that the extra computation required is embedded inside the extra 128 units of the HyperLSTM cell. When we combine HyperLSTM with Layer Norm, we see an additional performance gain, implying that the HyperLSTM cell learned an adjustment policy that goes beyond moments-based regularization. We also demonstrate that increasing the size of the embedding vector or stacking HyperLSTM layers together can further increase its performance.
Model                                                      Test     Validation   Param Count
ME n-gram (Mikolov et al., 2012)                           1.37
Batch Norm LSTM (Cooijmans et al., 2016)                   1.32
Recurrent Dropout LSTM (Semeniuta et al., 2016)            1.301    1.338
Zoneout RNN (Krueger et al., 2016)                         1.27
HM-LSTM (Chung et al., 2016)                               1.27
LSTM, 1000 units                                           1.312    1.347        4.25 M
LSTM, 1250 units                                           1.306    1.340        6.57 M
2-Layer LSTM, 1000 units                                   1.281    1.312        12.26 M
Layer Norm LSTM, 1000 units                                1.267    1.300        4.26 M
HyperLSTM (ours), 1000 units                               1.265    1.296        4.91 M
Layer Norm HyperLSTM, 1000 units (ours)                    1.250    1.281        4.92 M
Layer Norm HyperLSTM, 1000 units, Large Embedding (ours)   1.233    1.263        5.06 M
2-Layer Norm HyperLSTM, 1000 units (ours)                  1.219    1.245        14.41 M

Table 3: Bits-per-character on the Penn Treebank test set.

4.4 HYPERLSTM FOR HUTTER PRIZE WIKIPEDIA LANGUAGE MODELLING

We train our model on the larger and more challenging Hutter Prize Wikipedia dataset, also known as enwik8 (Hutter, 2012), consisting of a sequence of 100M characters composed of 205 unique characters. Unlike Penn Treebank, enwik8 contains some foreign words (Latin, Arabic, Chinese), indented XML, metadata, and internet addresses, making it a more realistic and practical dataset for testing character-level language models.
For more details regarding the experimental setup, please refer to Appendix A.3.4. Examples of the mixed variety of text samples that our HyperLSTM model can generate are in Appendix A.4.

Model                                               enwik8   Param Count
Stacked LSTM (Graves, 2013)                         1.67     27.0 M
MRNN (Sutskever et al., 2011)                       1.60
GF-RNN (Chung et al., 2015)                         1.58     20.0 M
Grid-LSTM (Kalchbrenner et al., 2016)               1.47     16.8 M
LSTM (Rocki, 2016b)                                 1.45
MI-LSTM (Wu et al., 2016)                           1.44
Recurrent Highway Networks (Zilly et al., 2016)     1.42     8.0 M
Recurrent Memory Array Structures (Rocki, 2016a)    1.40
HM-LSTM (Chung et al., 2016)                        1.40
Surprisal Feedback LSTM (Rocki, 2016b)              1.37
LSTM, 1800 units, no recurrent dropout              1.470    14.81 M
LSTM, 2000 units, no recurrent dropout              1.461    18.06 M
Layer Norm LSTM, 1800 units                         1.402    14.82 M
HyperLSTM (ours), 1800 units                        1.391    18.71 M
Layer Norm HyperLSTM, 1800 units (ours)             1.353    18.78 M
Layer Norm HyperLSTM, 2048 units (ours)             1.340    26.54 M

Table 4: Bits-per-character on the enwik8 test set.
We see that HyperLSTM is once again competitive with Layer Norm LSTM, and if we combine both techniques, the Layer Norm HyperLSTM achieves respectable results. The version of HyperLSTM that uses 2048 hidden units achieves near state-of-the-art performance for this task. In addition, HyperLSTM converges more quickly per training step compared to LSTM and Layer Norm LSTM. Please refer to Figure 6 for the loss graphs.

Notes for Tables 3 and 4: (1) We do not compare against methods that use dynamic evaluation. (2) Our implementation. (3) Based on results of version 2 at the time of writing: http://arxiv.org/abs/1609.01704v2.
(4) This method uses information about test errors during inference when predicting the next characters, hence it is not directly comparable to other methods that do not use this information.

In 1955-37 most American and Europeans signed into the sea. An absence of [[Japan (Korea city)|Japan]], the Mayotte like Constantinople (in its first week, in [[880]]) that served as the mother of emperors, as the Corinthians, Bernard on his continued sequel together ordered [[Operation Moabili]]. The Gallup churches in the army promulgated the possessions sitting at the reservation, and [[Melito de la Vegeta Provine|Felix]] had broken Diocletian desperate from the full victory of Augustus, cited by Stephen I. Alexander Senate became Princess Cartara, an annual ruler of war (777-184) and founded numerous extremiti of justice practitioners.
Figure 4: Example text generated from the HyperLSTM model. We visualize how four of the main RNN's weight matrices (W_h^i, W_h^g, W_h^f, W_h^o) effectively change over time by plotting the norm of the changes below each generated character. High intensity represents large changes being made to the weights of the main RNN.

When we use this prediction model as a generative model to sample a text passage, we use the main RNN to model a probability distribution over possible characters conditioned on the preceding characters. In the case of the HyperRNN, we allow the model parameters of this generative model to vary over time, so in effect the HyperRNN cell is choosing the best model at any given time to generate a probability distribution to sample from. We can demonstrate this by visualizing how the weight scaling vectors of the main RNN change during the character sampling process. In Figure 4, we examine a sample text passage generated by HyperLSTM after training on enwik8, along with the weight differences below the text. We see that in regions of low intensity, where the weights of the main RNN are relatively static, the types of phrases generated seem more deterministic. For example, the weights do not change much during the words Europeans, possessions and reservation. The regions of high intensity are where the HyperRNN cell is making relatively large changes to the weights of the main RNN. These tend to happen in the areas between words, or sometimes during brackets. One might also wonder whether the HyperLSTM cell (without Layer Norm), via dynamically tuning the weight scaling vectors, has developed a policy that is similar to the statistics-based approach used by Layer Norm, given that both methods have similar performance. One way to see this effect is to look at the histogram of the hidden states in the network. In Figure 5, we examine the histograms of φ(c_t), the hidden state of the LSTM before applying the output gate.
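A minimal sketch of how such an intensity plot can be produced: record the concatenated weight scaling vectors of the main RNN at every sampling step and plot the norm of their step-to-step change under the generated characters. The function and array names below are our own assumptions rather than the paper's released code.

```python
import numpy as np
import matplotlib.pyplot as plt

def weight_change_intensity(d_vectors):
    """d_vectors: array of shape (T, D) holding the concatenated weight
    scaling vectors of the main RNN at each of T sampling steps.
    Returns the per-step L2 norm of their change, which is what the
    intensity bars below each generated character represent."""
    deltas = np.diff(d_vectors, axis=0)
    return np.linalg.norm(deltas, axis=1)

def plot_sample(chars, d_vectors):
    # One bar per transition, labelled with the character that was emitted.
    intensity = weight_change_intensity(d_vectors)
    plt.bar(range(len(intensity)), intensity)
    plt.xticks(range(len(chars) - 1), list(chars[1:]))
    plt.ylabel("||Δd_t||")
    plt.show()
```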
Figure 5: Normalized histogram plots of φ(c_t) for different models (LSTM, Layer Norm LSTM, HyperLSTM, Layer Norm HyperLSTM) during sampling.

We see that the normalization process employed by Layer Norm reduces the saturation effects compared to the vanilla LSTM. However, for the case of the HyperLSTM, we notice that most of the time the cell is saturated. The HyperLSTM cell's dynamic weight adjustment policy appears to be doing something very different compared to statistical normalization, although the policy it came up with ended up providing similar performance to Layer Norm. It is interesting to see that when we combine both methods, the HyperLSTM cell will need to determine an adjustment policy in spite of the normalization forced upon it by Layer Norm. An interesting question is whether there are problems where statistical normalization may actually be a setback to the policy developed by the HyperLSTM, and the best strategy is to ignore it.
Figure 6: Loss graph for enwik8 (left). Loss graph for handwriting generation (right).

4.5 HYPERLSTM FOR HANDWRITING SEQUENCE GENERATION

In addition to modelling discrete sequential data, we want to see how the model performs when modelling sequences of real-valued data. We will train our model on the IAM online handwriting database (Liwicki & Bunke, 2005) and have our model predict pen strokes as per Section 4.2 of (Graves, 2013). The dataset contains 12,179 handwritten lines from 221 writers, digitally recorded from a tablet. We will model the (x, y) coordinates of the pen location at each recorded time step, along with a binary indicator of pen-up/pen-down. The average sequence length is around 700 steps and the longest around 1900 steps, making the training task particularly challenging as the network needs to retain information about both the stroke history and the handwriting style in order to predict plausible future handwriting strokes. For experimental setup details, please refer to Appendix A.3.5.
Model                                                              Log-Loss   Param Count
LSTM, 900 units (Graves, 2013)                                     -1,026
3-Layer LSTM, 400 units (Graves, 2013)                             -1,041
3-Layer LSTM, 400 units, adaptive weight noise (Graves, 2013)      -1,058
LSTM, 900 units, no dropout, no data augmentation                  -1,026     3.36 M
3-Layer LSTM, 400 units, no dropout, no data augmentation          -1,039     3.26 M
LSTM, 900 units                                                    -1,055     3.36 M
LSTM, 1000 units                                                   -1,048     4.14 M
3-Layer LSTM, 400 units                                            -1,068     3.26 M
2-Layer LSTM, 650 units                                            -1,135     5.16 M
Layer Norm LSTM, 900 units                                         -1,096     3.37 M
Layer Norm LSTM, 1000 units                                        -1,106     4.14 M
Layer Norm HyperLSTM, 900 units (ours)                             -1,067     3.95 M
HyperLSTM (ours), 900 units                                        -1,162     3.94 M
Table 5: Log-loss on the IAM Online DB validation set.

In this task, we note that data augmentation and applying recurrent dropout improved the performance of all models, compared to the original setup of (Graves, 2013). In addition, for the LSTM model, increasing the unit count per layer may not help the performance compared to increasing the layer depth. We notice that a 3-layer 400-unit LSTM outperforms a 1-layer 900-unit one, and we found that a 2-layer 650-unit LSTM outperforms most configurations. While layer norm helps with the performance, we found that layer norm does not combine well with HyperLSTM in this task, and the 900-unit HyperLSTM without layer norm achieved the best performance. Unlike the language modelling task, perhaps statistical normalization is far from the optimal approach for a weight adjustment policy.
Notes for Table 5: our implementations either replicate the setup of (Graves, 2013), or add data augmentation, dropout and recurrent dropout.

The policy learned by the HyperLSTM cell not only performed well against the baseline; its convergence rate is also as fast as that of the 2-layer LSTM model. Please refer to Figure 6 for the loss graphs. In Appendix A.5, we display three sets of handwriting samples generated from LSTM, Layer Norm LSTM, and HyperLSTM, corresponding to log-loss scores of -1055, -1096, and -1162 nats respectively in Table 5. Qualitative assessment of handwriting quality is always subjective, and depends on an individual's taste in calligraphy.
From looking at the examples produced by the three models, our opinion is that the samples produced by LSTM are noisier than those of the other two models. We also find HyperLSTM's samples to be a bit more coherent than the samples produced by Layer Norm LSTM. We leave it to the reader to judge which model produces handwriting samples of higher quality.

Figure 7: Handwriting sample generated from the HyperLSTM model. We visualize how four of the main RNN's weight matrices (W_h^i, W_h^g, W_h^f, W_h^o) effectively change over time, by plotting the norm of the changes made to them over time.

Similar to the earlier character generation experiment, we show a generated handwriting sample from the HyperLSTM model in Figure 7, along with a plot of how the weight scaling vectors of the main RNN are changing over time below the sample. For a more detailed interactive demonstration of handwriting generation using HyperLSTM, visit http://blog.otoro.net/2016/09/28/hyper-networks/. We see that the regions of high intensity seem to be concentrated at many discrete instances, rather than slowly varying over time. This implies that the weights experience regime changes rather than gradual slow adjustments. We can see that many of these weight changes occur between the written words, and sometimes between written characters. While the LSTM model alone already does a formidable job of generating time-varying parameters of a Gaussian mixture distribution used to generate realistic handwriting samples, the ability to go one level deeper, and to dynamically generate the generative model, is one of the key advantages of HyperRNN over a normal RNN.

4.6 HYPERLSTM FOR NEURAL MACHINE TRANSLATION

We experiment with the Neural Machine Translation task using the same experimental setup outlined in (Wu et al., 2016). Our model is the same wordpiece model architecture with a vocabulary size of 32k, but we replace the LSTM cells with HyperLSTM cells. We benchmark the modified model on WMT'14 En→Fr using the same test/validation set split described in the GNMT paper (Wu et al., 2016). Please refer to Appendix A.3.6 for experimental setup details.
Model                                                  Test BLEU   Log Perplexity
Deep-Att + PosUnk (Zhou et al., 2016)                  39.2
GNMT WPM-32K, LSTM (Wu et al., 2016)                   38.95       1.027
GNMT WPM-32K, ensemble of 8 LSTMs (Wu et al., 2016)    40.35
GNMT WPM-32K, HyperLSTM (ours)                         40.03       0.993

Table 6: Single model results on WMT'14 En→Fr (newstest2014).

The HyperLSTM cell improves the performance of the existing GNMT model, achieving state-of-the-art single model results for this dataset. In addition, we demonstrate the applicability of hypernetworks to large-scale models used in production systems. Please see Appendix A.6 for actual translation samples generated from both models for a qualitative comparison.
5 CONCLUSION

In this paper, we presented a method to use a hypernetwork to generate weights for another neural network. Our hypernetworks are trained end-to-end with backpropagation and therefore are efficient and scalable. We focused on two use cases of hypernetworks: static hypernetworks to generate weights for a convolutional network, and dynamic hypernetworks to generate weights for recurrent networks. We found that the method works well while using fewer parameters. On image recognition, language modelling and handwriting generation, hypernetworks are competitive with or sometimes better than state-of-the-art models.
ACKNOWLEDGMENTS

We thank Jeff Dean, Geoffrey Hinton, Mike Schuster and the Google Brain team for their help with the project.

REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.

M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas.
Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.

Jimmy L. Ba, Jamie R. Kiros, and Geoffrey E. Hinton. Layer normalization. NIPS, 2016.

Luca Bertinetto, Joao F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. In NIPS, 2016.
Christopher M. Bishop. Mixture density networks. Technical report, 1994.

Junyoung Chung, Caglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural networks. arXiv preprint arXiv:1502.02367, 2015.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Tim Cooijmans, Nicolas Ballas, Cesar Laurent, and Caglar Gulcehre. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.

Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. In NIPS, 2016.
Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In NIPS, 2013.

Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, Frederic Besse, David Pfau, Max Jaderberg, Marc Lanctot, and Daan Wierstra. Convolution by evolution: Differentiable pattern producing networks. In GECCO, 2016.

Faustino Gomez and Jürgen Schmidhuber. Evolving modular fast-weight networks for control. In ICANN, 2005.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.

Mikael Henaff, Arthur Szlam, and Yann LeCun. Orthogonal RNNs and long-memory tasks. In ICML, 2016.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016b.

Marcus Hutter. The human knowledge compression contest. 2012. URL http://prize.hutter1.net/.

Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. arXiv preprint arXiv:1608.05343, 2016.

Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. In ICLR, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Jan Koutnik, Faustino Gomez, and Jürgen Schmidhuber.
Evolving neural networks in compressed weight space. In GECCO, 2010.

David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel.
Handwritten digit recognition with a back-propagation network. In NIPS, 1990.

Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, volume 2, pp. 6, 2015.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014.

Marcus Liwicki and Horst Bunke. IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard. In ICDAR, 2005.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini.
Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocky. Subword language modeling with neural networks. Preprint, 2012.

Marcin Moczulski, Misha Denil, Jeremy Appleyard, and Nando de Freitas.
ACDC: A structured efficient linear layer. arXiv preprint arXiv:1511.05946, 2015.

Saahil Ognawala and Justin Bayer. Regularizing recurrent networks - on injected noise and norm-based methods. arXiv preprint arXiv:1410.5684, 2014.

Kamil Rocki. Recurrent memory array structures. arXiv preprint arXiv:1607.03085, 2016a.
Kamil Rocki. Surprisal-driven feedback in recurrent networks. arXiv preprint arXiv:1608.06027, 2016b.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139, 1992.

Jürgen Schmidhuber.
A 'self-referential' weight matrix. In ICANN, 1993.

Stanislaw Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.

Rupesh Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS, 2015.

Kenneth O. Stanley, David B.
D'Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185-212, 2009.

Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks. In ICML, 2011.

Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google's
Neural Machine Translation System: Bridging the gap between human and machine translation. ArXiv e-prints, 2016.

Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. NIPS, 2016.

Jianlin Xia, Shivkumar Chandrasekaran, Ming Gu, and Xiaoye S. Li. Fast algorithms for hierarchically semiseparable matrices. Numerical Linear Algebra with Applications, 2010.
Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang. Deep fried convnets. In ICCV, 2015.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.

Ke Zhang, Miao Sun, Tony X. Han, Xingfang Yuan, Liru Guo, and Tao Liu.
Residual networks of residual networks: Multilevel residual networks. arXiv preprint arXiv:1608.02908, 2016.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016. URL http://arxiv.org/abs/1606.04199.

Julian Zilly, Rupesh Srivastava, Jan Koutnik, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
A APPENDIX

A.1 HYPERNETWORKS TO LEARN FILTERS FOR A FULLY CONNECTED NETWORK

Figure 8: Filters learned to classify MNIST digits in a fully connected network (left). Filters learned by a hypernetwork (right).

We ran an experiment where the hypernetwork receives the x, y locations of both the input pixel and the weight, and predicts the value of the hidden weight matrix in a fully connected network that learns to classify MNIST digits. In this experiment, the fully connected network (784-256-10) has one hidden layer of 16 x 16 units, where the hypernetwork is a pre-defined small feedforward network. The weights of the hidden layer have 784 x 256 = 200704 parameters, while the hypernetwork is an 801-parameter four-layer feedforward relu network that generates the 784 x 256 weight matrix. The result of this experiment is shown in Figure 8. We want to emphasize that even though the network can learn convolutional-like filters during end-to-end training, its performance is rather poor: the best accuracy is 93.5%, compared to 98.5% for the conventional fully connected network. We find that the virtual coordinates-based approach to hypernetworks that is used by HyperNEAT and DPPN has its limitations in many practical tasks, such as image recognition and language modelling, and therefore developed our embedding vector approach in this work.

A.2 CONCEPTUAL DIAGRAMS OF STATIC AND DYNAMIC HYPERNETWORKS

Figure 9: Feedforward Network (top) and Recurrent Network (bottom).

Figure 10: Static Hypernetwork generating weights for a Feedforward Network.
Figure 11: Dynamic Hypernetwork generating weights for a Recurrent Network.
A.2.1 FILTER VISUALIZATIONS FOR RESIDUAL NETWORKS

Figures 12 and 13 show example visualizations for various kernels in a deep residual network. Note that the 32x32x3x3 kernel generated by the hypernetwork was constructed by concatenating 4 basic kernels together.

Figure 13: Generated 16x16x3x3 kernel (left). Generated 32x32x3x3 kernel (right).

A.2.2 HYPERLSTM

In this section we discuss the extension of HyperRNN to LSTM. Our focus is on the basic version of the LSTM architecture (Hochreiter & Schmidhuber, 1997), given by:
i_t = W_h^i h_{t-1} + W_x^i x_t + b^i
g_t = W_h^g h_{t-1} + W_x^g x_t + b^g
f_t = W_h^f h_{t-1} + W_x^f x_t + b^f
o_t = W_h^o h_{t-1} + W_x^o x_t + b^o
c_t = \sigma(f_t) \odot c_{t-1} + \sigma(i_t) \odot \phi(g_t)
h_t = \sigma(o_t) \odot \phi(c_t)     (9)

where W_h^y \in R^{N_h \times N_h}, W_x^y \in R^{N_h \times N_x}, b^y \in R^{N_h}, \sigma is the sigmoid operator, and \phi is the tanh operator. For brevity, y is one of {i, g, f, o}.(1)

Similar to the previous section, we will make the weights and biases a function of an embedding, and the embedding for each of {i, g, f, o} will be generated from a smaller HyperLSTM cell. As discussed earlier, we will also experiment with adding the option to use a Layer Normalization layer in the HyperLSTM. The HyperLSTM cell is given by:

\hat{x}_t = (h_{t-1}; x_t)
\hat{i}_t = LN(\hat{W}_h^i \hat{h}_{t-1} + \hat{W}_x^i \hat{x}_t + \hat{b}^i)
\hat{g}_t = LN(\hat{W}_h^g \hat{h}_{t-1} + \hat{W}_x^g \hat{x}_t + \hat{b}^g)
\hat{f}_t = LN(\hat{W}_h^f \hat{h}_{t-1} + \hat{W}_x^f \hat{x}_t + \hat{b}^f)
\hat{o}_t = LN(\hat{W}_h^o \hat{h}_{t-1} + \hat{W}_x^o \hat{x}_t + \hat{b}^o)
\hat{c}_t = \sigma(\hat{f}_t) \odot \hat{c}_{t-1} + \sigma(\hat{i}_t) \odot \phi(\hat{g}_t)
\hat{h}_t = \sigma(\hat{o}_t) \odot \phi(LN(\hat{c}_t))     (10)

The weight matrices for each of the four {i, g, f, o} gates will be a function of a set of embeddings z_x, z_h, and z_b unique to each gate, just like the HyperRNN. These embeddings are linear projections of the hidden states of the HyperLSTM cell. For brevity, y is one of {i, g, f, o}, to avoid writing four sets of identical equations:

z_h^y = W_{\hat{h}h}^y \hat{h}_t + b_{\hat{h}h}^y
z_x^y = W_{\hat{h}x}^y \hat{h}_t + b_{\hat{h}x}^y
z_b^y = W_{\hat{h}b}^y \hat{h}_t     (11)

As in the memory-efficient version of the HyperRNN, we will focus on the efficient version of the HyperLSTM, where we use weight scaling vectors d to modify the rows of the weight matrices:
y_t = LN(d_h^y \odot W_h^y h_{t-1} + d_x^y \odot W_x^y x_t + b^y(z_b^y)), where
d_h^y(z_h) = W_{hz}^y z_h^y
d_x^y(z_x) = W_{xz}^y z_x^y
b^y(z_b) = W_{bz}^y z_b^y + b_0^y     (12)

In our implementation, the cell and hidden state update equations for the main LSTM will incorporate a single dropout (Hinton et al., 2012) gate, as developed in Recurrent Dropout without Memory Loss (Semeniuta et al., 2016), as we found this to help regularize the entire model during training:

c_t = \sigma(f_t) \odot c_{t-1} + \sigma(i_t) \odot DropOut(\phi(g_t))
h_t = \sigma(o_t) \odot \phi(LN(c_t))     (13)

(1) In practice, all eight weight matrices are concatenated into one large matrix for computational efficiency.
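The following NumPy sketch illustrates Equations 12 and 13 above. It is a simplified single-example illustration under our own assumptions: plain NumPy arrays, layer normalization without its learned gain and bias, and function and variable names of our choosing rather than the reference implementation's.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def layer_norm(v, eps=1e-5):
    # Layer normalization without learned gain/bias, for brevity.
    return (v - v.mean()) / np.sqrt(v.var() + eps)

def scaled_gate(W_h, W_x, h_prev, x_t, z_h, z_x, z_b, W_hz, W_xz, W_bz, b0):
    """Pre-activation of one main-LSTM gate in the memory-efficient
    HyperLSTM (Eq. 12): embeddings z produced by the HyperLSTM cell are
    projected to row-scaling vectors d and a dynamic bias, then applied to
    the fixed weight matrices of the main LSTM."""
    d_h = W_hz @ z_h              # scaling vector for the recurrent term
    d_x = W_xz @ z_x              # scaling vector for the input term
    b = W_bz @ z_b + b0           # dynamically generated bias
    return layer_norm(d_h * (W_h @ h_prev) + d_x * (W_x @ x_t) + b)

def cell_update(i, g, f, o, c_prev, dropout_mask):
    """Main-LSTM state update (Eq. 13), with the single recurrent-dropout
    gate applied to the candidate activation."""
    c = sigmoid(f) * c_prev + sigmoid(i) * (dropout_mask * np.tanh(g))
    h = sigmoid(o) * np.tanh(layer_norm(c))
    return c, h
```

With the Penn Treebank setup (1000 main units, embedding size 4), W_hz and W_xz would each have shape (1000, 4), so the per-step cost of generating the scaling vectors is small relative to the main LSTM matrix multiplies.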
This dropout operation is generally only applied inside the main LSTM, not in the smaller HyperLSTM cell. For larger systems we can apply dropout to both networks.

A.2.3 IMPLEMENTATION DETAILS AND WEIGHT INITIALIZATION FOR HYPERLSTM

This section may be useful to readers who want to implement their own version of the HyperLSTM cell, as we discuss initialization of the parameters for Equations 10 to 13. We recommend implementing the HyperLSTM within the same interface as a normal recurrent network cell so that using the HyperLSTM is no different from using a normal RNN. These initialization parameters have been found to work well in our experiments, but they may be far from optimal depending on the task at hand. A reference implementation developed using the TensorFlow (Abadi et al., 2016) framework can be found at http://blog.otoro.net/2016/09/28/hyper-networks/.

The HyperLSTM cell is located inside the HyperLSTM, as described in Equation 10. It is a normal LSTM cell with Layer Normalization. The inputs to the HyperLSTM cell are the concatenation of the input signal and the hidden units of the main LSTM cell. The biases in Equation 10 are initialized to zero and Orthogonal Initialization (Henaff et al., 2016) is performed for all weights. The embedding vectors are produced by the HyperLSTM cell at each timestep by the linear projections described in Equation 11. The weights for the first two equations are initialized to zero, and the biases are initialized to one. The weights for the third equation are initialized to a small normal random variable with standard deviation 0.01. The weight scaling vectors that modify the weight matrices are generated from these embedding vectors, as per Equation 12. Orthogonal initialization is applied to W_h and W_x, while b_0 is initialized to zero. W_bz is also initialized to zero. For the weight scaling vectors, we used a method described in Recurrent Batch Normalization (Cooijmans et al., 2016) where the scaling vectors are initialized to 0.1 rather than 1.0, which has been shown to help gradient flow.
Therefore, for the weight matrices W_hz and W_xz, we initialize them to a constant value of 0.1/N_z to maintain this property. The only place we use dropout is in the single location in Equation 13, developed in Recurrent Dropout without Memory Loss (Semeniuta et al., 2016). We can use this dropout gate like any other normal dropout gate in a feed-forward network.

A.3 EXPERIMENT SETUP DETAILS AND HYPERPARAMETERS

A.3.1 USING STATIC HYPERNETWORKS TO GENERATE FILTERS FOR CONVOLUTIONAL NETWORKS AND MNIST

We train the network with a 55000 / 5000 / 10000 split for the training, validation and test sets and use the 5000 validation samples for early stopping, and train the network using Adam (Kingma & Ba, 2015) with a learning rate of 0.001 on mini-batches of size 1000. To decrease overfitting, we pad MNIST training images to 30x30 pixels and randomly crop to 28x28.(2)

Model             Test Error   Params of 2nd Kernel
Normal Convnet    0.72%        12,544
Hyper Convnet     0.76%        4,244
Table 7: MNIST classification with hypernetwork-generated weights.

A.3.2 STATIC HYPERNETWORKS FOR RESIDUAL NETWORK ARCHITECTURE AND CIFAR-10

We train both the normal residual network and the hypernetwork version using a 45000 / 5000 / 10000 split for training, validation, and test sets. The 5000 validation samples are randomly chosen and isolated from the original 50000 training samples. We train the entire setup with a mini-batch
size of 128 using Nesterov Momentum SGD for the normal version and Adam for the hypernetwork version, both with a learning rate schedule. We apply L2 regularization of 0.0005 on both the kernel weights and the hypernetwork-generated kernel weights. To decrease overfitting, we apply light data augmentation: we pad training images to 36x36 pixels, randomly crop to 32x32, and perform random horizontal flips.

(2) An IPython notebook demonstrating the MNIST hypernetwork experiment is available at http://blog.otoro.net/2016/09/28/hyper-networks/.
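A small NumPy sketch of the augmentation just described (pad to 36x36, random 32x32 crop, random horizontal flip); the function name and interface are our own assumptions, not taken from the paper's code:

```python
import numpy as np

def augment(img, pad=2, rng=np.random):
    """Pad a 32x32x3 CIFAR image to 36x36, take a random 32x32 crop,
    and apply a random horizontal flip."""
    h, w, _ = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='constant')
    top = rng.randint(0, 2 * pad + 1)
    left = rng.randint(0, 2 * pad + 1)
    crop = padded[top:top + h, left:left + w]
    if rng.rand() < 0.5:
        crop = crop[:, ::-1]      # horizontal flip
    return crop
```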
Table 8: Learning rate schedule for Nesterov Momentum SGD

< step      learning rate
28,000      0.10000
56,000      0.02000
84,000      0.00400
112,000     0.00080
140,000     0.00016

Table 9: Learning rate schedule for hypernetwork / Adam

< step      learning rate
168,000     0.00200
336,000     0.00100
504,000     0.00020
672,000     0.00005

A.3.3 CHARACTER-LEVEL PENN TREEBANK

The hyperparameters of all the experiments were selected through non-extensive grid search on the validation set. Whenever possible, we used learning rates and batch sizes reported in the literature for similar experiments performed in the past. For character-level Penn Treebank, we use mini-batches of size 128 to train on sequences of length 100. We trained the model using Adam (Kingma & Ba, 2015) with a learning rate of 0.001 and gradient clipping of 1.0. During evaluation, we generate the entire sequence, and do not use information about previous test errors for prediction, e.g., dynamic evaluation (Graves, 2013; Rocki, 2016b). As mentioned earlier, we apply dropout to the input and output layers, and also apply recurrent dropout with a keep probability of 90%. For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights. We also experimented with a version of the model using a larger embedding size of 16, and also with a lower dropout keep probability of 85%, and reported results with this "Large Embedding" model in Table 3. Lastly, we stacked two layers of this "Large Embedding" model together to measure the benefits of a multi-layer version of HyperLSTM, with a dropout keep probability of 80%.
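The stepwise schedules in Tables 8 and 9 above amount to a piecewise-constant lookup on the training step; a minimal sketch (our own helper, not from the paper's code):

```python
def learning_rate(step, schedule):
    """schedule: list of (step_threshold, lr) pairs as in Tables 8 and 9.
    Returns the rate of the first threshold the current step falls under."""
    for threshold, lr in schedule:
        if step < threshold:
            return lr
    return schedule[-1][1]

sgd_schedule = [(28000, 0.1), (56000, 0.02), (84000, 0.004),
                (112000, 0.0008), (140000, 0.00016)]
adam_schedule = [(168000, 0.002), (336000, 0.001),
                 (504000, 0.0002), (672000, 0.00005)]
```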
A.3.4 HUTTER PRIZE WIKIPEDIA

As enwik8 is a bigger dataset compared to Penn Treebank, we use 1800 units for our networks. In addition, we perform training on sequences of length 250. Our normal HyperLSTM cell consists of 256 units, and we use an embedding size of 64. Our setup is similar to the previous experiment, using the same mini-batch size, learning rate, weight initialization, gradient clipping parameters and optimizer. We do not use dropout for the input and output layers, but still apply recurrent dropout with a keep probability of 90%. For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights. As in (Chung et al., 2015), we train on the first 90M characters of the dataset, use the next 5M as a validation set for early stopping, and the last 5M characters as the test set. In this experiment, we also experimented with a slightly larger version of HyperLSTM with 2048 hidden units. This version of the model uses 2048 hidden units for the main network, in line with similar models for this experiment in other works. In addition, its HyperLSTM cell consists of 512
units with an embedding size of 64. Given the larger number of nodes in both the main LSTM and the HyperLSTM cell, recurrent dropout is also applied to the HyperLSTM cell of this model, where we use a lower dropout keep probability of 85%, and train on an increased sequence length of 300.

A.3.5 HANDWRITING SEQUENCE GENERATION

We use the same model architecture described in (Graves, 2013) and use a Mixture Density Network layer (Bishop, 1994) to generate a mixture of bivariate Gaussian distributions at each time step to model the pen location. We normalize the data and use the same train/validation split as per (Graves, 2013) in this experiment. We remove samples shorter than length 300, as we found these samples contain a lot of recording errors and noise. After this pre-processing, as the dataset is small, we introduce data augmentation: a random scaling factor chosen uniformly from +/- 10% is applied to the samples used for training.

One concern we want to address is the lack of a test set in the data split methodology devised in (Graves, 2013). In this task, qualitative assessment of generated handwriting samples is arguably just as important as the quantitative log likelihood score of the results. Due to the small size of the dataset, we want to use as large a portion of the dataset as possible to train our models in order to generate better quality handwriting samples, so that we can also judge our models qualitatively in addition to examining the log-loss numbers. For this task we therefore use the same training / validation split as (Graves, 2013), with the caveat that we may be somewhat overfitting to the validation set in the quantitative results. In future work, we will explore using larger datasets to conduct a more rigorous quantitative analysis.

For model training, we apply recurrent dropout and also dropout to the output layer with a keep probability of 0.95. The model is trained on mini-batches of size 32 containing sequences of variable length. We trained the model using Adam (Kingma & Ba, 2015) with a learning rate of 0.0001 and gradient clipping of 5.0. Our HyperLSTM cell consists of 128 units and a signal size of 4.
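As a rough sketch of the Mixture Density Network output layer mentioned at the start of this section, the raw network output at each time step can be split into mixture weights, means, standard deviations, a correlation, and a pen-up probability. The splitting convention and names below are our assumptions; the actual parameterization follows (Graves, 2013; Bishop, 1994).

```python
import numpy as np

def mdn_split(params, k):
    """Split a raw output vector of size 6*k + 1 into the parameters of a
    mixture of k bivariate Gaussians plus a Bernoulli pen-up probability."""
    pi_hat, mu1, mu2, s1_hat, s2_hat, rho_hat, e_hat = np.split(
        params, [k, 2 * k, 3 * k, 4 * k, 5 * k, 6 * k])
    pi = np.exp(pi_hat - pi_hat.max())
    pi /= pi.sum()                                    # mixture weights (softmax)
    sigma1, sigma2 = np.exp(s1_hat), np.exp(s2_hat)   # positive std devs
    rho = np.tanh(rho_hat)                            # correlation in (-1, 1)
    pen_up = 1.0 / (1.0 + np.exp(-e_hat[0]))          # Bernoulli parameter
    return pi, mu1, mu2, sigma1, sigma2, rho, pen_up
```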
For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.

A.3.6 NEURAL MACHINE TRANSLATION

Our experimental procedure follows the procedure outlined in Sections 8.1 to 8.4 of the GNMT paper (Wu et al., 2016). We only performed experiments with a single model and did not conduct experiments with Reinforcement Learning or Model Ensembles as described in Sections 8.5 and 8.6 of the GNMT paper. The GNMT paper outlines several methods for the training procedure, and investigated several approaches including combining Adam and SGD optimization methods, in addition to weight quantization schemes. In our experiment, we used only the Adam (Kingma & Ba, 2015) optimizer with the same hyperparameters described in the GNMT paper. We did not employ any quantization schemes. We replaced the LSTM cells in the GNMT WPM-32K architecture with LayerNorm HyperLSTM cells with the same number of hidden units. In this experiment, our HyperLSTM cell consists of 128 units with an embedding size of 32.
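Schematically, the modification described above is a drop-in cell swap: the encoder and decoder stacks are built from a cell factory, and only the factory changes. The class names below (LSTMCell, LayerNormHyperLSTMCell) and the layer count are purely illustrative assumptions, not the GNMT codebase.

```python
# A schematic sketch, not the GNMT implementation: both cell classes are
# assumed to expose the same step interface, so only the factory changes.
def make_rnn_stack(cell_factory, num_layers, num_units):
    return [cell_factory(num_units) for _ in range(num_layers)]

# baseline_stack = make_rnn_stack(LSTMCell, num_layers=8, num_units=1024)
# hyper_stack = make_rnn_stack(
#     lambda n: LayerNormHyperLSTMCell(n, hyper_units=128, embedding_size=32),
#     num_layers=8, num_units=1024)
```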
22 A.4_ EXAMPLES OF GENERATED WIKIPEDIA TEXT The eastern half of Russia varies from Modern to Central Europe. Due to similar lighting and the extent of the combination of long tributaries to the [[Gulf of Boston]], it is more of a private warehouse than the [[Austro-Hungarian Orthodox Christian and Soviet Union]]. ==Demographic data base== # controversial # â â Austrian # Spellingâ â ]] [[Image:Auschwitz map.png|frame|The [[Image:Czech Middle East SSR chief state 103.JPG|thumb|Serbian Russia movement]] [[1593]]&amp;ndash;[[1719]], and set up a law of [[ parliamentary sovereignty]] and unity in Eastern churches. In medieval Roman Catholicism Tuba and Spanish controlled it until the reign of Burgundian kings and resulted in many changes in multiculturalism, though the [[Crusades]], usually started following the [[Treaty of Portugal]], shored the title of three major powers only a strong part. [[French Marines]] (prompting a huge change in [[President of the Council of the Empire]], only after about [[1793]], the Protestant church, fled to the perspective of his heroic declaration of government and, in the next fifty years, [[Christianity|Christian]] and [[Jutland]].
Books combined into a well-published work by a single R. (Sch. M. ellipse poem) tradition in St Peter also included 7:1, he dwell upon the apostle, scripture and the latter of Luke; totally unknown, a distinct class of religious congregations that describes in number of [[remor]]an traditions such as the [[Germanic tribes]] (Fridericus or Lichteusen and the Wales). Be introduced back to the [[14th century]], as related in the [[New Testament]] and in its elegant [[ Anglo-Saxon Chronicle]], although they branch off the characteristic traditions which Saint [[Philip of Macedon]] asserted. Ae also in his native countries. In [[1692]], Seymour was barged at poverty of young English children, which cost almost the preparation of the marriage to him. Burkeâ s work was a good step for his writing, which was stopped by clergy in the Pacific, where he had both refused and received a position of successor to the throne. Like the other councillors in his will, the elder Reinhold was not in the Duke, and he was virtually non-father of Edward I, in order to recognize [[Henry II of England|Queen Enrie
]] # of # Parliament. The Melchizedek Minister Qut]] signed the [[Soviet Union]], and forced Hoover to provide [[Hoover (disambiguation) |hoover]]s in [[1844]], [[1841]]. His work on social linguistic relations is divided to the several times of polity for educatinnisley is 760 Li Italians. After Zaitiâ s death , and he was captured August 3, he witnessed a choice better by public, character, repetitious, punt, and future.
Figure 14: enwik8 sample generated from 2048-unit Layer Norm HyperLSTM 23 == Quatitis== :/â Main article: [[sexagesimal]]â â Sexual intimacy was traditionally performed by a male race of the [[ mitochondria]] of living things. The next geneme is used by â â Clitoronâ â into short forms of [[sexual reproduction]]. When a maternal suffeach-Lashe]] to the myriad of a &quot;masterâ s character &quot;. He recognizes the associated reflection of [[force call carriers]], the [[Battle of Pois except fragile house and by historians who have at first incorporated his father. ==Geography== The island and county top of Guernsey consistently has about a third of its land, centred on the coast subtained by mountain peels with mountains, squares, and lakes that cease to be links with the size and depth of sea level and weave in so close to lowlands. Strategically to the border of the country also at the southeast corner of the province of Denmark do not apply, but sometimes west of dense climates of coastal Austria and west Canada, the Flemish area of the continent actually inhabits [[tropical geographical transition ]] and transitions from [[soil]] to [[snow]] residents.]]
==Definition== The symbols are â â quotationalâ â and â â â distinctâ â â or advanced. {{ref| no_1}} Older readings are used for [[phrase]]s, especially, [[ancient Greek]], and [[Latin]] in their development process. Several varieties of permanent systems typically refer to [[primordial pleasure]] (for example, [[Pleistocene]], [[Classical antenni|Ctrum ]]), but its claim is that it holds the size of the coci, but is historically important both for import: brewing and commercial use. majority of cuisine specifically refers to this period, where the southern countries developed in the 19th century. Scotland had a cultural identity of or now a key church who worked between the 8th and 60th through 6 (so that there are small single authors of detailed recommendations for them and at first) rather than
# A , # [[Adoptionism|adoptionists]] # often started # inscribed # with appearing the words distinct from two types. On the group definition the adjective fightingâ â is until Crown Violence Association]], in which the higher education [[motto]] (despite the resulting attack on [[medical treatment]]) peaked on [[15 December]], [[2005]]. At 30 percent, up to 50% of the electric music from the period was created by Voltaire, but Newton promoted the history of his life.
'â Publications in the Greek movie â â [[The Great Theory of Bertrand Russell J]â â , also kept an important part into the inclusion of â â [[The Beast for the Passage of Study]]â â , began in [[1869]], opposite the existence of racial matters. Many of Maryâ s religious faiths ( including the [[Mary Sue Literature]] in the United States) incorporated much of Christianity within Hispanic [[Sacred text]]s. But controversial belief must be traced back to the 1950s stated that their anticolonial forces required the challenge of even lingering wars tossing nomon before leaves the bomb in paint on the South Island, known as [[Quay]], facing [[Britain]], though he still holds to his ancestors a strong ancestor of Orthodoxy. Others explain that the process of reverence occurred from [[Common Hermitage]], when the [[Crusade|Speakers]] laid his lifespan in [[Islam]] into the north of Israel. At the end of the [[14th century BCE]], the citadel of [[ Israel]] set Eisenace itself in the [[Abyssinia]]n islands, which was Faroeâ s Dominican Republic claimed by the King.
Figure 15: enwik8 sample generated from 2048-unit Layer Norm HyperLSTM

A.5 EXAMPLES OF RANDOMLY CHOSEN GENERATED HANDWRITING SAMPLES
Figure 16: Handwriting samples generated from LSTM

Figure 17: Handwriting samples generated from Layer Norm LSTM
Figure 18: Handwriting samples generated from HyperLSTM
A.6 EXAMPLES OF RANDOMLY CHOSEN MACHINE TRANSLATION SAMPLES

We randomly selected translation samples generated from both the LSTM baseline and HyperLSTM models from the WMT'14 En→Fr test set. Given an English phrase, we can compare the correct French translation, the LSTM translation, and the HyperLSTM translation.

English Input: I was expecting to see gnashing of teeth and a fight breaking out at the gate
French (Ground Truth): Je m'attendais a voir des grincements de dents et une bagarre éclater a la porte
LSTM Translation: Je m'attendais a voir des larmes de dents et un combat a la porte
HyperLSTM Translation: Je m'attendais a voir des dents grincer des dents et une bataille éclater a la porte
English Input French (Ground Truth) LSTM Translation HyperLSTM Translation English Input Prosecuting , Anne Whyte said : " If anyone should know not to the break the law , it is a criminal solicitor . " French (Ground Truth) Le procureur Anne Whyte a déclaré : « Si quelquâ savoir quâ il ne faut pas violer la loi , câ est avocat pénaliste . » LSTM Translation Prosecuting , Anne Whyte a dit : « Si quelquâ un doit savoir quâ il ne faut pas enfreindre la loi , câ est un solicitor criminel
HyperLSTM Translation En poursuivant , Anne Whyte a dit : « Si quelquâ un doit savoir ne pas enfreindre la loi , câ est un avocat criminel # English Input According to her , the CSRS was invited to a mediation and she asked for an additional period for consideration French (Ground Truth) Selon elle , la CSRS a été invitée a une médiation et elle a demandé un délai supplémentaire pour y réfléchir LSTM Translation Selon elle , le SCRS a été invité a une médiation et elle a demandé un délai supplémentaire HyperLSTM Translation Selon elle , le SCRS a été invité a une médiation et elle a demandé une période de réflexion supplémentaire
28 # English Input Relations between the US and Germany have come under strain following claims that the NSA bugged Chancellor Angela â s Merkel â â s phone French (Ground Truth) Les relations entre les Etats-Unis et 1â Allemagne ont été mises a rude épreuve a la suite de plaintes selon lesquelles la NSA avait mis sur écoute le téléphone portable de la chanceliére allemande Angela Merkel
# LSTM Translation Les relations entre les Etats-Unis et 1â Allemagne ont été mises a rude épreuve suite aux affirmations selon lesquelles la NSA aurait pris le téléphone de Merkel de la chanceliére Angela HyperLSTM Translation Les relations entre les Etats-Unis et 1â Allemagne ont été mises a rude épreuve aprés que la NSA a attaqué le téléphone de la chanceliére Angela Angela
# English Input Germany â s BfV advises executives to consider using simple prepaid mobiles when on foreign trips because of the risk that smart phones are compromised French (Ground Truth) Le BfV dâ Allemagne conseille a ses dirigeants dâ envisager dâ utiliser de simples téléphones portables prépayés lors de leurs voyages a 1â étranger en raison du risque dâ atteinte a 1â intégrité des smartphones
LSTM Translation Le BfV allemand conseille aux dirigeants dâ envisager 1â utilisation de mobiles prépayés simples lors de voyages a 1â étranger en raison du risque de compromission des téléphones intelligents HyperLSTM Translation Le BfV allemand conseille aux dirigeants dâ envisager 1â utilisation de téléphones mobiles prépayés simples lors de voyages a 1â étranger en raison du risque que les téléphones intelligents soient compromis English Input I was on the mid-evening news that same evening , and on TV the following day as well French (Ground Truth) Le soir-méme , je suis au 20h , le lendemain aussi je suis a la télé LSTM Translation Jâ @étais au milieu de 1â actualité le soir méme , et a la télévision le lendemain également HyperLSTM Translation Jâ étais au milieu de la soirée ce soir-la et a la télévision le lendemain
# YouTube-8M: A Large-Scale Video Classification Benchmark

Sami Abu-El-Haija ([email protected]), Nisarg Kothari ([email protected]), Joonseok Lee ([email protected]), Paul Natsev ([email protected]), George Toderici ([email protected]), Balakrishnan Varadarajan ([email protected]), Sudheendra Vijayanarasimhan ([email protected])

Google Research

ABSTRACT

Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video) annotated with a vocabulary of 4800 visual entities. To get the videos and their (multiple) labels, we used a YouTube video annotation system, which labels videos with the main topics in them. While the labels are machine-generated, they have high precision and are derived from a variety of human-based signals including metadata and query click signals, so they represent an excellent target for content-based annotation approaches.
We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one frame per second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and made both the features and video-level labels available for download. The dataset contains frame-level features for over 1.9 billion video frames and 8 million videos, making it the largest public multi-label video dataset.
Figure 1: YouTube-8M is a large-scale benchmark for general multi-label video classification. This screenshot of a dataset explorer depicts a subset of videos in the dataset annotated with the entity "Guitar". The dataset explorer allows browsing and searching of the full vocabulary of Knowledge Graph entities, grouped in 24 top-level verticals, along with corresponding videos.

We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using the publicly-available TensorFlow framework. We plan to release code for training a basic TensorFlow model and for computing metrics.
like Sports-1M and ActivityNet. We achieve state-of-the-art on ActivityNet, improving mAP from 53.8% to 77.6%. We hope that the unprecedented scale and diversity of YouTube-8M will lead to advances in video understanding and representation learning.

# 1. INTRODUCTION

Large-scale datasets such as ImageNet [6] have been key enablers of recent progress in image understanding [20, 14, 11]. By supporting the learning process of deep networks with millions of parameters, such datasets have played a crucial role for the rapid progress of image understanding to near-human level accuracy [30]. Furthermore, intermediate layer activations of such networks have proven to be powerful and interpretable for various tasks beyond classification [41, 9, 31]. In a similar vein, the amount and size of video benchmarks is growing with the availability of Sports-1M [19] for sports videos and ActivityNet [12] for human activities. However, unlike ImageNet, which contains a diverse and general set of objects/entities, existing video benchmarks are restricted to action and sports classes.

In this paper, we introduce YouTube-8M¹, a large-scale benchmark dataset for general multi-label video classification.
We treat the task of video classification as that of producing labels that are relevant to a video given its frames. Therefore, unlike Sports-1M and ActivityNet, YouTube-8M is not restricted to action classes alone. For example, Figure 1 shows random video examples for the Guitar entity.
We first construct a visual annotation vocabulary from Knowledge Graph entities that appear as topic annotations for YouTube videos based on the YouTube video annotation system [2]. To ensure that our vocabulary consists of entities that are recognizable visually, we use various filtering criteria, including human raters. The entities in the dataset span activities (sports, games, hobbies), objects (autos, food, products), scenes (travel), and events. The entities were selected using a combination of their popularity on YouTube and manual ratings of their visualness according to human raters. They are an attempt to describe the central themes of videos using a few succinct labels. We then collect a sample set of videos for each entity, and use a publicly available state-of-the-art Inception network [4] to extract features from them.

¹ http://research.google.com/youtube8m

Figure 2: The progression of datasets for image and video understanding tasks. Large datasets have played a key role for advances in both areas.
Specifically, we decode videos at one frame per second and extract the last hidden representation before the classification layer for each frame. We compress the frame-level features and make them available on our website for download. Overall, YouTube-8M contains more than 8 million videos (over 500,000 hours of video) from 4,800 classes. Figure 2 illustrates the scale of YouTube-8M, compared to existing image and video datasets. We hope that the unprecedented scale and diversity of this dataset will be a useful resource for developing advanced video understanding and representation learning techniques.

Towards this end, we provide extensive experiments comparing several state-of-the-art techniques for video representation learning, including Deep Networks [26] and LSTMs (Long Short-Term Memory networks) [13], on this dataset. In addition, we show that transferring video feature representations learned on this dataset leads to significant improvements on other benchmarks such as Sports-1M and ActivityNet.
In the rest of the paper, we first review existing benchmarks for image and video classification in Section 2. We present the details of our dataset, including the collection process and a brief analysis of the categories and videos, in Section 3. In Section 4, we review several approaches for the task of multi-label video classification given fixed frame-level features, and evaluate the approaches on the dataset. In Section 5, we show that features and models learned on our large-scale dataset generalize very well on other benchmarks. We offer concluding remarks in Section 6.

# 2. RELATED WORK

Image benchmarks have played a significant role in advancing computer vision algorithms for image understanding. Starting from a number of well-labeled small-scale datasets such as Caltech 101/256 [8, 10], MSRC [32], and PASCAL [7], image understanding research has rapidly advanced to utilizing larger datasets such as ImageNet [6] and SUN [38] for the next generation of vision algorithms. ImageNet in particular has enabled the development of deep feature learning techniques with millions of parameters, such as the AlexNet [20] and Inception [14] architectures, due to the number of classes (21841), the diversity of the classes (27 top-level categories), and the millions of labeled images available.

A similar effort is in progress in the video understanding domain, where the community has quickly progressed from small, well-labeled datasets such as KTH [22], Hollywood 2 [23], and Weizmann [5], with a few thousand video clips, to medium-scale datasets such as UCF101 [33], Thumos'14 [16], and HMDB51 [21], with more than 50 action categories. Currently, the largest available video benchmarks are Sports-1M [19], with 487 sports-related activities and 1M videos, YFCC-100M [34], with 800K videos and raw metadata (titles, descriptions, tags) for some of them, the FCVID [17] dataset of 91,223 videos manually annotated with 239 categories, and ActivityNet [12], with ~200 human activity classes and a few thousand videos. However, almost all current video benchmarks are restricted to recognizing action and activity categories, and have less than 500 categories.
YouTube-8M fills the gap in video benchmarks as follows:

• A large-scale video annotation and representation learning benchmark, reflecting the main themes of a video.
• A significant jump in the number and diversity of annotation classes: 4800 Knowledge Graph entities vs. less than 500 categories for all other datasets.
• A substantial increase in the number of labeled videos: over 8 million videos, more than 500,000 hours of video.
• Availability of pre-computed state-of-the-art features for 1.9 billion video frames.

We hope the pre-computed features will remove computational barriers, level the playing field, and enable researchers to explore new technologies in the video domain at an unprecedented scale.

# 3. YOUTUBE-8M DATASET

YouTube-8M is a benchmark dataset for video understanding, where the main task is to determine the key topical themes of a video. We start with YouTube videos since they are a good (albeit noisy) source of knowledge for diverse categories including various sports, activities, animals, foods, products, tourist attractions, games, and many more. We use the YouTube video annotation system [2] to obtain topic annotations for a video, and to retrieve videos for a given topic. The annotations are provided in the form of Knowledge Graph entities [3] (formerly, Freebase topics [1]). They are associated with each video based on the video's metadata, context, and content signals [2].

We use Knowledge Graph entities to succinctly describe the main themes of a video. For example, a video of biking on dirt roads and cliffs would have a central topic/theme of Mountain Biking, not Dirt, Road, Person, Sky, and so on. Therefore, the aim of the dataset is not only to understand what is present in each frame of the video, but also to identify the few key topics that best describe what the video is about. Note that this is different than typical event or scene recognition tasks, where each item belongs to a single event or scene [38, 28]. It is also different than most object recognition tasks, where the goal is to label everything visible in an image. This would produce thousands of labels on each video but without answering what the video is really about.
The goal of this benchmark is to understand what is in the video and to summarize that into a few key topics. In the following sub-sections, we describe our vocabulary and video selection scheme, followed by a brief summary of dataset statistics.
Figure 3: A tag-cloud representation of the top 200 entities. Font size is proportional to the number of videos labeled with the entity.

Top-level Category: Arts & Entertainment; Autos & Vehicles; Beauty & Fitness; Books & Literature; Business & Industrial; Computers & Electronics; Finance; Food & Drink; Games; Health; Hobbies & Leisure; Home & Garden; Internet & Telecom; Jobs & Education; Law & Government; News; People & Society; Pets & Animals; Real Estate; Reference; Science; Shopping; Sports; Travel; Full vocabulary
1st Entity: Concert; Vehicle; Fashion; Book; Train; Personal computer; Video game console; Money; Food; Video game; Medicine; Fishing; Gardening; Mobile phone; School; Tank; Weather; Prayer; Animal; House; Vampire; Nature; Toy; Motorsport; Amusement park; Vehicle
2nd Entity: Animation; Car; Hair; Harry Potter; Model aircraft; Bank; Cooking; Minecraft; Raw food; Outdoor recreation; Home improvement; Smartphone; University; Firefighter; Snow; Family; Dog; Apartment; Bus; Robot; LEGO; Football; Hotel; Concert
3rd Entity: Music video; Motorcycle; Cosmetics; The Bible; Fish; iPhone; Foreign Exchange; Recipe; Action-adventure game; Ear; Radio-controlled model; Wedding; Kitchen; House; Website; Telephone; Teacher; High school; Soldier; President of the U.S.A.; News broadcasting; Rain; Human; Play-Doh; Cat; Horse; Dormitory; Condominium; City; River; Ice; Eye; Doll; Sledding; Cycling; Winter sport; Beach; Airport; Music video; Animation
4th Entity: Dance; Bicycle; Weight training; Writing; Water; PlayStation 3; Euro; Cake; Strategy video game; Glasses
5th Entity: Guitar; Aircraft; Hairstyle; Magazine; Tractor pulling; Tablet computer; United States Dollar; Chocolate; Sports game; Injury; Christmas; Garden; Sony Xperia; Kindergarten; President; Newspaper; Dragon; Bird; Mansion; Mermaid; Biology; Shoe; Basketball; Roller coaster; Video game
6th Entity: Disc jockey; Truck; Nail; Alice; Advertising; Xbox 360; Credit card; Egg; Call of Duty; Dietary supplement; Dental braces; Hunting; Door; Google Nexus; Campus; Police officer; Mattel; Angel; Aquarium; Skyscraper; Village; Skin; My Little Pony; Gymnastics; Lake; Motorsport
7th Entity: Trailer; Boat; Mascara; E-book; Landing; Microsoft Windows; Cash; Eating; Grand Theft Auto V; Diving; Swimming pool; World Wide Web; Classroom; Fighter aircraft; Hail; Tarot; Puppy; Loft; Samurai; Light; Nike, Inc.; Wrestling; Resort; Football
Table 1: Most frequent entities for each of the top-level categories.

# 3.1 Vocabulary Construction

We followed two main tenets when designing the vocabulary for the dataset; namely 1) every label in the dataset should be distinguishable using visual information alone, and 2) each label should have a sufficient number of videos for training models and for computing reliable metrics on the test set. For the former, we used a combination of manually curated topics and human ratings to prune the vocabulary into a visual set. For the latter, we considered only entities having at least 200 videos in the dataset.

The Knowledge Graph contains millions of topics. Each topic has one or more types, which are curated with high precision. For example, there is an exhaustive list of animals with type animal and an exhaustive list of foods with type food. To start with our initial vocabulary, we manually selected a whitelist of 25 entity types that we considered visual (e.g. sport, tourist_attraction, inventions), and also blacklisted types that we thought are non-visual (e.g. music artists, music compositions, album, software). We then obtained all entities that have at least one whitelisted type and no blacklisted types, which resulted in an initial vocabulary of ~50,000 entities. Following this, we used human raters to manually prune this set into a smaller set of entities that are considered visual with high confidence, and are also recognizable without very deep domain expertise. Raters were provided with instructions and examples. Each entity was rated by 3 raters and the ratings were averaged. Figure 4a shows the main rating question. The process resulted in a total of ~10,000 entities that are considered visually recognizable and are not too fine-grained (i.e. can be recognized by non-domain experts after studying some examples). These entities were further pruned: we only kept entities that have more than 200 popular videos, as explained in the next section. The final set of entities in the dataset is fairly balanced in terms of the specificity of the topic they describe, and spans both coarse-grained and fine-grained entities, as shown in Figure 4b.
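To make the two pruning criteria concrete, the following is a minimal sketch of the rater-score and popularity filters described above. It is not the authors' pipeline; the field names and thresholds-as-parameters are hypothetical stand-ins for the process described in this section.

```python
# Minimal sketch of the vocabulary pruning described above (assumed data layout,
# not the original pipeline). Each candidate entity carries the average of its
# three rater scores (1 = easily recognizable ... 5 = non-visual) and a count of
# popular videos annotated with it.

def prune_vocabulary(entities, max_avg_score=2.5, min_videos=200):
    """Keep entities that raters judged visual and that have enough videos."""
    kept = []
    for ent in entities:
        if ent["avg_rater_score"] <= max_avg_score and ent["num_videos"] > min_videos:
            kept.append(ent["name"])
    return kept

candidates = [
    {"name": "Guitar", "avg_rater_score": 1.3, "num_videos": 120000},
    {"name": "Philosophy", "avg_rater_score": 4.7, "num_videos": 9000},
    {"name": "Thunderstorm", "avg_rater_score": 1.7, "num_videos": 150},
]
print(prune_vocabulary(candidates))  # -> ['Guitar']
```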
The ï¬ nal set of entities in the dataset are fairly balanced in terms of the speci- ï¬ city of the topic they describe, and span both coarse-grained and ï¬ ne-grained entities, as shown in Figure 4b. # 3.2 Collecting Videos Having established the initial target vocabulary, we followed these Entity Name Entity URL Entity Description A thunderstorm, also known as an electrical storm, a lightning storm, or @ thundershower, Is a type of storm characterized by the presence of lightning and its acoustie effect on the Earthâ s atmosphere known as thunder. The Thunderstorm http://www fr rym j021 ma Ter meteorologically assigned cloud type associated with the thunderstorm is the cumulonimbus. Thunderstorms are usually accompanied by strong winds, heavy rain and sometimes snow, sleet, hall, or no precipitation at al How difficult is it to identify this entity in images or videos (without audio, titles, comments, etc)? 1.
Answer options: 1. Any layperson could; 2. Any layperson after studying examples, wikipedia, etc. could; 3. Experts in some field can; 4. Not possible without non-visual knowledge; 5. Non-visual.
[Figure 4b axes: coarse-grained / medium-grained / fine-grained vs. number of entities.]

(a) Screenshot of the question displayed to human raters. (b) Distribution of vocabulary topics in terms of specificity.

Figure 4: Rater guidelines to assess how specific and visually recognizable each entity is, on a discrete scale of 1 to 5, where 1 is most visual and easily recognizable by a layperson. Each entity was rated by 3 raters. We kept only entities with a maximum average score of 2.5, and categorized them by specificity into coarse-grained, medium-grained, and fine-grained entities, using equally sized score range buckets.

# 3.2 Collecting Videos

Having established the initial target vocabulary, we followed these
steps to obtain the videos:

• Collected all videos corresponding to the 10,000 visual entities that have at least 1,000 views, using the YouTube video annotation system [2]. We excluded videos that are too short (< 120 secs) or too long (> 500 secs).
• Randomly sampled 10 million videos among them.
• Obtained all entities for the sampled 10 million videos using the YouTube video annotation system. This completes the annotations.
⠢ Filtered out entities with less than 200 videos, and videos with no remaining entities. This reduced the size of our data to 8, 264, 650 videos. ⠢ Split our videos into 3 partitions, Train : Validate : Test, with ratios 70% : 20% : 10%. We publish features for all splits, but only publish labels for the Train and Validate partitions. Table 2: Dataset partition sizes. 10° Pa TT nae 2 10° : > 3 io 8 E 10? | 2 10° 10? 10? 10° 10° Entity ID # 3.3 Features
# 3.3 Features

The original size of the video dataset is hundreds of Terabytes, and covers over 500,000 hours of video. This is impractical to process by most research teams (using a real-time video processing engine, it would take over 50 years to go through the data). Therefore, we pre-process the videos and extract frame-level features using a state-of-the-art deep model: the publicly available Inception network [4] trained on ImageNet [14]. Concretely, we decode each video at 1 frame per second up to the first 360 seconds (6 minutes), feed the decoded frames into the Inception network, and fetch the ReLU activation of the last hidden layer, before the classification layer (layer name pool_3/_reshape). The feature vector is 2048-dimensional per second of video. While this removes motion information from the videos, recent work shows diminishing returns from motion features as the size and diversity of the video data increases [26, 35]. The static frame-level features provide an excellent baseline, and constructing compact and efficient motion features is beyond the scope of this paper. Nonetheless, we hope to extend the dataset with audio and motion features in the future.

We cap processing of each video at the first 360 seconds for storage and computational reasons. For comparison, the average length of videos in UCF-101 is 10-15 seconds, in Sports-1M it is 336 seconds, and in this dataset it is 230 seconds.
Figure 5: Number of videos in log-scale versus entity rank in log scale. Entities were sorted by number of videos. We note that this somewhat follows the natural Zipf distribution.

Afterwards, we apply PCA (+ whitening) to reduce feature dimensions to 1024, followed by quantization (1 byte per coefficient). These two compression techniques reduce the size of the data by a factor of 8. The mean vector and covariance matrix for PCA were computed on all frames from the Train partition. We quantize each 32-bit float into 256 distinct values (8 bits) using optimally computed (non-uniform) quantization bin boundaries. We confirmed that the size reduction does not significantly hurt the evaluation metrics. In fact, training all baselines on the full-size data (8 times larger than what we publish) increases all evaluation metrics by less than 1%.

Note that while this dataset comes with standard frame-level features, it leaves a lot of room for investigating video representation learning approaches on top of the fixed frame-level features (see Section 4 for approaches we explored).
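As an illustration of the compression step, here is a minimal numpy sketch. It is not the released pipeline: it uses uniform quantization bins instead of the optimally computed non-uniform boundaries, and random data stands in for the real 2048-D Inception frame features.

```python
import numpy as np

# Sketch of the feature compression described above: PCA + whitening to 1024
# dimensions, then 8-bit quantization (1 byte per coefficient). The released
# features use non-uniform, optimally computed bin boundaries; uniform bins
# are used here for simplicity.

def fit_pca(train_frames, out_dim=1024, eps=1e-6):
    mean = train_frames.mean(axis=0)
    centered = train_frames - mean
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:out_dim]
    components = eigvecs[:, order]
    scales = 1.0 / np.sqrt(eigvals[order] + eps)   # whitening
    return mean, components, scales

def compress(frames, mean, components, scales, lo=-4.0, hi=4.0):
    whitened = (frames - mean) @ components * scales
    # Map each 32-bit float into one of 256 levels (1 byte per coefficient).
    clipped = np.clip(whitened, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

# Example with random data standing in for 2048-D Inception frame features.
train = np.random.randn(5000, 2048).astype(np.float32)
mean, comps, scales = fit_pca(train, out_dim=1024)
codes = compress(train[:10], mean, comps, scales)   # shape (10, 1024), dtype uint8
```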
(a) Number of entities in each top-level category. (b) Number of train videos in log-scale per top-level category.

Figure 6: Top-level category statistics of the YouTube-8M dataset.

# 3.4 Dataset Statistics

The YouTube-8M dataset contains 4,800 classes and a total of 8,264,650 videos. A video may be annotated with more than one class and the average number of classes per video is 1.8. Table 2 shows the number of videos for which we are releasing features, across the three datasets.
We processed only the first six minutes of each video, at 1 frame per second. The average length of a video in the dataset is 229.6 seconds, which amounts to ~1.9 billion frames (and corresponding features) across the dataset.

We grouped the 4,800 entities into 24 top-level categories to measure statistics and illustrate diversity. Although we do not use these categories during training, we are releasing the entity-to-category mapping for completeness. Table 1 shows the top entities per category. Note that while some categories themselves may not seem visual, most of the entities within them are visual. For instance, Jobs & Education includes universities, classrooms, lectures, etc., and Law & Government includes police, emergency vehicles, and military-related entities, which are well represented and visual. Figure 5 shows a log-log scale distribution of entities and videos. Figures 6a and 6b show the size of categories, respectively, in terms of the number of entities and the number of videos.
# 3.5 Human Rated Test Set

The annotations from the YouTube video annotation system can be noisy and incomplete, as they are automatically generated from metadata, anchor text, comments, and user engagement signals [2]. To quantify the noise, we uniformly sampled over 8000 videos from the Test partition, and used 3 human raters per video to exhaustively rate their labels. We measured the precision and recall of the ground truth labels to be 78.8% and 14.5%, respectively, with respect to the human raters. Note that typical inter-rater agreement on similar annotation tasks with human raters is also around 80%, so the precision of these ground truth labels is perhaps comparable to (non-expert) human-provided labels. The recall, however, is low, which makes this an excellent test bed for approaches that deal with missing data.

We report the accuracy of our models primarily on the (noisy) Validate partition but also show some results on the much smaller human-rated set, showing that some of the metrics are surprisingly similar on the two datasets. While the baselines in Section 4 show very promising results, we believe that they can be significantly improved (when evaluated on the human-based ground truth) if one explicitly models incorrect [29] (78.8% precision) or missing [40, 25] (14.5% recall) training labels. We believe this is an exciting area of research that this dataset will enable at scale.

# 4. BASELINE APPROACHES

# 4.1 Models from Frame Features

One of the challenges with this dataset is that we only have video-level ground-truth labels. We do not have any additional information that specifies how the labels are localized within the video, nor their relative prominence in the video, yet we want to infer their importance for the full video. In this section, we consider models trained to predict the main themes of the video using the input frame-level features. Frame-level models have shown competitive performance for video-level tasks in previous work [19, 26]. A video $v$ is given by a sequence of frame-level features $x^v_{1:F_v}$, where $x^v_j$ is the feature of the $j$th frame from video $v$.

# 4.1.1 Frame-Level Models and Average Pooling

Since we do not have frame-level ground-truth, we assign the video-level ground-truth to every frame within that video. More sophisticated formulations based on multiple-instance learning are left for future work. From each video, we sample 20 random frames and associate all frames to the video-level ground-truth. This results in about 120 million frames.
For each entity $e$, we get 120M instances of $(x_i, y^e_i)$ pairs, where $x_i \in \mathbb{R}^{1024}$ is the Inception feature and $y^e_i \in \{0, 1\}$ is the ground-truth associated with entity $e$ for the $i$th example. We train 4800 independent one-vs-all classifiers for each entity $e$. We use the online training framework after parallelizing the work for each entity across multiple workers. During inference, we score every frame in the test video using the models for all classes. Since all our evaluations are based on video-level ground truths, we need to aggregate the frame-level scores (for each entity) to a single video-level score. The frame-level probabilities are aggregated to the video level using a simple average. We choose average instead of max pooling since we want to reduce the effect of outlier detections and capture the prominence of each entity in the entire video. In other words, let $p(e|x)$ be the probability of existence of $e$ given the features $x$. We compute the probability $p_v(e|x^v_{1:F_v})$ of the entity $e$ associated with the video $v$ as

$$p_v(e \mid x^v_{1:F_v}) = \frac{1}{F_v} \sum_{j=1}^{F_v} p(e \mid x^v_j). \qquad (1)$$

Figure 7: The network architecture of the DBoF approach. Input frame features are first fed into an up-projection layer with shared parameters for all frames. This is followed by a pooling layer that converts the frame-level sparse codes into a video-level representation. A few hidden layers and a classification layer provide the final video-level predictions.
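To make the aggregation of Eq. (1) concrete, here is a minimal numpy sketch; the scores are hypothetical stand-ins, not outputs of the trained one-vs-all classifiers.

```python
import numpy as np

# Sketch of Eq. (1): average the per-frame, per-entity probabilities of the
# 4800 one-vs-all classifiers to obtain a single video-level score per entity.

def video_level_scores(frame_probs):
    """frame_probs: array of shape (num_frames, num_entities) with p(e|x_j)."""
    return frame_probs.mean(axis=0)           # shape (num_entities,)

# Example: 230 frames scored by 4800 classifiers (random stand-in values).
frame_probs = np.random.rand(230, 4800)
p_video = video_level_scores(frame_probs)
top5 = np.argsort(p_video)[::-1][:5]          # highest-scoring entity ids
```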
Typically, with M > N , the input features are projected onto a higher dimensional space. Crucially, the parameters of the fully connected layer are shared across the k input frames. Along with the RELU activation, this leads to a sparse coding of the input features in the M -dimensional space. The obtained sparse codes are fed into a pooling layer that aggre- gates the codes of the k frames into a single ï¬ xed-length video rep- resentation. We use max pooling to perform the aggregation. We use a batch normalization layer before pooling to improve stabil- ity and speed-up convergence.
The obtained ï¬ xed length descriptor of the video can now be classiï¬ ed into the output classes using a Logistic or Softmax layer with additional fully connected layers in between. The M -dimensions of the projection layer could be thought of as M discriminative clusters which can be trained in a single network end to end using backpropagation. The entire network is trained using Stocastic Gradient Descent (SGD) with logistic loss for a logistic layer and cross-entropy loss for a softmax layer. The backpropagated gradients from the top layer train the weight vectors of the projection layer in a discrimina- tive fashion in order to provide a powerful representation of the in- put bag of features. A similar network was proposed in [26] where the convolutional layer outputs are pooled across all the frames of a video to obtain a ï¬
xed length descriptor. However, the net- work in [26] does not use an intermediate projection layer which we found to be a crucial difference when learning from input frame features. Note that the up-projection layer into sparse codes is sim- ilar to what Fisher Vectors [27] and VLAD [15] approaches do but the projection (i.e., clustering) is done discriminatively here. We also experimented with Fisher Vectors and VLAD but were not able to obtain competitive results using comparable codebook sizes. Hyperparameters: We considered values of {2048, 4096, 8192} for the number of units in the projection layer of the network and found that larger values lead to better results. We used 8192 for all datasets. We used a single hidden layer with 1024 units between the pooling layer and the ï¬
nal classiï¬ cation layer in all experiments. The network was trained using SGD with AdaGrad, a learning rate of 0.1, and a weight decay penalty of 0.0005. # 4.1.3 Long Short-Term Memory (LSTM) We take a similar approach to [26] to utilize LSTMs for video- level prediction. However, unlike that work, we do not have access to the raw video frames. This means that we can only train the LSTM and Softmax layers. We experimented with the number of stacked LSTM layers and the number of hidden units. We empirically found that 2 layers with 1024 units provided the highest performance on the validation set. Similarly to [26], we also employ linearly increasing per-frame weights going from 1/N to 1 for the last frame. During the training time, the LSTM was unrolled for 60 itera- tions. Therefore, the gradient horizon for LSTM was 60 seconds. We experimented with a larger number of unroll iterations, but that slowed down the training process considerably. In the end, the best model was the one trained for the largest number of steps (rather than the most real time). In order to transfer the learned model to ActivityNet, we used a fully-connected model which uses as inputs the concatenation of the LSTM layersâ outputs as computed at the last frame of the videos in each of these two benchmarks. Unlike traditional trans- fer learning methods, we do not ï¬ ne-tune the LSTM layers. This approach is more robust to overï¬ tting than traditional methods, which is crucial for obtaining competitive performance on Activ- ityNet due to its size. We did perform full ï¬ ne-tuning experiments on Sports-1M, which is large enough to ï¬ ne-tune the entire LSTM model after pre-training.
# 4.2 Video level representations Instead of training classiï¬ ers directly on frame-level features, we also explore extracting a task-independent ï¬ xed-length video-level feature vector from the frame-level features xv 1:Fv for each video v. There are several beneï¬ ts of extracting ï¬ xed-length video features: 1. Standard classiï¬ ers can apply: Since the dimensionality of the representations are ï¬ xed across videos, we may train standard classiï¬ ers like logistic, SVM, mixture of experts. 2. Compactness: We get a compact representation for the en- tire video, thereby reducing the training data size by a few orders of magnitude. 3. More suitable for domain adaptation: Since the video- level representations are unsupervised (extracted independently of the labels), these representations are far less specialized to the labels associated with the current dataset, and can gener- alize better to new tasks or video domains. Formally, a video-level feature Ï (xv 1:Fv ) is a ï¬ xed-length repre- sentation (at the video-level). We explore a simple aggregation technique for getting these video-level representations. We also experimented with Fisher Vectors (FV) [27] and VLAD [15] ap- proaches for task-independent video-level representations but were not able to achieve competitive results for FV or VLAD representa- tions of similar dimensionality. We leave it as future work to come up with compact FV or VLAD type representations that outperform the much simpler approach described below. # 4.2.1 First, second order and ordinal statistics
j â R1024, we ex- tract the mean µv â R1024 and the standard-deviation Ï v â R1024. Additionally, we also extract the top 5 ordinal statistics for each dimension. Formally, TopK (xv(j)1:Fv ) returns a K dimensional vector where the pth dimension contains the pth highest value of the feature-vectorâ s jth dimension over the entire video. We denote TopK (xv 1:Fv ) to be a KD dimensional vector obtained by concate- nating the ordinal statistics for each dimension. Thus, the resulting feature-vector Ï (xv 1:Fv ) for the video becomes: Ï (xv 1:Fv ) = µ(xv Ï (xv TopK (xv 1:Fv ) 1:Fv ) 1:Fv ) . (2)
# 4.2.2 Feature normalization Standardization of features has been proven to help with online learning algorithms[14, 37] as it makes the updates using Stochas- tic Gradient Descent (SGD) based algorithms (like Adagrad) more robust to learning rates, and speeds up convergence. Before training our one-vs-all classiï¬ ers on the video-level rep- resentation, we apply global normalization to the feature vectors Ï (xv 1:Fv ) (deï¬ ned in equation 2). Similar to how we processed the frame features, we substract the mean Ï (.) then use PCA to decor- relate and whiten the features. The normalized video features are now approximately multivariate gaussian with zero mean and iden- tity covariance. This makes the gradient steps across the various dimensions independent, and learning algorithm gets an unbiased view of each dimension (since the same learning rate is applied to each dimension). Finally, the resulting features are L2 normal- ized. We found that these normalization techniques make our mod- els train faster. # 4.3 Models from Video Features Given the video-level representations, we train independent bi- nary classiï¬ ers for each label using all the data. Exploiting the structure information between the various labels is left for future work. A key challenge is training these classiï¬ ers at the scale of this dataset. Even with a compact video-level representation for the 6M training videos, it is unfeasible to train batch optimization classiï¬ ers, like SVM. Instead, we use online learning algorithms, and use Adagrad to perform model updates on the weight vectors given a small mini-batch of examples (each example is associated with a binary ground-truth value). # 4.3.1 Logistic Regression Given D dimensional video-level features, the parameters Î of the logistic regression classiï¬ er are the entity speciï¬ c weights we. During scoring, given x â RD+1 to be the video-level feature of the test example, the probability of the entity e is given as p(e|x) = Ï (wT e x). The weights we are obtained by minimizing the total log-loss on the training data given as: w Allwell? + D0 L(yi.e, (we x:)), G3) i=l where Ï (.) is the standard logistic, Ï
(z) = 1/(1 + exp(â z)). # 4.3.2 Hinge Loss Since training batch SVMs on such a large dataset is impossible, we use the online SVM approach. As in the conventional SVM framework, we use ±1 to represent negative and positive labels respectively. Given binary ground-truth labels y (0 or 1), and pre- dicted labels Ë y (positive or negative scalars), the hinge loss is: L(y, Ë y) = max(0, b â (2y â 1)Ë y), (4) where b is the hinge-loss parameter which can be ï¬
ne-tuned further or set to 1.0. Due to the presence of the max function, there is a discontinuity in the ï¬ rst derivative. This results in the subgradient being used in the updates, slowing convergence signiï¬ cantly. # 4.3.3 Mixture of Experts (MoE) Mixture of experts (MoE) was first proposed by Jacobs and Jor- dan [18]. The binary classifier for an entity e is composed of a set of hidden states, or experts, He. A softmax is typically used to model the probability of choosing each expert. Given an ex- pert, we can use a sigmoid to model the existence of the entity. Thus, the final probability for entity eâ s existence is p(e|x) = hen. p(h|x)o(uz_x), where p(h|x) is a softmax over |He| + 1 The last, exp(w? x) I+Dnrene exP(wry%) states. In other words, p(h|x) = (|He| + 1)th, state is a dummy state that always results in the non-existence of the entity. Denote py|x = p(y = 1|x), ph|x = p(h|x) and ph = p(y = 1|x, h). Given a set of training examples (xi, gi)i=1...N for a binary classiï¬ er, where xi is the feature vec- tor and gi â [0, 1] is the ground-truth, let L(pi, gi) be the log-loss between the predicted probability and the ground-truth: L(p, 9) = â g log p â (1 â g) log(1 â p). (5) We could directly write the derivative of £ [Pulx: g) with respect to the softmax weight wy, and the logistic weight u), as d£ [Puig] Pile (Pylnx _ Pylx) (Py|x _ 9) » ©) Own, Pylx(1 â Pylx) AL [Py\x; 9] 4c PilxPulnoe(L = Puine) (Puix = 9) a) oun Pylx (1 â Pylx)
We use Adagrad with a learning rate of 1.0 and batch size of 32 to learn the weights. Since we are training independent classiï¬ ers for each label, the work is distributed across multiple machines. For MoE models, we experimented with varying number of mix- tures (1, 2, 4), and found that performance increases by 0.5%-1% on all metrics as we go from 1 to 2, and then to 4 mixtures, but the number of model parameters correspondingly increases by 2 or 4 times. We chose 2 mixtures as a good compromise and report numbers with the 2-mixture MoE model for all datasets.
# 5. EXPERIMENTS In this section, we ï¬ rst provide benchmark baseline results for the above multi-label classiï¬ cation approaches on the YouTube-8M dataset. We then evaluate the usefulness of video representations learned on this dataset for other tasks, such as Sports-1M sports classiï¬ cation and AcitvityNet activity classiï¬ cation. # 5.1 Evaluation Metrics Mean Average Precision (mAP): For each entity, we ï¬ rst round the annotation scores in buckets of 10â 4 and sort all the non-zero annotations according to the model score. At a given threshold Ï , the precision P (Ï ) and recall R(Ï ) are given by I(yt â ¥ Ï )gt I(yt â ¥ Ï ) I(yt â ¥ Ï )gt tâ T gt
Modeling Approach Input Features Frame-level, {xv Logistic + Average (4.1.1) Frame-level, {xv Deep Bag of Frames (4.1.2) Frame-level, {xv LSTM (4.1.3) Video-level, µ Hinge loss (4.3) Video-level, µ Logistic Regression (4.3) Video-level, µ Mixture-of-2-Experts (4.3) Video-level, [µ; Ï ; Top5] Mixture-of-2-Experts (4.3) 1:Fv } 1:Fv } 1:Fv } mAP Hit@1 50.8 11.0 62.7 26.9 64.5 26.6 56.3 17.0 60.5 28.1 62.3 29.6 30.0 63.3 PERR 42.2 55.1 57.3 47.9 53.0 54.9 55.8 Table 3: Results of the various benchmark baselines on the YouTube- 8M dataset.
We ï¬ nd that binary classiï¬ ers on simple video-level rep- resentations perform substantially better than frame-level approaches. Deep learning methods such as DBoF and LSTMs do not provide a substantial boost over traditional dense feature aggregation methods because the underlying frame-level features are already very strong. Approach Deep Bag of Frames (DBoF) (4.1.2) LSTM (4.1.3) Mixture-of-2-Experts ([µ; Ï
; Top5]) (4.3) Hit@1 68.6 69.1 70.1 PERR Hit@5 83.5 29.0 84.7 30.5 84.8 29.1 Table 4: Results of the three best approaches on the human rated test set of the YouTube-8M dataset. A comparison with the results on the validation set (Table 3) shows that the relative strengths of the different approaches are largely preserved on both sets.
where $\mathbb{I}(\cdot)$ is the indicator function. The average precision, approximating the area under the precision-recall curve, can then be computed as

$$\mathrm{AP} = \sum_{j=1}^{10000} P(\tau_j)\,\big[R(\tau_j) - R(\tau_{j+1})\big], \qquad (10)$$

where $\tau_j = \frac{j}{10000}$. The mean average precision is computed as the unweighted mean of all the per-class average precisions.

Hit@k: This is the fraction of test samples that contain at least one of the ground truth labels in the top $k$ predictions. If $\mathrm{rank}_{v,e}$ is the rank of entity $e$ on video $v$ (with the best scoring entity having rank 1), and $G_v$ is the set of ground-truth entities for $v$, then Hit@k can be written as:

$$\frac{1}{|V|} \sum_{v \in V} \bigvee_{e \in G_v} \mathbb{I}(\mathrm{rank}_{v,e} \le k), \qquad (11)$$
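A minimal numpy sketch of these two metrics as defined above (Eqs. 8-11), using hypothetical score and label arrays:

```python
import numpy as np

# Sketch of the evaluation metrics above: per-class average precision via the
# thresholded precision/recall of Eqs. (8)-(10), and Hit@k of Eq. (11).

def average_precision(scores, labels, buckets=10000):
    """scores, labels: (num_examples,) arrays for one entity; labels in {0, 1}."""
    scores = np.round(scores * buckets) / buckets
    total_pos = labels.sum()
    taus = np.arange(1, buckets + 1) / buckets
    ap = 0.0
    prev_recall = 0.0
    for tau in taus[::-1]:                        # sweep thresholds high -> low
        selected = scores >= tau
        if selected.sum() == 0:
            continue
        precision = labels[selected].sum() / selected.sum()
        recall = labels[selected].sum() / max(total_pos, 1)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

def hit_at_k(score_matrix, ground_truth, k=1):
    """score_matrix: (num_videos, num_entities); ground_truth: list of label sets."""
    topk = np.argsort(-score_matrix, axis=1)[:, :k]
    hits = [len(set(row) & gt) > 0 for row, gt in zip(topk.tolist(), ground_truth)]
    return np.mean(hits)
```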