Dataset columns: doi (string, 10 chars), chunk-id (int64, 0–936), chunk (string, 401–2.02k chars), id (string, 12–14 chars), title (string, 8–162 chars), summary (string, 228–1.92k chars), source (string, 31 chars), authors (string, 7–6.97k chars), categories (string, 5–107 chars), comment (string, 4–398 chars), journal_ref (string, 8–194 chars), primary_category (string, 5–17 chars), published (string, 8 chars), updated (string, 8 chars), references (list).
1609.09106
9
K^j = g(z^j), ∀ j = 1, ..., D    (1)

We note that this matrix K^j can be broken down into N_in slices of a smaller matrix with dimensions f_size × N_out·f_size; each slice of the kernel is denoted K^j_i ∈ ℝ^{f_size × N_out·f_size}. Therefore, in our approach, the hypernetwork is a two-layer linear network. The first layer of the hypernetwork takes the input vector z^j and linearly projects it into the N_in inputs, with N_in different matrices W_i ∈ ℝ^{d × N_z} and bias vectors B_i ∈ ℝ^d, where d is the size of the hidden layer in the hypernetwork. For our purpose, we fix d to be equal to N_z, although they can be different. The final layer of the hypernetwork is a linear operation which takes an input vector a_i of size d and linearly projects it into K_i using a common tensor W_out ∈ ℝ^{f_size × N_out·f_size × d} and bias matrix B_out ∈ ℝ^{f_size × N_out·f_size}. The final kernel K^j will be a concatenation of every K^j_i. Thus g(z^j) can be written as follows:
1609.09106#9
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
10
• A large-scale video annotation and representation learning benchmark, reflecting the main themes of a video. • A significant jump in the number and diversity of annotation classes—4800 Knowledge Graph entities vs. less than 500 categories for all other datasets. • A substantial increase in the number of labeled videos—over 8 million videos, more than 500,000 hours of video. • Availability of pre-computed state-of-the-art features for 1.9 billion video frames. We hope the pre-computed features will remove computational barriers, level the playing field, and enable researchers to explore new technologies in the video domain at an unprecedented scale. # 3. YOUTUBE-8M DATASET
1609.08675#10
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
10
a^j_i = W_i z^j + B_i,    ∀ i = 1, ..., N_in, ∀ j = 1, ..., D
K^j_i = ⟨W_out, a^j_i⟩ + B_out,    ∀ i = 1, ..., N_in, ∀ j = 1, ..., D    (2)
K^j = ( K^j_1  K^j_2  ...  K^j_i  ...  K^j_{N_in} ),    ∀ j = 1, ..., D

In our formulation, the learnable parameters are W_i, B_i, W_out, B_out, together with all the z^j's. During inference, the model simply takes the layer embeddings z^j learned during training to reproduce the kernel weights for layer j in the main convolutional network. As a side effect, the number of learnable parameters in the hypernetwork is much lower than in the main convolutional network. In fact, the total number of learnable parameters in the hypernetwork is N_z × D + d × (N_z + 1) × N_in + f_size × N_out × f_size × (d + 1), compared to the D × N_in × f_size × N_out × f_size parameters for the kernels of the main convolutional network.
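As a concrete illustration of Equation 2, here is a minimal NumPy sketch of the two-layer linear hypernetwork g(·). The sizes, variable names, and random initializations are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

# Illustrative sizes (hypothetical, not the paper's settings)
N_z = 4                       # size of a layer embedding z^j
d = N_z                       # hypernetwork hidden size (the text fixes d = N_z)
N_in, N_out, f_size = 16, 16, 3

# Hypernetwork parameters, shared across all D layers
W = [np.random.randn(d, N_z) for _ in range(N_in)]       # W_i
B = [np.random.randn(d) for _ in range(N_in)]             # B_i
W_out = np.random.randn(f_size, N_out * f_size, d)        # W_out
B_out = np.random.randn(f_size, N_out * f_size)           # B_out

def g(z_j):
    """Two-layer linear hypernetwork of Eq. 2: layer embedding z^j -> kernel K^j."""
    slices = []
    for i in range(N_in):
        a_i = W[i] @ z_j + B[i]                                    # a^j_i in R^d
        K_i = np.tensordot(W_out, a_i, axes=([2], [0])) + B_out    # <W_out, a^j_i> + B_out
        slices.append(K_i)
    return np.concatenate(slices, axis=0)   # K^j with shape (N_in*f_size, N_out*f_size)

K_j = g(np.random.randn(N_z))
assert K_j.shape == (N_in * f_size, N_out * f_size)
```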
1609.09106#10
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
11
# 3. YOUTUBE-8M DATASET YouTube-8M is a benchmark dataset for video understanding, where the main task is to determine the key topical themes of a video. We start with YouTube videos since they are a good (albeit noisy) source of knowledge for diverse categories including various sports, activities, animals, foods, products, tourist attractions, games, and many more. We use the YouTube video annotation system [2] to obtain topic annotations for a video, and to retrieve videos for a given topic. The annotations are provided in the form of Knowledge Graph entities [3] (formerly, Freebase topics [1]). They are associated with each video based on the video’s metadata, context, and content signals [2].
1609.08675#11
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
11
Our approach of constructing g(·) is similar to the hierarchically semiseparable matrix approach proposed by Xia et al. (2010). Note that even though it seems redundant to have a two-layered linear hypernetwork, as that is equivalent to a one-layered hypernetwork, the fact that W_out and B_out are shared makes our two-layered hypernetwork more compact than a one-layered hypernetwork. More concretely, a one-layered hypernetwork would have N_z × N_in × f_size × N_out × f_size learnable parameters, which is usually much bigger than what a two-layered hypernetwork has. ¹Tensor dot product between W ∈ ℝ^{m×n×d} and a ∈ ℝ^d. Result ⟨W, a⟩ ∈ ℝ^{m×n}.
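To make the comparison tangible, the following snippet simply evaluates the three parameter-count formulas quoted above for a set of illustrative (hypothetical) sizes:

```python
# Plugging illustrative sizes (hypothetical) into the parameter counts quoted above.
N_z, d = 4, 4
N_in, N_out, f_size = 16, 16, 3
D = 10   # number of layers whose kernels are generated

two_layer_hyper = N_z * D + d * (N_z + 1) * N_in + f_size * N_out * f_size * (d + 1)
one_layer_hyper = N_z * N_in * f_size * N_out * f_size
main_kernels = D * N_in * f_size * N_out * f_size

print(two_layer_hyper, one_layer_hyper, main_kernels)   # 1080 9216 23040
```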
1609.09106#11
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
12
We use Knowledge Graph entities to succinctly describe the main themes of a video. For example, a video of biking on dirt roads and cliffs would have a central topic/theme of Mountain Biking, not Dirt, Road, Person, Sky, and so on. Therefore, the aim of the dataset is not only to understand what is present in each frame of the video, but also to identify the few key topics that best describe what the video is about. Note that this is different from typical event or scene recognition tasks, where each item belongs to a single event or scene [38, 28]. It is also different from most object recognition tasks, where the goal is to label everything visible in an image. This would produce thousands of labels on each video but without answering what the video is really about. The goal of this benchmark is to understand what is in the video and to summarize that into a few key topics. In the following sub-sections, we describe our vocabulary and video selection scheme, followed by a brief summary of dataset statistics.
1609.08675#12
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
12
The above formulation assumes that the network architecture consists of kernels with the same dimensions. In practice, deep convolutional network architectures consist of kernels of varying dimensions. Typically, in many designs, the kernel dimensions are integer multiples of a basic size; the residual network family of architectures (He et al., 2016a) that we will be experimenting with later is an example of such a design. In our experiments, although the kernels of a residual network do not share the same dimensions, the N_in and N_out dimensions for each kernel are integer multiples of 16. To modify our approach to work with this architecture, we have our hypernetwork generate kernels for this basic size of 16, and if we require a larger kernel for a certain layer, we will concatenate multiple basic kernels together to form the larger kernel.

K_{32×64} = ( K_1  K_2  K_3  K_4
              K_5  K_6  K_7  K_8 )    (3)

For example, if we need to generate a kernel with N_in = 32 and N_out = 64, we will tile eight basic kernels together. Each basic kernel is generated by a unique z embedding, hence the larger kernel will be expressed with eight
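A minimal NumPy sketch of this tiling follows; the sizes are illustrative, and the basic_kernel helper is a hypothetical stand-in for the hypernetwork output g(z).

```python
import numpy as np

# Tiling basic kernels into a larger one, as in Eq. 3. Sizes are illustrative.
f_size, basic = 3, 16

def basic_kernel(seed):
    # Stand-in for a hypernetwork output g(z): one basic 16x16-filter kernel,
    # flattened to shape (basic * f_size, basic * f_size).
    rng = np.random.default_rng(seed)
    return rng.standard_normal((basic * f_size, basic * f_size))

# N_in = 32 -> 2 tiles along the rows, N_out = 64 -> 4 tiles along the columns,
# so eight basic kernels K_1..K_8, each generated from its own embedding z.
K_32x64 = np.block([[basic_kernel(1), basic_kernel(2), basic_kernel(3), basic_kernel(4)],
                    [basic_kernel(5), basic_kernel(6), basic_kernel(7), basic_kernel(8)]])
assert K_32x64.shape == (32 * f_size, 64 * f_size)
```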
1609.09106#12
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
13
[Word-cloud figure of frequent vocabulary entities; legible labels include: Baseball, Basketball, Bicycle, Bollywood, Boxing, Call of Duty, Car, Cartoon, Cat, Cheerleading, Choir, Christmas, Circus, Climbing, Combat, Comedy, Comics, Computer, Concert, Cooking, Cooking show, Cosmetics, Cycling, Dance, Dashcam, Disc jockey, Drawing, Drums, Fashion, Fishing, Food, Football, Games, Gardening, Grand Theft Auto V, Guitar, Gymnastics, Hair, Handball, Handheld game console, High school, Highlight film, Hockey, Home improvement, Horse, Horse racing, Hotel, House, Human swimming, Hunting, Ice skating, iPhone, Knife, Landing, Laptop, League of Legends, LEGO, Medicine, Minecraft, Mobile phone, Model aircraft, Motorcycle, Motorsport.]
1609.08675#13
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
13
and N_out = 64, we will tile eight basic kernels together. Each basic kernel is generated by a unique z embedding, hence the larger kernel will be expressed with eight embeddings. Therefore, kernels that are larger in size will require a proportionally larger number of embedding vectors. For visualizations of concatenated kernels, please see Appendix A.2.1. Figure 2 shows the similarity between kernels learned by a ConvNet to classify MNIST digits and those learned by a hypernetwork generating weights for a ConvNet. Figure 2: Kernels learned by a ConvNet to classify MNIST digits (left). Kernels learned by a hypernetwork generating weights for the ConvNet (right). 3.2 DYNAMIC HYPERNETWORK: ADAPTIVE WEIGHT GENERATION FOR RECURRENT NETWORKS In the previous section, we outlined a procedure for using a hypernetwork to generate the weights for a deep convolutional network. In this section, we will use a recurrent network to dynamically generate weights for another recurrent network, such that the weights can vary across many timesteps. In this context, hypernetworks are called dynamic hypernetworks, and can be seen as a form of relaxed weight-sharing,
1609.09106#13
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
14
[Word-cloud figure (continued); legible labels include: LEGO, Medicine, Minecraft, Mobile phone, Model aircraft, Motorcycle, Motorsport, Music video, Musical ensemble, Naruto, Nature, Orchestra, Outdoor recreation, Painting, Personal computer, Photography, Piano, Pokémon, Racing, Radio-controlled car, Radio-controlled model, Rallying, Recipe, Roller skating, Rugby football, RuneScape, Running, Samsung Galaxy, School, Shoe, Simulation video game, Sitcom, Skateboarding, Skiing, Smartphone, Snowboarding, Sonic the Hedgehog, Sports game, Star Wars, Strategy video game, Surfing, Tablet computer, Talent show, Television, Television advertisement, Tennis, The Sims, Theatre, The Walt Disney Company, Touhou Project, Toy, Train, Trucks, Vehicle, Video game, Video game console, Warcraft, Water, Weapon, Weather, Wedding, Weight training, Winter sport, Woodturning, Aircraft, Album, American football, Amusement park, Advertising.]
1609.08675#14
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
14
can vary across many timesteps. In this context, hypernetworks are called dynamic hypernetworks, and can be seen as a form of relaxed weight-sharing, a compromise between the hard weight-sharing of traditional recurrent networks and the no weight-sharing of convolutional networks. This relaxed weight-sharing approach allows us to control the trade-off between the number of model parameters and model expressiveness. Our dynamic hypernetworks can be used to generate weights for RNNs and LSTMs. When a hypernetwork is used to generate the weights for an RNN, it is called a HyperRNN. At every time step t, a HyperRNN takes as input the concatenation of the input x_t and the hidden state of the main RNN h_{t−1}; it then generates as output the vector ĥ_t. This vector is then used to generate the weights for the main RNN at the same timestep. Both the HyperRNN and the main RNN are trained jointly with backpropagation and gradient descent. In the following, we will give a more formal description of the model. The standard formulation of a Basic RNN is given by:

h_t = φ(W_h h_{t−1} + W_x x_t + b)    (4)
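For reference, a minimal NumPy sketch of the Basic RNN step in Equation 4 (names and sizes are illustrative):

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b, phi=np.tanh):
    """Basic RNN update of Eq. 4: h_t = phi(W_h h_{t-1} + W_x x_t + b)."""
    return phi(W_h @ h_prev + W_x @ x_t + b)

# Example with illustrative (hypothetical) sizes
N_h, N_x = 8, 4
h = rnn_step(np.zeros(N_h), np.random.randn(N_x),
             np.random.randn(N_h, N_h), np.random.randn(N_h, N_x), np.zeros(N_h))
assert h.shape == (N_h,)
```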
1609.09106#14
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.09106
16
Figure 3: An overview of HyperRNNs. Black connections and parameters are associated with basic RNNs. Orange connections and parameters are introduced in this work and associated with HyperRNNs. Dotted arrows are for parameter generation.

In HyperRNN, we allow W_h and W_x to float over time by using a smaller hypernetwork to generate these parameters of the main RNN at each step (see Figure 3). More concretely, the parameters W_h, W_x, b of the main RNN are different at different time steps, so that h_t can now be computed as:

h_t = φ(W_h(z_h) h_{t−1} + W_x(z_x) x_t + b(z_b)), where
W_h(z_h) = ⟨W_{hz}, z_h⟩
W_x(z_x) = ⟨W_{xz}, z_x⟩    (5)
b(z_b) = W_{bz} z_b + b_0

where W_{hz} ∈ ℝ^{N_h×N_h×N_z}, W_{xz} ∈ ℝ^{N_h×N_x×N_z}, W_{bz} ∈ ℝ^{N_h×N_z}, b_0 ∈ ℝ^{N_h}, and z_h, z_x, z_b ∈ ℝ^{N_z}. We use a recurrent hypernetwork to compute z_h, z_x, and z_b as a function of x_t and h_{t−1}:
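A minimal NumPy sketch of the weight generation in Equation 5, using illustrative (hypothetical) sizes; the tensor dot product ⟨·,·⟩ is realized with np.tensordot:

```python
import numpy as np

# Illustrative (hypothetical) sizes
N_h, N_x, N_z = 8, 4, 3

# Static tensors that turn embeddings into the main RNN's weights (Eq. 5)
W_hz = np.random.randn(N_h, N_h, N_z)
W_xz = np.random.randn(N_h, N_x, N_z)
W_bz = np.random.randn(N_h, N_z)
b_0 = np.random.randn(N_h)

def main_step(h_prev, x_t, z_h, z_x, z_b, phi=np.tanh):
    """Main RNN step of Eq. 5, with weights regenerated from the embeddings each timestep."""
    W_h = np.tensordot(W_hz, z_h, axes=([2], [0]))   # <W_hz, z_h> : (N_h, N_h)
    W_x = np.tensordot(W_xz, z_x, axes=([2], [0]))   # <W_xz, z_x> : (N_h, N_x)
    b = W_bz @ z_b + b_0
    return phi(W_h @ h_prev + W_x @ x_t + b)

h = main_step(np.zeros(N_h), np.random.randn(N_x),
              np.random.randn(N_z), np.random.randn(N_z), np.random.randn(N_z))
assert h.shape == (N_h,)
```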
1609.09106#16
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
17
[Table 1, flattened by extraction with row/column alignment lost: the most frequent entities for each top-level category — Arts & Entertainment, Autos & Vehicles, Beauty & Fitness, Books & Literature, Business & Industrial, Computers & Electronics, Finance, Food & Drink, Games, Health, Hobbies & Leisure, Home & Garden, Internet & Telecom, Jobs & Education, Law & Government, News, People & Society, Pets & Animals, Real Estate, Reference, Science, Shopping, Sports, Travel, plus the full vocabulary. Example entities include Concert, Vehicle, Fashion, Book, Personal computer, Money, Food, Video game, Medicine, Fishing, Gardening, Mobile phone, School, Weather, Prayer, Animal, House, Nature, Toy, Motorsport, Amusement park, Animation, Car, Hair, Harry Potter, Bank, Cooking, Minecraft, Music video, Motorcycle, Cosmetics, The Bible, Recipe, Smartphone, Teacher, Dog, Cycling, Winter sport, Beach, Airport, Dance, Bicycle, Weight training, Writing, PlayStation 3, Euro, Cake, Strategy video game.]
1609.08675#17
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
17
x̂_t = ( h_{t−1} ; x_t )
ĥ_t = φ(W_ĥ ĥ_{t−1} + W_x̂ x̂_t + b̂)
z_h = W_{ĥh} ĥ_{t−1} + b_{ĥh}    (6)
z_x = W_{ĥx} ĥ_{t−1} + b_{ĥx}
z_b = W_{ĥb} ĥ_{t−1}

where W_ĥ ∈ ℝ^{N_ĥ×N_ĥ}, W_x̂ ∈ ℝ^{N_ĥ×(N_h+N_x)}, b̂ ∈ ℝ^{N_ĥ}, and W_{ĥh}, W_{ĥx}, W_{ĥb} ∈ ℝ^{N_z×N_ĥ} and b_{ĥh}, b_{ĥx} ∈ ℝ^{N_z}. This HyperRNN cell has N_ĥ hidden units. Typically N_ĥ is much smaller than N_h.

As the embeddings z_h, z_x, and z_b are of dimension N_z, which is typically smaller than the hidden state size N_ĥ of the HyperRNN cell, a linear network is used to project the output of the HyperRNN cell into the embeddings in Equation 6. After the embeddings are computed, they will be used to generate the full weight matrix of the main RNN.
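The following is a minimal NumPy sketch of the HyperRNN cell in Equation 6; sizes and parameter names are illustrative assumptions, and it follows the reconstruction above in computing the embeddings from the previous hyper state:

```python
import numpy as np

# Illustrative (hypothetical) sizes; the hyper cell is much smaller than the main RNN
N_h, N_x, N_z, N_hat = 8, 4, 3, 5

W_hhat = np.random.randn(N_hat, N_hat)        # W_h-hat
W_xhat = np.random.randn(N_hat, N_h + N_x)    # W_x-hat
b_hat = np.random.randn(N_hat)
W_hh, b_hh = np.random.randn(N_z, N_hat), np.random.randn(N_z)
W_hx, b_hx = np.random.randn(N_z, N_hat), np.random.randn(N_z)
W_hb = np.random.randn(N_z, N_hat)

def hyper_step(h_prev, x_t, h_hat_prev, phi=np.tanh):
    """HyperRNN cell of Eq. 6: emits the new hyper state and the three embeddings."""
    x_hat = np.concatenate([h_prev, x_t])                      # x-hat_t = (h_{t-1}; x_t)
    h_hat = phi(W_hhat @ h_hat_prev + W_xhat @ x_hat + b_hat)
    z_h = W_hh @ h_hat_prev + b_hh
    z_x = W_hx @ h_hat_prev + b_hx
    z_b = W_hb @ h_hat_prev
    return h_hat, z_h, z_x, z_b

h_hat, z_h, z_x, z_b = hyper_step(np.zeros(N_h), np.random.randn(N_x), np.zeros(N_hat))
assert h_hat.shape == (N_hat,) and z_h.shape == (N_z,)
```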
1609.09106#17
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
18
[Table 1 (continued), flattened by extraction with column alignment lost: the 4th–7th most frequent entities for each top-level category. 4th entities include Dance, Bicycle, Weight training, Writing, PlayStation 3, Euro, Cake, Strategy video game; 5th entities include Guitar, Aircraft, Hairstyle, Magazine, Tablet computer, Chocolate, Sports game; 6th entities include Disc jockey, Truck, Xbox 360, Credit card, Egg, Call of Duty, My Little Pony, Gymnastics; 7th entities include Trailer, Boat, Mascara, E-book, Microsoft Windows, Grand Theft Auto V, Swimming pool, Wrestling, Resort, Football.]
1609.08675#18
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
18
The above is a general formulation of a linear dynamic hypernetwork applied to RNNs. However, we found that in practice Equation 5 is often not practical, because the memory usage becomes too large for real problems. The amount of memory required by the system described in Equation 5 is N_z times that of a Basic RNN, which limits the number of hidden units we can use in many practical applications. We can modify the dynamic hypernetwork system described in Equation 5 so that it is much more scalable and memory efficient. Our approach borrows from the static hypernetwork section: we will use an intermediate hidden vector d(z) ∈ ℝ^{N_h} to parametrize a weight matrix, where d(z) is a linear projection of z. To dynamically modify a weight matrix W, we will allow each row of this weight matrix to be scaled linearly by an element in the vector d. We refer to d as a weight scaling vector. Below is the modification to W(z), where W_i denotes the i-th row of W:

W(z) = W(d(z)) = ( d_0(z) W_0
                   d_1(z) W_1
                   ⋮                 (7)
                   d_{N_h}(z) W_{N_h} )
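A small NumPy sketch of the row scaling in Equation 7, with illustrative sizes; it also checks that scaling the rows of W is the same as an element-wise multiply applied to W x:

```python
import numpy as np

# Row-wise scaling of Eq. 7 and its element-wise equivalence. Sizes are illustrative.
N_h, N_x, N_z = 8, 4, 3
W = np.random.randn(N_h, N_x)
W_dz = np.random.randn(N_h, N_z)   # projects z to the weight scaling vector d(z)

z = np.random.randn(N_z)
d = W_dz @ z                        # d(z), one scale per row of W
W_scaled = d[:, None] * W           # row i of W scaled by d_i(z)

x = np.random.randn(N_x)
assert np.allclose(W_scaled @ x, d * (W @ x))   # row scaling == element-wise multiply
```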
1609.09106#18
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
19
# Table 1: Most frequent entities for each of the top-level categories. # 3.1 Vocabulary Construction We followed two main tenets when designing the vocabulary for the dataset; namely 1) every label in the dataset should be distinguishable using visual information alone, and 2) each label should have a sufficient number of videos for training models and for computing reliable metrics on the test set. For the former, we used a combination of manually curated topics and human ratings to prune the vocabulary into a visual set. For the latter, we considered only entities having at least 200 videos in the dataset. The Knowledge Graph contains millions of topics. Each topic has one or more types, which are curated with high precision. For example, there is an exhaustive list of animals with type animal and an exhaustive list of foods with type food. To start building our initial vocabulary, we manually selected a whitelist of 25 entity types that we considered visual (e.g. sport, tourist_attraction, inventions), and also blacklisted types that we thought are non-visual (e.g. music artists, music compositions, album, software). We then obtained all entities that have at least one whitelisted type and no blacklisted
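As a rough illustration of the type-based filtering just described, here is a minimal Python sketch; the type names, data structures, and helper are hypothetical, and the real pipeline operates on the Knowledge Graph:

```python
# Keep entities with at least one whitelisted type, no blacklisted type,
# and at least 200 videos (hypothetical stand-ins for the real type names).
whitelist = {"sport", "tourist_attraction", "invention"}
blacklist = {"music_artist", "music_composition", "album", "software"}

def keep_entity(entity_types, num_videos, min_videos=200):
    types = set(entity_types)
    return bool(types & whitelist) and not (types & blacklist) and num_videos >= min_videos

assert keep_entity({"sport"}, 350)
assert not keep_entity({"sport", "album"}, 350)
assert not keep_entity({"sport"}, 50)
```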
1609.08675#19
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
19
W(z) = W(d(z)) = ( d_0(z) W_0
                   d_1(z) W_1
                   ⋮                 (7)
                   d_{N_h}(z) W_{N_h} )

While we sacrifice the ability to construct an entire weight matrix from a linear combination of N_z matrices of the same size, we are able to linearly scale the rows of a single matrix with N_z degrees of freedom. We find this to be a good trade-off, as this formulation of converting W(z) into W(d(z)) decreases the amount of memory required by the dynamic hypernetwork. Rather than requiring N_z times the memory of a Basic RNN, we will only be using memory on the order of N_z times the number of hidden units, which is an acceptable amount of extra memory usage that is often available in many applications. In addition, the row-level operation in Equation 7 can be shown to be equivalent to an element-wise multiplication operator, and hence is computationally much more efficient in practice. Below is the more memory-efficient version of the setup of Equation 5:

h_t = φ(d_h(z_h) ⊙ (W_h h_{t−1}) + d_x(z_x) ⊙ (W_x x_t) + b(z_b)), where
d_h(z_h) = W_{hz} z_h
d_x(z_x) = W_{xz} z_x    (8)
b(z_b) = W_{bz} z_b + b_0
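A minimal NumPy sketch of the memory-efficient step in Equation 8, with illustrative (hypothetical) sizes:

```python
import numpy as np

# Memory-efficient HyperRNN step of Eq. 8. Sizes are illustrative.
N_h, N_x, N_z = 8, 4, 3
W_h = np.random.randn(N_h, N_h)
W_x = np.random.randn(N_h, N_x)
W_hz = np.random.randn(N_h, N_z)
W_xz = np.random.randn(N_h, N_z)
W_bz = np.random.randn(N_h, N_z)
b_0 = np.random.randn(N_h)

def efficient_step(h_prev, x_t, z_h, z_x, z_b, phi=np.tanh):
    """h_t = phi(d_h(z_h) * (W_h h_{t-1}) + d_x(z_x) * (W_x x_t) + b(z_b))."""
    d_h = W_hz @ z_h                 # weight scaling vector for the recurrent matrix
    d_x = W_xz @ z_x                 # weight scaling vector for the input matrix
    b = W_bz @ z_b + b_0
    return phi(d_h * (W_h @ h_prev) + d_x * (W_x @ x_t) + b)

h = efficient_step(np.zeros(N_h), np.random.randn(N_x),
                   np.random.randn(N_z), np.random.randn(N_z), np.random.randn(N_z))
assert h.shape == (N_h,)
```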
1609.09106#19
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
20
types, which resulted in an initial vocabulary of ~50,000 entities. Following this, we used human raters in order to manually prune this set into a smaller set of entities that are considered visual with high confidence, and are also recognizable without very deep domain expertise. Raters were provided with instructions and examples. Each entity was rated by 3 raters and the ratings were averaged. Figure 4a shows the main rating question. The process resulted in a total of ~10,000 entities that are considered visually recognizable and are not too fine-grained (i.e. can be recognized by non-domain experts after studying some examples). These entities were further pruned: we only kept entities that have more than 200 popular videos, as explained in the next section. The final set of entities in the dataset is fairly balanced in terms of the specificity of the topic they describe, and spans both coarse-grained and fine-grained entities, as shown in Figure 4b. # 3.2 Collecting Videos Having established the initial target vocabulary, we followed these
1609.08675#20
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
20
This formulation of the HyperRNN has some similarities to Recurrent Batch Normalization (Cooijmans et al., 2016) and Layer Normalization (Ba et al., 2016). The central idea of the normalization techniques is to calculate the first two statistical moments of the inputs to the activation function, and to linearly scale the inputs to have zero mean and unit variance. An additional set of fixed parameters are learned to unscale the activations if required. This element-wise operation also has similarities to the Multiplicative RNN (Sutskever et al., 2011) and Multiplicative Integration RNN (Wu et al., 2016), where it was demonstrated that the multiplication operation encouraged better gradient flow. Since the HyperRNN cell can indirectly modify the rows of each weight matrix and also the bias of the main RNN, it is implicitly also performing a linear scaling of the inputs to the activation function. The difference here is that the linear scaling parameters can be different for each timestep and also for each input sample. It will be interesting to compare the scaling policy that the HyperRNN cell comes up with to the hand-engineered statistical-moments-based scaling approaches. In addition, we note that the existing normalization approaches can work together with the HyperRNN approach, where the HyperRNN cell will be tasked with discovering a better dynamical scaling policy to complement normalization. We will also explore this combination in our experiments.
1609.09106#20
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
21
# 3.2 Collecting Videos Having established the initial target vocabulary, we followed these [Figure 4a screenshot: a rating task showing an entity's name, URL, and description — e.g. Thunderstorm: "A thunderstorm, also known as an electrical storm, a lightning storm, or a thundershower, is a type of storm characterized by the presence of lightning and its acoustic effect on the Earth's atmosphere known as thunder. The meteorologically assigned cloud type associated with the thunderstorm is the cumulonimbus. Thunderstorms are usually accompanied by strong winds, heavy rain and sometimes snow, sleet, hail, or no precipitation at all." — followed by the question "How difficult is it to identify this entity in images or videos (without audio, titles, comments, etc)?" with answer options: 1. Any layperson could; 2. Any layperson after studying examples, wikipedia, etc could; 3. Experts in some field can; 4. Not possible without non-visual knowledge; 5. Non-visual.]
1609.08675#21
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
21
The Long Short-Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) is usually better than the Basic RNN at storing and retrieving information over longer time steps. In our experiments, we will focus on this LSTM version of the HyperRNN, called the HyperLSTM. The details of the HyperLSTM architecture are described in Appendix A.2.2, along with specific implementation details in Appendix A.2.3. We want to know whether the HyperLSTM cell can learn a weight adjustment policy that can rival statistical moments-based normalization methods, hence Layer Normalization will be one of our baseline methods. We will therefore conduct experiments on two versions of HyperLSTM, one with and one without the application of Layer Normalization.

# 4 EXPERIMENTS

In the following experiments, we will benchmark the performance of static hypernetworks on image recognition with MNIST and CIFAR-10, and the performance of dynamic hypernetworks on language modelling with the Penn Treebank and Hutter Prize Wikipedia (enwik8) datasets and on handwriting generation.

4.1 USING STATIC HYPERNETWORKS TO GENERATE FILTERS FOR CONVOLUTIONAL NETWORKS AND MNIST
1609.09106#21
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
22
[Figure 4, panel (a): screenshot of the rater question, showing an example entity's name, URL, and description ("Thunderstorm", a type of storm characterized by lightning and its acoustic effect known as thunder) above the question "How difficult is it to identify this entity in images or videos (without audio, titles, comments, etc)?" with a 1-5 answer scale. Panel (b): bar chart of the number of entities (0-3000) per specificity level: coarse-grained, medium-grained, fine-grained.]

(a) Screenshot of the question displayed to human raters. (b) Distribution of vocabulary topics in terms of specificity.

Figure 4: Rater guidelines to assess how specific and visually recognizable each entity is, on a discrete scale of 1 to 5, where 1 is most visual and easily recognizable by a layperson. Each entity was rated by 3 raters. We kept only entities with a maximum average score of 2.5, and categorized them by specificity into coarse-grained, medium-grained, and fine-grained entities, using equally sized score range buckets.

steps to obtain the videos:
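The filtering and bucketing rule in the Figure 4 caption above can be sketched in a few lines. This is a hedged illustration only: the rating-data layout, the function name, and the generic bucket names are assumptions, and which score range maps to coarse- versus fine-grained is left unspecified.

```python
def filter_and_bucket(ratings, cutoff=2.5, lo=1.0):
    """ratings: dict entity -> list of three 1-5 rater scores (1 = most visually recognizable)."""
    averages = {e: sum(s) / len(s) for e, s in ratings.items()}
    kept = {e: a for e, a in averages.items() if a <= cutoff}   # drop hard-to-recognize entities
    width = (cutoff - lo) / 3.0                                  # three equally sized score ranges
    buckets = {"range_1": [], "range_2": [], "range_3": []}
    for e, avg in kept.items():
        if avg <= lo + width:
            buckets["range_1"].append(e)
        elif avg <= lo + 2 * width:
            buckets["range_2"].append(e)
        else:
            buckets["range_3"].append(e)
    return buckets

# Hypothetical ratings purely for illustration.
print(filter_and_bucket({"Thunderstorm": [1, 1, 2], "Some abstract entity": [4, 5, 5]}))
```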
1609.08675#22
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
22
4.1 USING STATIC HYPERNETWORKS TO GENERATE FILTERS FOR CONVOLUTIONAL NETWORKS AND MNIST

We start by applying a hypernetwork to generate the filters for a convolutional network on MNIST. Our main convolutional network is a small two layer network and the hypernetwork is used to generate the kernel for the second layer (7x7x16x16), which contains the bulk of the trainable parameters in the system. Our weight matrix will be summarized by an embedding of size N_z = 4. See Appendix A.3.1 for further experimental setup details.

For this task, the hypernetwork achieved a test accuracy of 99.24%, comparable to the 99.28% for the conventional method. In this example, a kernel consisting of 12,544 weights is represented by an embedding vector of only 4 parameters, generated by a hypernetwork that has 4240 parameters. We can see the weight matrix produced by the hypernetwork in Figure 2. Now the question is whether we can also train a deep convolutional network, using a single hypernetwork generating a set of weights for each layer, on a dataset more challenging than MNIST.

4.2 STATIC HYPERNETWORKS FOR RESIDUAL NETWORK ARCHITECTURE AND CIFAR-10
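A minimal numpy sketch of the two-layer hypernetwork for the MNIST example above, under the quoted dimensions (embedding size 4, kernel 7x7x16x16). The initialization scale and variable names are assumptions, and the final reshape of the 112x112 matrix into the convolution kernel is omitted; the counts printed at the end match the 12,544 kernel weights and 4,240 hypernetwork parameters quoted above.

```python
import numpy as np

# Dimensions from the MNIST example: 16 input channels, 16 output channels,
# 7x7 filters, embedding size N_z = 4, hypernetwork hidden size d = N_z.
N_in, N_out, f, N_z, d = 16, 16, 7, 4, 4
rng = np.random.default_rng(0)

# First hypernetwork layer: one (d x N_z) projection and bias per input-channel slice.
W = 0.01 * rng.standard_normal((N_in, d, N_z))
B = 0.01 * rng.standard_normal((N_in, d))
# Second layer: shared tensor mapping each d-vector to an (f x N_out*f) kernel slice.
W_out = 0.01 * rng.standard_normal((f, N_out * f, d))
B_out = 0.01 * rng.standard_normal((f, N_out * f))

def generate_kernel(z):
    """Expand a 4-d embedding z into the full (N_in*f) x (N_out*f) kernel matrix."""
    slices = []
    for i in range(N_in):
        a_i = W[i] @ z + B[i]                                     # (d,)
        K_i = np.tensordot(W_out, a_i, axes=([2], [0])) + B_out   # (f, N_out*f)
        slices.append(K_i)
    return np.concatenate(slices, axis=0)                         # (112, 112)

z = rng.standard_normal(N_z)
K = generate_kernel(z)
print(K.shape, K.size)                              # (112, 112) -> 12,544 kernel weights
print(W.size + B.size + W_out.size + B_out.size)    # 4,240 hypernetwork parameters
```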
1609.09106#22
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
23
steps to obtain the videos:

• Collected all videos corresponding to the 10,000 visual entities that have at least 1,000 views, using the YouTube video annotation system [2]. We excluded too short (< 120 secs) or too long (> 500 secs) videos.
• Randomly sampled 10 million videos among them.
• Obtained all entities for the sampled 10 million videos using the YouTube video annotation system. This completes the annotations.
• Filtered out entities with less than 200 videos, and videos with no remaining entities. This reduced the size of our data to 8,264,650 videos.
• Split our videos into 3 partitions, Train : Validate : Test, with ratios 70% : 20% : 10%. We publish features for all splits, but only publish labels for the Train and Validate partitions.

Table 2: Dataset partition sizes.

| Dataset | Train | Validate | Test | Total |
|---|---|---|---|---|
| YouTube-8M | 5,786,881 | 1,652,167 | 825,602 | 8,264,650 |

[Figure 5: number of videos (log scale) versus entity ID (log scale).]

# 3.3 Features
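A hedged sketch of the collection filters and the 70/20/10 split listed in Section 3.2 above. The video record fields and the entity annotations are hypothetical stand-ins for the (non-public) YouTube video annotation system.

```python
import random
from collections import Counter

def build_dataset(videos, min_views=1000, min_len=120, max_len=500,
                  sample_size=10_000_000, min_videos_per_entity=200, seed=0):
    """videos: list of dicts with hypothetical fields 'views', 'duration_sec', 'entities'."""
    eligible = [v for v in videos
                if v["views"] >= min_views and min_len <= v["duration_sec"] <= max_len]
    rng = random.Random(seed)
    sampled = rng.sample(eligible, min(sample_size, len(eligible)))

    # Drop rare entities, then drop videos left with no entities.
    counts = Counter(e for v in sampled for e in v["entities"])
    kept = []
    for v in sampled:
        entities = [e for e in v["entities"] if counts[e] >= min_videos_per_entity]
        if entities:
            kept.append({**v, "entities": entities})

    # 70% / 20% / 10% Train / Validate / Test split.
    rng.shuffle(kept)
    n = len(kept)
    return kept[:int(0.7 * n)], kept[int(0.7 * n):int(0.9 * n)], kept[int(0.9 * n):]
```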
1609.08675#23
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
23
4.2 STATIC HYPERNETWORKS FOR RESIDUAL NETWORK ARCHITECTURE AND CIFAR-10

The residual network architectures (He et al., 2016a; Zagoruyko & Komodakis, 2016) are popular for image recognition tasks, as they can accommodate very deep networks while maintaining effective gradient flow across layers using skip connections. The original resnet and subsequent derivatives (Zhang et al., 2016; Huang et al., 2016a) achieved state-of-the-art image recognition performance on a variety of public datasets. While residual networks can be very deep, and in some experiments as deep as 1001 layers (He et al., 2016b), it is important to understand whether some of these layers share common properties and can be reduced effectively by introducing weight sharing. If we enforce weight-sharing across many layers of a deep feedforward network, the network may share many properties with a recurrent network. In this experiment, we want to explore this idea of enforcing relaxed weight sharing across all of the layers of a deep residual network. We will take a simple version of a residual network and use a single hypernetwork to generate the weights of all of its layers for the image classification task on the CIFAR-10 dataset.
1609.09106#23
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
24
The original size of the video dataset is hundreds of Terabytes, and covers over 500,000 hours of video. This is impractical to process by most research teams (using a real-time video processing engine, it would take over 50 years to go through the data). Therefore, we pre-process the videos and extract frame-level features using a state-of-the-art deep model: the publicly available Inception network [4] trained on ImageNet [14]. Concretely, we decode each video at 1 frame-per-second up to the first 360 seconds (6 minutes), feed the decoded frames into the Inception network, and fetch the ReLU activation of the last hidden layer, before the classification layer (layer name pool_3/_reshape). The feature vector is 2048-dimensional per second of video. While this removes motion information from the videos, recent work shows diminishing returns from motion features as the size and diversity of the video data increases [26, 35]. The static frame-level features provide an excellent baseline, and constructing compact and efficient motion features is beyond the scope of this paper. Nonetheless, we hope to extend the dataset with audio and motion features in the future. We cap processing of each video up
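A hedged sketch of the frame-level feature extraction described above: decode at 1 frame per second for up to 360 seconds and keep the 2048-d activation just before the classifier. torchvision's ImageNet-pretrained Inception-v3 stands in for the paper's Inception network (whose TensorFlow node is pool_3/_reshape), decode_frames_1fps is a hypothetical decoder helper, and the weights argument spelling may differ across torchvision versions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.inception_v3(weights="IMAGENET1K_V1").eval()
features = {}
model.avgpool.register_forward_hook(
    lambda mod, inp, out: features.update(feat=out.flatten(1)))   # (N, 2048) pre-classifier

preprocess = T.Compose([
    T.ToPILImage(), T.Resize(342), T.CenterCrop(299), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def video_features(video_path, max_seconds=360):
    """Return one 2048-d feature per second, for up to the first 360 seconds."""
    feats = []
    for frame in decode_frames_1fps(video_path, max_seconds):   # hypothetical decoder helper
        with torch.no_grad():
            model(preprocess(frame).unsqueeze(0))
        feats.append(features["feat"].squeeze(0))
    return torch.stack(feats)
```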
1609.08675#24
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
25
Our experiment will use a version of the wide residual network (Zagoruyko & Komodakis, 2016), described in Table 1, a popular and simple variant of the family of residual network architectures, and we will focus on configurations (N = 6, K = 1) and (N = 6, K = 2), referred to as WRN 40-1 and WRN 40-2, respectively. In this setup, we will use a hypernetwork to generate all of the kernels in conv2, conv3, and conv4, so we will generate 36 layers of kernels in total. The WRN architecture uses a filter size of 3 for every kernel. We use the method outlined in the Methods section to deal with kernels of varying sizes, and use an embedding size of N_z = 64 in our experiments. See Appendix A.3.2 for further experimental setup details. We obtained similar classification accuracy numbers as reported in (Zagoruyko & Komodakis, 2016) with our own implementation. We also note that the weights generated by the hypernetwork are used in a batch normalization setting without modification to the original model. In principle, hypernetworks can also be applied to the newer variants of residual networks with more skip connections, such as DenseNets and ResNets of ResNets.
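A hedged sketch of the relaxed weight sharing used here: every 3x3 kernel is produced by one shared two-layer linear generator, and each layer only owns its own 64-d embedding(s); larger kernels are assembled by tiling generated basic (16-in, 16-out) blocks. The exact tiling bookkeeping and the class names are simplifying assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class KernelGenerator(nn.Module):
    """Shared two-layer linear generator: 64-d embedding -> one basic 16x16x3x3 kernel block."""
    def __init__(self, z_dim=64, f=3, base_in=16, base_out=16):
        super().__init__()
        self.shape = (base_out, base_in, f, f)
        self.net = nn.Sequential(nn.Linear(z_dim, z_dim),
                                 nn.Linear(z_dim, base_out * base_in * f * f))

    def forward(self, z):
        return self.net(z).view(self.shape)

gen = KernelGenerator()

def layer_kernel(embeddings, n_out, n_in, base=16):
    """Tile (n_out//base) x (n_in//base) generated blocks into one layer's kernel."""
    rows, idx = [], 0
    for _ in range(n_out // base):
        blocks = [gen(embeddings[idx + j]) for j in range(n_in // base)]
        idx += n_in // base
        rows.append(torch.cat(blocks, dim=1))    # concatenate along input channels
    return torch.cat(rows, dim=0)                # concatenate along output channels

# e.g. a 32-out, 32-in conv layer owns 4 embeddings of its own; the generator is shared.
embeddings = [torch.randn(64) for _ in range(4)]
print(layer_kernel(embeddings, n_out=32, n_in=32).shape)   # torch.Size([32, 32, 3, 3])
```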
1609.09106#25
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
26
Figure 5: Number of videos in log-scale versus entity rank in log scale. Entities were sorted by number of videos. We note that this somewhat follows the natural Zipf distribution.

Afterwards, we apply PCA (+ whitening) to reduce feature dimensions to 1024, followed by quantization (1 byte per coefficient). These two compression techniques reduce the size of the data by a factor of 8. The mean vector and covariance matrix for PCA were computed on all frames from the Train partition. We quantize each 32-bit float into 256 distinct values (8 bits) using optimally computed (non-uniform) quantization bin boundaries. We confirmed that the size reduction does not significantly hurt the evaluation metrics. In fact, training all baselines on the full-size data (8 times larger than what we publish) increases all evaluation metrics by less than 1%. Note that while this dataset comes with standard frame-level features, it leaves a lot of room for investigating video representation learning approaches on top of the fixed frame-level features (see Section 4 for approaches we explored).

# 3.4 Dataset Statistics

The YouTube-8M dataset contains 4,800 classes and a total of
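A hedged numpy sketch of the compression pipeline described above: PCA with whitening down to 1024 dimensions followed by 8-bit quantization. Quantile-based bin boundaries are used here as a stand-in for the optimally computed non-uniform boundaries mentioned in the text.

```python
import numpy as np

def fit_pca_whiten(train_feats, out_dim=1024, eps=1e-8):
    """train_feats: (n_frames, 2048) array of Train-partition frame features."""
    mean = train_feats.mean(axis=0)
    cov = np.cov(train_feats - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = np.argsort(eigvals)[::-1][:out_dim]
    proj = eigvecs[:, top] / np.sqrt(eigvals[top] + eps)    # whitening: unit variance per component
    return mean, proj

def fit_quantizer(projected, n_bins=256):
    """Non-uniform bin boundaries; empirical quantiles stand in for the optimal boundaries."""
    cuts = np.linspace(0, 100, n_bins + 1)[1:-1]            # 255 interior cut points
    return np.percentile(projected, cuts)

def compress(x, mean, proj, boundaries):
    z = (x - mean) @ proj                                    # 2048 -> 1024, whitened
    return np.digitize(z, boundaries).astype(np.uint8)       # 1 byte per coefficient
```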
1609.08675#26
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.08675
27
# 3.4 Dataset Statistics

The YouTube-8M dataset contains 4,800 classes and a total of

[Figure 6: two bar charts of top-level category statistics — (a) the number of entities in each top-level category (0-1000 scale) and (b) the number of train videos (log scale) per top-level category — over categories including Games, Arts & Entertainment, Autos & Vehicles, Sports, Food & Drink, Business & Industrial, Computers & Electronics, Science, Pets & Animals, Shopping, Home & Garden, Hobbies & Leisure, People & Society, Beauty & Fitness, Travel, Books & Literature, Reference, Law & Government, Internet & Telecom, News, Health, Jobs & Education, Finance, and Real Estate.]

(a) Number of entities in each top-level category.
1609.08675#27
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
27
| Model | Test Error | Param Count |
|---|---|---|
| Network in Network (Lin et al., 2014) | 8.81% | |
| FitNet (Romero et al., 2014) | 8.39% | |
| Deeply Supervised Nets (Lee et al., 2015) | 8.22% | |
| Highway Networks (Srivastava et al., 2015) | 7.12% | |
| ELU (Clevert et al., 2015) | 6.55% | |
| Original Resnet-110 (He et al., 2016a) | 6.43% | 1.7 M |
| Stochastic Depth Resnet-110 (Huang et al., 2016b) | 5.23% | 1.7 M |
| Wide Residual Network 40-1 (Zagoruyko & Komodakis, 2016) | 6.85% | 0.6 M |
| Wide Residual Network 40-2 (Zagoruyko & Komodakis, 2016) | 5.33% | 2.2 M |
| Wide Residual Network 28-10 (Zagoruyko & Komodakis, 2016) | 4.17% | 36.5 M |
| ResNet of ResNet 58-4 (Zhang et al., 2016) | 3.77% | 13.3 M |
| DenseNet (Huang et al., 2016a) | 3.74% | 27.2 M |
| Wide Residual Network 40-1 (our implementation) | 6.73% | 0.563 M |
| Hyper Residual Network 40-1 (ours) | 8.02% | 0.097 M |

Wide Residual
1609.09106#27
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
28
(a) Number of entities in each top-level category. (b) Number of train videos in log-scale per top-level category.

Figure 6: Top-level category statistics of the YouTube-8M dataset.

8,264,650 videos. A video may be annotated with more than one class and the average number of classes per video is 1.8. Table 2 shows the number of videos for which we are releasing features, across the three datasets. ated on the human-based ground truth), if one explicitly models incorrect [29] (78.8% precision) or missing [40, 25] (14.5% recall) training labels. We believe this is an exciting area of research that this dataset will enable at scale. We processed only the first six minutes of each video, at 1 frame-per-second. The average length of a video in the dataset is 229.6 seconds, which amounts to ∼1.9 billion frames (and corresponding features) across the dataset.
1609.08675#28
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.08675
29
We grouped the 4,800 entities into 24 top-level categories to measure statistics and illustrate diversity. Although we do not use these categories during training, we are releasing the entity-to-category mapping for completeness. Table 1 shows the top entities per category. Note that while some categories themselves may not seem visual, most of the entities within them are visual. For instance, Jobs & Education includes universities, classrooms, lectures, etc., and Law & Government includes police, emergency vehicles, military-related entities, which are well represented and visual. Figure 5 shows a log-log scale distribution of entities and videos. Figures 6a and 6b show the size of categories, respectively, in terms of the number of entities and the number of videos.

# 4. BASELINE APPROACHES

# 4.1 Models from Frame Features
1609.08675#29
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
29
Table 2: CIFAR-10 Classification with hypernetwork generated weights.

parameters in the model as a trade off. One reason for this reduction in accuracy is that different layers of a deep network are trained to extract different levels of features, and require different kinds of filters to perform optimally. The hypernetwork enforces some commonality between every layer, but offers each layer 64 degrees of freedom to distinguish itself from the other layers. While the network is no longer able to learn the optimal set of filters for each layer, it will learn the best set of filters given the constraints, and the resulting number of model parameters is drastically reduced.

4.3 HYPERLSTM FOR CHARACTER-LEVEL PENN TREEBANK LANGUAGE MODELLING

The HyperLSTM model is evaluated on the character-level prediction task on the Penn Treebank corpus (Marcus et al., 1993) using the train/validation/test split outlined in (Mikolov et al., 2012). As the dataset is quite small and prone to overfitting, we apply dropout on both input and output layers with a keep probability of 0.90. Unlike previous approaches (Graves, 2013; Ognawala & Bayer, 2014) of applying weight noise during training, we instead also apply dropout to the recurrent layer (Henaff et al., 2016) with the same dropout probability.
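A hedged sketch of the dropout scheme described above: dropout with keep probability 0.90 around the recurrent core, plus dropout applied to the candidate cell update in the style of recurrent dropout. The cell implementation and sizes are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentDropoutLSTMCell(nn.Module):
    """LSTM cell with dropout on the candidate cell update (recurrent dropout)."""
    def __init__(self, input_size, hidden_size, dropout=0.1):   # 0.1 = 1 - keep prob 0.90
        super().__init__()
        self.linear = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        self.dropout = dropout

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.linear(torch.cat([x, h], dim=-1)).chunk(4, dim=-1)
        g = F.dropout(torch.tanh(g), p=self.dropout, training=self.training)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * g
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

input_dropout = nn.Dropout(p=0.1)    # applied to the character embeddings
output_dropout = nn.Dropout(p=0.1)   # applied before the softmax layer
```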
1609.09106#29
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
30
# 4. BASELINE APPROACHES

# 4.1 Models from Frame Features

One of the challenges with this dataset is that we only have video-level ground-truth labels. We do not have any additional information that specifies how the labels are localized within the video, nor their relative prominence in the video, yet we want to infer their importance for the full video. In this section, we consider models trained to predict the main themes of the video using the input frame-level features. Frame-level models have shown competitive performance for video-level tasks in previous work [19, 26]. A video v is given by a sequence of frame-level features x^v_{1:F_v}, where x^v_j is the feature of the jth frame from video v.

# 4.1.1 Frame-Level Models and Average Pooling

# 3.5 Human Rated Test Set
1609.08675#30
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
30
We compare our model to the basic LSTM cell, stacked LSTM cells (Graves, 2013), and LSTM with layer normalization applied. In addition, we also experimented with applying layer normalization to HyperLSTM. Using the setup in (Graves, 2013), we use networks with 1000 units and train the network to predict the next character. In this task, the HyperLSTM cell has 128 units and a signal size of 4. As the HyperLSTM cell has more trainable parameters compared to the basic LSTM Cell, we also experimented with an LSTM Cell with 1250 units. For more details regarding experimental setup, please refer to Appendix A.3.3.

It is interesting to note that combining Recurrent Dropout with a basic LSTM cell achieves quite formidable performance. Our implementation of the Recurrent Dropout Basic LSTM cell reproduced similar results to (Semeniuta et al., 2016), where they have also experimented with different dropout settings. We also found that Layer Norm LSTM performed quite well when combined with recurrent dropout, making it both a formidable baseline and also an extension for HyperLSTM.
1609.09106#30
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
31
# 4.1.1 Frame-Level Models and Average Pooling

# 3.5 Human Rated Test Set

The annotations from the YouTube video annotation system can be noisy and incomplete, as they are automatically generated from metadata, anchor text, comments, and user engagement signals [2]. To quantify the noise, we uniformly sampled over 8000 videos from the Test partition, and used 3 human raters per video to exhaustively rate their labels. We measured the precision and recall of the ground truth labels to be 78.8% and 14.5%, respectively, with respect to the human raters. Note that typical inter-rater agreement on similar annotation tasks with human raters is also around 80% so the precision of these ground truth labels is perhaps comparable to (non-expert) human-provided labels. The recall, however, is low, which makes this an excellent test bed for approaches that deal with missing data. We report the accuracy of our models primarily on the (noisy) Validate partition but also show some results on the much smaller human-rated set, showing that some of the metrics are surprisingly similar on the two datasets.
1609.08675#31
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
31
In addition to outperforming both the larger and the deeper versions of the LSTM network, HyperLSTM also achieved performance similar to Layer Norm LSTM. This suggests that by dynamically adjusting the weight scaling vectors, the HyperLSTM cell has learned a policy of scaling inputs to the activation functions that is as efficient as the statistical moments-based strategy employed by Layer Norm, and that the extra computation required is embedded inside the extra 128 units inside the HyperLSTM cell. When we combine HyperLSTM with Layer Norm, we see an additional performance gain, implying that the HyperLSTM cell learned an adjustment policy that goes beyond moments-based regularization. We also demonstrate that increasing the size of the embedding vector or stacking HyperLSTM layers together can also increase its performance.
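The dynamic weight-scaling behaviour discussed above can be illustrated with a toy cell: an auxiliary LSTM with 128 units emits a small embedding each step, which rescales the main LSTM's gate pre-activations. This is a hedged sketch of the general idea, not the paper's exact HyperLSTM equations (those live in Appendix A.2.2, not reproduced here); the class name and the exact way the scales are applied are assumptions.

```python
import torch
import torch.nn as nn

class ToyHyperLSTMCell(nn.Module):
    """Main LSTM whose gate pre-activations are rescaled each step by a small auxiliary LSTM."""
    def __init__(self, input_size, hidden_size, hyper_size=128, z_size=4):
        super().__init__()
        self.main = nn.LSTMCell(input_size, hidden_size)
        self.hyper = nn.LSTMCell(input_size + hidden_size, hyper_size)
        self.to_z = nn.Linear(hyper_size, z_size)              # hyper state -> small embedding
        self.z_to_scale = nn.Linear(z_size, 4 * hidden_size)   # embedding -> per-unit scales

    def forward(self, x, main_state, hyper_state):
        h, c = main_state
        hyper_h, hyper_c = self.hyper(torch.cat([x, h], dim=-1), hyper_state)
        scale = self.z_to_scale(self.to_z(hyper_h))            # input-dependent, per time step
        gates = (x @ self.main.weight_ih.t() + h @ self.main.weight_hh.t()
                 + self.main.bias_ih + self.main.bias_hh) * scale
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return (h, c), (hyper_h, hyper_c)
```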
1609.09106#31
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
32
Since we do not have frame-level ground-truth, we assign the video-level ground-truth to every frame within that video. More sophisticated formulations based on multiple-instance learning are left for future work. From each video, we sample 20 random frames and associate all frames to the video-level ground-truth. This results in about 120 million frames. For each entity e, we get 120M instances of (x_i, y^e_i) pairs, where x_i ∈ R^1024 is the Inception feature and y^e_i ∈ {0, 1} is the ground-truth associated with entity e for the ith example. We train 4800 independent one-vs-all classifiers for each entity e. We use the online training framework after parallelizing the work for each entity across multiple workers. During inference, we score every frame in the test video using the models for all classes. Since all our evaluations are based on video-level ground truths, we need to aggregate the frame-level scores (for each entity) to a single video-level score. The frame-level probabilities are aggregated to the video-level using a simple average. We choose average instead of max pooling since we want to reduce the effect of outlier detections and capture the prominence of each entity
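A hedged sketch of the one-vs-all frame-level baseline described above, with scikit-learn logistic regression standing in for the online training framework used in the paper; frame scores are averaged into a single video-level score at inference.

```python
from sklearn.linear_model import LogisticRegression

def train_entity_models(frame_feats, frame_labels, entities):
    """frame_feats: (n_frames, 1024); frame_labels: dict entity -> (n_frames,) array of 0/1."""
    return {e: LogisticRegression(max_iter=1000).fit(frame_feats, frame_labels[e])
            for e in entities}

def video_scores(models, video_frame_feats):
    """Average per-frame probabilities over the video, as in Eq. (1)."""
    return {e: float(m.predict_proba(video_frame_feats)[:, 1].mean())
            for e, m in models.items()}
```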
1609.08675#32
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
32
| Model | Test | Validation | Param Count |
|---|---|---|---|
| ME n-gram (Mikolov et al., 2012) | 1.37 | | |
| Batch Norm LSTM (Cooijmans et al., 2016) | 1.32 | | |
| Recurrent Dropout LSTM (Semeniuta et al., 2016) | 1.301 | 1.338 | |
| Zoneout RNN (Krueger et al., 2016) | 1.27 | | |
| HM-LSTM (Chung et al., 2016) | 1.27 | | |
| LSTM, 1000 units | 1.312 | 1.347 | 4.25 M |
| LSTM, 1250 units | 1.306 | 1.340 | 6.57 M |
| 2-Layer LSTM, 1000 units | 1.281 | 1.312 | 12.26 M |
| Layer Norm LSTM, 1000 units | 1.267 | 1.300 | 4.26 M |
| HyperLSTM (ours), 1000 units | 1.265 | 1.296 | 4.91 M |
| Layer Norm HyperLSTM, 1000 units (ours) | 1.250 | 1.281 | 4.92 M |
| Layer Norm HyperLSTM, 1000 units, Large Embedding (ours) | 1.233 | 1.263 | 5.06 M |
| 2-Layer Norm HyperLSTM, 1000 units | 1.219 | 1.245 | 14.41 M |

Table 3: Bits-per-character on the Penn Treebank test set.
1609.09106#32
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.09106
33
Table 3: Bits-per-character on the Penn Treebank test set.

4.4 HYPERLSTM FOR HUTTER PRIZE WIKIPEDIA LANGUAGE MODELLING

We train our model on the larger and more challenging Hutter Prize Wikipedia dataset, also known as enwik8 (Hutter, 2012), consisting of a sequence of 100M characters composed of 205 unique characters. Unlike Penn Treebank, enwik8 contains some foreign words (Latin, Arabic, Chinese), indented XML, metadata, and internet addresses, making it a more realistic and practical dataset to test character language models. For more details regarding experimental setup, please refer to Appendix A.3.4. Examples of the mixed variety of text that our HyperLSTM model can generate are in Appendix A.4.
1609.09106#33
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
34
While the baselines in section 4 show very promising results, we believe that they can be significantly improved (when evalu-

Figure 7: The network architecture of the DBoF approach. Input frame features are first fed into an up-projection layer with shared parameters for all frames. This is followed by a pooling layer that converts the frame-level sparse codes into a video-level representation. A few hidden layers and a classification layer provide the final video-level predictions.

p_v(e|x^v_{1:F_v}) of the entity e associated with the video v as

p_v(e|x^v_{1:F_v}) = (1/F_v) Σ_{j=1}^{F_v} p(e|x^v_j)   (1)

# 4.1.2 Deep Bag of Frame (DBoF) Pooling
1609.08675#34
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
34
Model¹ | enwik8 (BPC) | Param Count
Stacked LSTM (Graves, 2013) | 1.67 | 27.0 M
MRNN (Sutskever et al., 2011) | 1.60 |
GF-RNN (Chung et al., 2015) | 1.58 | 20.0 M
Grid-LSTM (Kalchbrenner et al., 2016) | 1.47 | 16.8 M
LSTM (Rocki, 2016b) | 1.45 |
MI-LSTM (Wu et al., 2016) | 1.44 |
Recurrent Highway Networks (Zilly et al., 2016) | 1.42 | 8.0 M
Recurrent Memory Array Structures (Rocki, 2016a) | 1.40 |
HM-LSTM³ (Chung et al., 2016) | 1.40 |
Surprisal Feedback LSTM⁴ (Rocki, 2016b) | 1.37 |
LSTM, 1800 units, no recurrent dropout² | 1.470 | 14.81 M
LSTM, 2000 units, no recurrent dropout² | 1.461 | 18.06 M
Layer Norm LSTM, 1800 units² | 1.402 | 14.82 M
HyperLSTM (ours), 1800 units | 1.391 | 18.71 M
Layer Norm HyperLSTM, 1800 units (ours) | 1.353 | 18.78 M
Layer Norm HyperLSTM, 2048 units (ours) | 1.340 | 26.54 M
1609.09106#34
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
35
# 4.1.2 Deep Bag of Frame (DBoF) Pooling Inspired by the success of various classic bag-of-words representations for video classification [23, 36], we next consider a Deep Bag-of-Frames (DBoF) approach. Figure 7 shows the overall architecture of our DBoF network for video classification. The N-dimensional input frame-level features from k randomly selected frames of a video are first fed into a fully connected layer of M units with RELU activations. Typically, with M > N, the input features are projected onto a higher dimensional space. Crucially, the parameters of the fully connected layer are shared across the k input frames. Along with the RELU activation, this leads to a sparse coding of the input features in the M-dimensional space.
1609.08675#35
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
35
Table 4: Bits-per-character on the enwik8 test set. We see that HyperLSTM is once again competitive with Layer Norm LSTM, and if we combine both techniques, Layer Norm HyperLSTM achieves respectable results. The version of HyperLSTM that uses 2048 hidden units achieves near state-of-the-art performance for this task. In addition, HyperLSTM converges quicker per training step compared to LSTM and Layer Norm LSTM. Please refer to Figure 6 for the loss graphs. ¹We do not compare against methods that use dynamic evaluation. ²Our implementation. ³Based on results of version 2 at the time of writing: http://arxiv.org/abs/1609.01704v2. ⁴This method uses information about test errors during inference for predicting the next characters, hence it is not directly comparable to other methods that do not use this information.
1609.09106#35
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
36
The obtained sparse codes are fed into a pooling layer that aggregates the codes of the k frames into a single fixed-length video representation. We use max pooling to perform the aggregation. We use a batch normalization layer before pooling to improve stability and speed up convergence. The obtained fixed-length descriptor of the video can now be classified into the output classes using a Logistic or Softmax layer with additional fully connected layers in between. The M dimensions of the projection layer could be thought of as M discriminative clusters which can be trained in a single network end to end using backpropagation.
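The following is a minimal NumPy sketch of the DBoF forward pass described above (shared up-projection, ReLU, max pooling over frames, then fully connected layers); the layer sizes and random weights are assumptions for illustration, and batch normalization and training are omitted.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dbof_forward(frames, W_proj, b_proj, W_hidden, b_hidden, W_out, b_out):
    """Deep Bag-of-Frames forward pass (inference only, no batch norm).

    frames: (k, N) features of k sampled frames.
    W_proj/b_proj: up-projection to M units (M > N), shared across frames.
    """
    codes = relu(frames @ W_proj + b_proj)      # (k, M) sparse codes, shared weights
    video = codes.max(axis=0)                   # max pooling over frames -> (M,)
    hidden = relu(video @ W_hidden + b_hidden)  # intermediate fully connected layer
    logits = hidden @ W_out + b_out             # per-entity logits
    return 1.0 / (1.0 + np.exp(-logits))        # independent logistic outputs

# Illustrative sizes: N=1024 input, M=8192 projection, 1024 hidden, 4800 entities.
rng = np.random.default_rng(0)
N, M, H, E, k = 1024, 8192, 1024, 4800, 30
params = [rng.standard_normal(s) * 0.01
          for s in [(N, M), (M,), (M, H), (H,), (H, E), (E,)]]
probs = dbof_forward(rng.standard_normal((k, N)), *params)
print(probs.shape)  # (4800,)
```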
1609.08675#36
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
36
In 1955-37 most American and Europeans signed into the sea. An absence of [[Japan (Korea city)|Japan]], the Mayotte like Constantinople (in its first week, in [[880]]) that served as the mother of emperors, as the Corinthians, Bernard on his continued sequel together ordered [[Operation Moabili]]. The Gallup churches in the army promulgated the possessions sitting at the reservation, and [[Melito de la Vegeta Provine|Felix]] had broken Diocletian desperate from the full victory of Augustus, cited by Stephen I. Alexander Senate became Princess Cartara, an annual ruler of war (777-184) and founded numerous extremiti of justice practitioners. Figure 4: Example text generated from the HyperLSTM model. We visualize how four of the main RNN's weight matrices effectively change over time, by plotting the intensity of the changes made to them below the text.
1609.09106#36
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
37
The entire network is trained using Stochastic Gradient Descent (SGD) with logistic loss for a logistic layer and cross-entropy loss for a softmax layer. The backpropagated gradients from the top layer train the weight vectors of the projection layer in a discriminative fashion in order to provide a powerful representation of the input bag of features. A similar network was proposed in [26] where the convolutional layer outputs are pooled across all the frames of a video to obtain a fixed-length descriptor. However, the network in [26] does not use an intermediate projection layer, which we found to be a crucial difference when learning from input frame features. Note that the up-projection layer into sparse codes is similar to what Fisher Vectors [27] and VLAD [15] approaches do, but the projection (i.e., clustering) is done discriminatively here. We also experimented with Fisher Vectors and VLAD but were not able to obtain competitive results using comparable codebook sizes.
1609.08675#37
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.08675
38
also experimented with Fisher Vectors and VLAD but were not able to obtain competitive results using comparable codebook sizes. Hyperparameters: We considered values of {2048, 4096, 8192} for the number of units in the projection layer of the network and found that larger values lead to better results. We used 8192 for all datasets. We used a single hidden layer with 1024 units between the pooling layer and the final classification layer in all experiments. The network was trained using SGD with AdaGrad, a learning rate of 0.1, and a weight decay penalty of 0.0005. # 4.1.3 Long Short-Term Memory (LSTM) We take a similar approach to [26] to utilize LSTMs for video-level prediction. However, unlike that work, we do not have access to the raw video frames. This means that we can only train the LSTM and Softmax layers. We experimented with the number of stacked LSTM layers and the number of hidden units. We empirically found that 2 layers with 1024 units provided the highest performance on the validation set. Similarly to [26], we also employ linearly increasing per-frame weights going from 1/N to 1 for the last frame.
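A small sketch of the linearly increasing per-frame weighting mentioned above, assuming the weights are applied to per-frame losses; the normalization by the weight sum is an illustrative choice, not necessarily what the paper uses.

```python
import numpy as np

def frame_weights(num_frames):
    """Linearly increasing per-frame weights: 1/N for the first frame, 1 for the last."""
    return np.linspace(1.0 / num_frames, 1.0, num_frames)

# Example: weight the per-frame losses of a 60-step unrolled LSTM.
losses = np.random.rand(60)                # placeholder per-frame losses
w = frame_weights(60)
weighted_loss = (w * losses).sum() / w.sum()
print(w[:3], weighted_loss)
```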
1609.08675#38
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
38
When we use this prediction model as a generative model to sample a text passage, we use the main RNN to model a probability distribution over possible characters conditioned on the preceding characters. In the case of the HyperRNN, we allow the model parameters of this generative model to vary over time, so in effect the HyperRNN cell is choosing the best model at any given time to generate a probability distribution to sample from. We can demonstrate this by visualizing how the weight scaling vectors of the main RNN change during the character sampling process. In Figure 4, we examine a sample text passage generated by HyperLSTM after training on enwik8, along with the weight differences below the text. We see that in regions of low intensity, where the weights of the main RNN are relatively static, the types of phrases generated seem more deterministic. For example, the weights do not change much during the words Europeans, possessions and reservation. The regions of high intensity are where the HyperRNN cell is making relatively large changes to the weights of the main RNN. These tend to happen in the areas between words, or sometimes during brackets.
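To make the visualization idea concrete, here is a hedged NumPy sketch that measures how strongly the hypernetwork's weight-scaling vectors change from one generated character to the next; the array of scaling vectors is only a stand-in for values recorded during sampling, and the sizes are illustrative.

```python
import numpy as np

def scaling_change_intensity(scaling_vectors):
    """Norm of step-to-step changes in the weight-scaling vectors d_t.

    scaling_vectors: (T, n) array, one scaling vector per generated character.
    Returns a length T-1 array; large values mark steps where the HyperRNN
    makes big adjustments to the main RNN's effective weights.
    """
    diffs = np.diff(scaling_vectors, axis=0)
    return np.linalg.norm(diffs, axis=1)

# Example with a random walk standing in for recorded scaling vectors.
rng = np.random.default_rng(0)
d = np.cumsum(rng.standard_normal((200, 1800)) * 0.01, axis=0)
intensity = scaling_change_intensity(d)
print(intensity.shape, intensity[:5])
```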
1609.09106#38
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
39
During the training time, the LSTM was unrolled for 60 iterations. Therefore, the gradient horizon for LSTM was 60 seconds. We experimented with a larger number of unroll iterations, but that slowed down the training process considerably. In the end, the best model was the one trained for the largest number of steps (rather than the most real time). In order to transfer the learned model to ActivityNet, we used a fully-connected model which uses as inputs the concatenation of the LSTM layers' outputs as computed at the last frame of the videos in each of these two benchmarks. Unlike traditional transfer learning methods, we do not fine-tune the LSTM layers. This approach is more robust to overfitting than traditional methods, which is crucial for obtaining competitive performance on ActivityNet due to its size. We did perform full fine-tuning experiments on Sports-1M, which is large enough to fine-tune the entire LSTM model after pre-training. # 4.2 Video level representations
1609.08675#39
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
39
One might also wonder whether the HyperLSTM cell (without Layer Norm), via dynamically tuning the weight scaling vectors, has developed a policy that is similar to the statistics-based approach used by Layer Norm, given that both methods have similar performance. One way to see this effect is to look at the histogram of the hidden states in the network. In Figure 5, we examine the histograms of φ(c_t), the hidden state of the LSTM before applying the output gate. Figure 5: Normalized histogram plots of φ(c_t) for the LSTM, Layer Norm LSTM, HyperLSTM, and Layer Norm HyperLSTM models during sampling.
1609.09106#39
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
40
# 4.2 Video level representations Instead of training classifiers directly on frame-level features, we also explore extracting a task-independent fixed-length video-level feature vector from the frame-level features x^v_{1:Fv} for each video v. There are several benefits of extracting fixed-length video features: 1. Standard classifiers can apply: Since the dimensionality of the representations is fixed across videos, we may train standard classifiers like logistic, SVM, mixture of experts. 2. Compactness: We get a compact representation for the entire video, thereby reducing the training data size by a few orders of magnitude. 3. More suitable for domain adaptation: Since the video-level representations are unsupervised (extracted independently of the labels), these representations are far less specialized to the labels associated with the current dataset, and can generalize better to new tasks or video domains.
1609.08675#40
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
40
Figure 5: Normalized histogram plots of φ(c_t) for different models during sampling. We see that the normalization process employed by Layer Norm reduces the saturation effects compared to the vanilla LSTM. However, for the case of the HyperLSTM, we notice that most of the time the cell is saturated. The HyperLSTM cell's dynamic weight adjustment policy appears to be doing something very different compared to statistical normalization, although the policy it came up with ended up providing similar performance as Layer Norm. It is interesting to see that when we combine both methods, the HyperLSTM cell will need to determine an adjustment policy in spite of the normalization forced upon it by Layer Norm. An interesting question is whether there are problems where statistical normalization may actually be a setback to the policy developed by the HyperLSTM, and the best strategy is to ignore it.
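A small sketch of how such a histogram could be computed, assuming φ is tanh applied to recorded cell states; the synthetic input below is only a placeholder for values collected during sampling.

```python
import numpy as np

def normalized_histogram(cell_states, bins=50):
    """Normalized histogram of phi(c_t) = tanh(c_t) collected during sampling.

    cell_states: array of LSTM cell-state values c_t (any shape); values near
    +/-1 after tanh indicate saturation.
    """
    phi = np.tanh(np.asarray(cell_states).ravel())
    counts, edges = np.histogram(phi, bins=bins, range=(-1.0, 1.0))
    return counts / counts.sum(), edges

# Example with synthetic cell states standing in for recorded values.
rng = np.random.default_rng(0)
hist, edges = normalized_histogram(rng.standard_normal((1000, 1800)) * 2.0)
print(hist.max(), edges[0], edges[-1])
```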
1609.09106#40
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
41
Formally, a video-level feature φ(x^v_{1:Fv}) is a fixed-length representation (at the video-level). We explore a simple aggregation technique for getting these video-level representations. We also experimented with Fisher Vectors (FV) [27] and VLAD [15] approaches for task-independent video-level representations but were not able to achieve competitive results for FV or VLAD representations of similar dimensionality. We leave it as future work to come up with compact FV or VLAD type representations that outperform the much simpler approach described below. # 4.2.1 First, second order and ordinal statistics
1609.08675#41
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
41
Figure 6: Loss graph for enwik8 (left) and loss graph for handwriting generation (right); the curves compare LSTM, 2-Layer LSTM, Layer Norm LSTM, HyperLSTM, and Layer Norm HyperLSTM against training steps. 4.5 HYPERLSTM FOR HANDWRITING SEQUENCE GENERATION
1609.09106#41
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
42
# 4.2.1 First, second order and ordinal statistics Given the frame-level features x^v_j ∈ R^1024, we extract the mean µ_v ∈ R^1024 and the standard-deviation σ_v ∈ R^1024. Additionally, we also extract the top 5 ordinal statistics for each dimension. Formally, TopK(x^v(j)_{1:Fv}) returns a K-dimensional vector where the pth dimension contains the pth highest value of the feature-vector's jth dimension over the entire video. We denote TopK(x^v_{1:Fv}) to be a KD-dimensional vector obtained by concatenating the ordinal statistics for each dimension. Thus, the resulting feature-vector φ(x^v_{1:Fv}) for the video becomes: φ(x^v_{1:Fv}) = [ µ(x^v_{1:Fv}) ; σ(x^v_{1:Fv}) ; TopK(x^v_{1:Fv}) ].   (2) # 4.2.2 Feature normalization Standardization of features has been proven to help with online learning algorithms [14, 37] as it makes the updates using Stochastic Gradient Descent (SGD) based algorithms (like Adagrad) more robust to learning rates, and speeds up convergence.
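A minimal NumPy sketch of Eq. (2), computing the mean, standard deviation, and top-K ordinal statistics per dimension; K = 5 and the 1024-d features follow the text, while the random input is a placeholder.

```python
import numpy as np

def video_level_features(frame_features, k=5):
    """Fixed-length video descriptor from Eq. (2): [mean; std; top-k ordinal stats].

    frame_features: (F_v, D) frame-level features; returns a (D + D + k*D,) vector.
    """
    mu = frame_features.mean(axis=0)
    sigma = frame_features.std(axis=0)
    # Top-k values per dimension, highest first (pth entry = pth largest value).
    topk = -np.sort(-frame_features, axis=0)[:k]       # (k, D)
    return np.concatenate([mu, sigma, topk.ravel()])

# Example: 240 frames of 1024-d features -> 1024 * (2 + 5) = 7168-d video vector.
x = np.random.default_rng(0).standard_normal((240, 1024))
print(video_level_features(x).shape)  # (7168,)
```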
1609.08675#42
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
42
4.5 HYPERLSTM FOR HANDWRITING SEQUENCE GENERATION In addition to modelling discrete sequential data, we want to see how the model performs when modelling sequences of real-valued data. We will train our model on the IAM online handwriting database (Liwicki & Bunke, 2005) and have our model predict pen strokes as per Section 4.2 of (Graves, 2013). The dataset contains 12,179 handwritten lines from 221 writers, digitally recorded from a tablet. We will model the (x, y) coordinates of the pen location at each recorded time step, along with a binary indicator of pen-up/pen-down. The average sequence length is around 700 steps and the longest around 1900 steps, making the training task particularly challenging as the network needs to retain information about both the stroke history and also the handwriting style in order to predict plausible future handwriting strokes. For experimental setup details, please refer to Appendix A.3.5.
1609.09106#42
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
43
Before training our one-vs-all classifiers on the video-level representation, we apply global normalization to the feature vectors φ(x^v_{1:Fv}) (defined in equation 2). Similar to how we processed the frame features, we subtract the mean φ(.) then use PCA to decorrelate and whiten the features. The normalized video features are now approximately multivariate gaussian with zero mean and identity covariance. This makes the gradient steps across the various dimensions independent, and the learning algorithm gets an unbiased view of each dimension (since the same learning rate is applied to each dimension). Finally, the resulting features are L2-normalized. We found that these normalization techniques make our models train faster. # 4.3 Models from Video Features
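A hedged sketch of the normalization pipeline (mean subtraction, PCA whitening, L2 normalization); here the whitening transform is fit on the same small random matrix purely for illustration, whereas in practice it would be estimated on the training set.

```python
import numpy as np

def fit_pca_whitening(features, eps=1e-6):
    """Fit mean and PCA whitening transform on training video-level features (rows)."""
    mean = features.mean(axis=0)
    centered = features - mean
    cov = centered.T @ centered / max(len(features) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    whiten = eigvecs / np.sqrt(eigvals + eps)   # decorrelate and scale to unit variance
    return mean, whiten

def normalize(features, mean, whiten):
    """Subtract mean, PCA-whiten, then L2-normalize each video-level vector."""
    z = (features - mean) @ whiten
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-12)

# Example on a small random matrix standing in for video-level features.
rng = np.random.default_rng(0)
train = rng.standard_normal((500, 64))
mean, whiten = fit_pca_whitening(train)
print(normalize(train[:3], mean, whiten).shape)
```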
1609.08675#43
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
43
Model | Log-Loss | Param Count
LSTM, 900 units (Graves, 2013) | -1,026 |
3-Layer LSTM, 400 units (Graves, 2013) | -1,041 |
3-Layer LSTM, 400 units, adaptive weight noise (Graves, 2013) | -1,058 |
LSTM, 900 units, no dropout, no data augmentation¹ | -1,026 | 3.36 M
3-Layer LSTM, 400 units, no dropout, no data augmentation¹ | -1,039 | 3.26 M
LSTM, 900 units² | -1,055 | 3.36 M
LSTM, 1000 units² | -1,048 | 4.14 M
3-Layer LSTM, 400 units² | -1,068 | 3.26 M
2-Layer LSTM, 650 units² | -1,135 | 5.16 M
Layer Norm LSTM, 900 units² | -1,096 | 3.37 M
Layer Norm LSTM, 1000 units² | -1,106 | 4.14 M
Layer Norm HyperLSTM, 900 units (ours) | -1,067 | 3.95 M
HyperLSTM (ours), 900 units | -1,162 | 3.94 M
Table 5: Log-Loss of IAM Online DB validation set.
1609.09106#43
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
44
# 4.3 Models from Video Features Given the video-level representations, we train independent binary classifiers for each label using all the data. Exploiting the structure information between the various labels is left for future work. A key challenge is training these classifiers at the scale of this dataset. Even with a compact video-level representation for the 6M training videos, it is infeasible to train batch optimization classifiers, like SVM. Instead, we use online learning algorithms, and use Adagrad to perform model updates on the weight vectors given a small mini-batch of examples (each example is associated with a binary ground-truth value). # 4.3.1 Logistic Regression Given D-dimensional video-level features, the parameters Θ of the logistic regression classifier are the entity-specific weights w_e. During scoring, given x ∈ R^(D+1) to be the video-level feature of the test example, the probability of the entity e is given as p(e|x) = σ(w_e^T x). The weights w_e are obtained by minimizing the total log-loss on the training data given as: λ‖w_e‖² + Σ_{i=1..N} L(y_{i,e}, σ(w_e^T x_i)),   (3)
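A minimal sketch of the per-entity objective in Eq. (3) and a single Adagrad-style update; the learning rate, regularizer, and mini-batch below are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_objective_and_grad(w_e, X, y, lam):
    """Regularized log-loss of Eq. (3) for one entity and its gradient.

    X: (N, D) video-level features, y: (N,) binary labels for entity e.
    """
    p = sigmoid(X @ w_e)
    eps = 1e-12
    loss = lam * np.dot(w_e, w_e) - np.sum(
        y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = 2 * lam * w_e + X.T @ (p - y)
    return loss, grad

# One Adagrad-style update on a random mini-batch (illustrative hyperparameters).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((32, 128)), rng.integers(0, 2, 32)
w, g2, lr = np.zeros(128), np.zeros(128), 0.1
loss, grad = logistic_objective_and_grad(w, X, y, lam=1e-4)
g2 += grad ** 2
w -= lr * grad / (np.sqrt(g2) + 1e-8)
print(loss)
```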
1609.08675#44
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
44
Table 5: Log-Loss of IAM Online DB validation set. In this task, we note that data augmentation and applying recurrent dropout improved the performance of all models, compared to the original setup by (Graves, 2013). In addition, for the LSTM model, increasing unit count per layer may not help the performance compared to increasing the layer depth. We notice that a 3-layer 400 unit LSTM outperforms a 1-layer 900 unit one, and we found a 2-layer 650 unit LSTM outperforming most configurations. While layer norm helps with the performance, we found that in this task, layer norm does not combine well with HyperLSTM, and in this task the 900 unit HyperLSTM without layer norm achieved the best performance. Unlike the language modelling task, perhaps statistical normalization is far from the optimal approach for a weight adjustment policy. The policy learned by the HyperLSTM cell not only performed well against the baseline, its convergence rate is also as fast as the 2-layer LSTM model. Please refer to Figure 6 for the loss graphs. ¹Our implementation, to replicate the setup of (Graves, 2013). ²Our implementation, with data augmentation, dropout and recurrent dropout.
1609.09106#44
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
45
λ‖w_e‖² + Σ_{i=1..N} L(y_{i,e}, σ(w_e^T x_i)),   (3) where σ(.) is the standard logistic, σ(z) = 1/(1 + exp(−z)). # 4.3.2 Hinge Loss Since training batch SVMs on such a large dataset is impossible, we use the online SVM approach. As in the conventional SVM framework, we use −1 and +1 to represent negative and positive labels respectively. Given binary ground-truth labels y (0 or 1), and predicted labels ŷ (positive or negative scalars), the hinge loss is: L(y, ŷ) = max(0, b − (2y − 1)ŷ),   (4) where b is the hinge-loss parameter which can be fine-tuned further or set to 1.0. Due to the presence of the max function, there is a discontinuity in the first derivative. This results in the subgradient being used in the updates, slowing convergence significantly. # 4.3.3 Mixture of Experts (MoE)
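A small sketch of the hinge loss in Eq. (4) and its subgradient with respect to the weight vector, assuming the score is a linear function of the features; the example values are illustrative.

```python
import numpy as np

def hinge_loss(y, y_hat, b=1.0):
    """Hinge loss of Eq. (4): y is a 0/1 label, y_hat a real-valued score."""
    return np.maximum(0.0, b - (2.0 * y - 1.0) * y_hat)

def hinge_subgradient(y, y_hat, x, b=1.0):
    """Subgradient of the hinge loss w.r.t. the weight vector (score y_hat = w.x)."""
    sign = 2.0 * y - 1.0
    return -sign * x if b - sign * y_hat > 0 else np.zeros_like(x)

# Example: a positive video that is scored only weakly positive still incurs loss.
x = np.ones(4) * 0.1
print(hinge_loss(1, 0.3), hinge_subgradient(1, 0.3, x))
```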
1609.08675#45
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
45
performed well against the baseline; its convergence rate is also as fast as the 2-layer LSTM model. Please refer to Figure 6 for the loss graphs. In Appendix A.5, we display three sets of handwriting samples generated from LSTM, Layer Norm LSTM, and HyperLSTM, corresponding to log-loss scores of -1055, -1096, and -1162 nats respectively in Table 5. Qualitative assessment of handwriting quality is always subjective, and depends on an individual's taste in calligraphy. From looking at the examples produced by the three models, our opinion is that the samples produced by LSTM are noisier than those of the other two models. We also find HyperLSTM's samples to be a bit more coherent than the samples produced by Layer Norm LSTM. We leave it to the reader to judge which model produces handwriting samples of higher quality. Figure 7: Handwriting sample generated from the HyperLSTM model. We visualize how four of the main RNN's weight matrices effectively change over time, by plotting the norm of changes made to them over time.
1609.09106#45
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
46
# 4.3.3 Mixture of Experts (MoE)

Mixture of experts (MoE) was first proposed by Jacobs and Jordan [18]. The binary classifier for an entity e is composed of a set of hidden states, or experts, H_e. A softmax is typically used to model the probability of choosing each expert. Given an expert, we can use a sigmoid to model the existence of the entity. Thus, the final probability for entity e's existence is p(e|x) = Σ_{h∈H_e} p(h|x) σ(u_h^T x), where p(h|x) is a softmax over |H_e| + 1 states. In other words, p(h|x) = exp(w_h^T x) / (1 + Σ_{h'∈H_e} exp(w_{h'}^T x)). The last,
1609.08675#46
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
46
Similar to the earlier character generation experiment, we show a generated handwriting sample from the HyperLSTM model in Figure 7, along with a plot, below the sample, of how the weight scaling vectors of the main RNN change over time. For a more detailed interactive demonstration of handwriting generation using HyperLSTM, visit http://blog.otoro.net/2016/09/28/hyper-networks/. We see that the regions of high intensity seem to be concentrated at many discrete instances, rather than slowly varying over time. This implies that the weights experience regime changes rather than gradual slow adjustments. We can see that many of these weight changes occur between the written words, and sometimes between written characters. While the LSTM model alone already does a formidable job of generating time-varying parameters of a Mixture Gaussian distribution used to generate realistic handwriting samples, the ability to go one level deeper, and to dynamically generate the generative model, is one of the key advantages of HyperRNN over a normal RNN.

4.6 HYPERLSTM FOR NEURAL MACHINE TRANSLATION
1609.09106#46
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
47
(|H_e| + 1)th, state is a dummy state that always results in the non-existence of the entity. Denote p_{y|x} = p(y = 1|x), p_{h|x} = p(h|x) and p_{y|h,x} = p(y = 1|x, h). Given a set of training examples (x_i, g_i)_{i=1...N} for a binary classifier, where x_i is the feature vector and g_i ∈ [0, 1] is the ground-truth, let L(p, g) be the log-loss between the predicted probability and the ground-truth:

L(p, g) = −g log p − (1 − g) log(1 − p). (5)

We could directly write the derivative of L[p_{y|x}, g] with respect to the softmax weight w_h and the logistic weight u_h as

∂L[p_{y|x}, g] / ∂w_h = x p_{h|x} (p_{y|h,x} − p_{y|x}) (p_{y|x} − g) / (p_{y|x}(1 − p_{y|x})), (6)

∂L[p_{y|x}, g] / ∂u_h = x p_{h|x} p_{y|h,x} (1 − p_{y|h,x}) (p_{y|x} − g) / (p_{y|x}(1 − p_{y|x})). (7)
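For concreteness, below is a minimal NumPy sketch of the per-entity MoE prediction and the per-example gradients above; it follows the p(e|x) formulation and Eqs. (5)-(7) literally, while the function name, array shapes, and the convention of stacking all experts into single weight matrices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def moe_gradients(x, g, W, U):
    """Per-example log-loss gradients for one entity's MoE binary classifier.

    x : (d,) feature vector, g : ground-truth label in [0, 1]
    W : (num_experts, d) softmax (gating) weights w_h, one row per expert
    U : (num_experts, d) logistic weights u_h, one row per expert
    Returns (dL/dW, dL/dU), each of shape (num_experts, d).
    """
    z = np.exp(W @ x)
    p_h = z / (1.0 + z.sum())                 # p(h|x); the dummy state contributes the 1
    p_yh = 1.0 / (1.0 + np.exp(-(U @ x)))     # p(y = 1|x, h) = sigmoid(u_h^T x)
    p_y = float(p_h @ p_yh)                   # p(y = 1|x), i.e. p(e|x)

    common = (p_y - g) / (p_y * (1.0 - p_y))  # shared factor from the log-loss
    grad_W = np.outer(p_h * (p_yh - p_y) * common, x)          # Eq. (6)
    grad_U = np.outer(p_h * p_yh * (1.0 - p_yh) * common, x)   # Eq. (7)
    return grad_W, grad_U
```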
1609.08675#47
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
47
4.6 HYPERLSTM FOR NEURAL MACHINE TRANSLATION

We experiment with the Neural Machine Translation task using the same experimental setup outlined in (Wu et al., 2016). Our model is the same wordpiece model architecture with a vocabulary size of 32k, but we replace the LSTM cells with HyperLSTM cells. We benchmark the modified model on WMT'14 En→Fr using the same test/validation set split described in the GNMT paper (Wu et al., 2016). Please refer to Appendix A.3.6 for experimental setup details.

Model | Test BLEU | Log Perplexity
Deep-Att + PosUnk (Zhou et al., 2016) | 39.2 | –
GNMT WPM-32K, LSTM (Wu et al., 2016) | 38.95 | 1.027
GNMT WPM-32K, ensemble of 8 LSTMs (Wu et al., 2016) | 40.35 | –
GNMT WPM-32K, HyperLSTM (ours) | 40.03 | 0.993

Table 6: Single model results on WMT En→Fr (newstest2014)
1609.09106#47
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
48
We use Adagrad with a learning rate of 1.0 and batch size of 32 to learn the weights. Since we are training independent classifiers for each label, the work is distributed across multiple machines. For MoE models, we experimented with a varying number of mixtures (1, 2, 4), and found that performance increases by 0.5%-1% on all metrics as we go from 1 to 2, and then to 4 mixtures, but the number of model parameters correspondingly increases by 2 or 4 times. We chose 2 mixtures as a good compromise and report numbers with the 2-mixture MoE model for all datasets.

# 5. EXPERIMENTS

In this section, we first provide benchmark baseline results for the above multi-label classification approaches on the YouTube-8M dataset. We then evaluate the usefulness of video representations learned on this dataset for other tasks, such as Sports-1M sports classification and ActivityNet activity classification.

# 5.1 Evaluation Metrics
1609.08675#48
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
48
Table 6: Single model results on WMT En→Fr (newstest2014)

The HyperLSTM cell improves the performance of the existing GNMT model, achieving state-of-the-art single model results for this dataset. In addition, we demonstrate the applicability of hypernetworks to large-scale models used in production systems. Please see Appendix A.6 for actual translation samples generated from both models for a qualitative comparison.

# 5 CONCLUSION

In this paper, we presented a method to use a hypernetwork to generate weights for another neural network. Our hypernetworks are trained end-to-end with backpropagation and therefore are efficient and scalable. We focused on two use cases of hypernetworks: static hypernetworks to generate weights for a convolutional network, and dynamic hypernetworks to generate weights for recurrent networks. We found that the method works well while using fewer parameters. On image recognition, language modelling and handwriting generation, hypernetworks are competitive to or sometimes better than state-of-the-art models.

ACKNOWLEDGMENTS

We thank Jeff Dean, Geoffrey Hinton, Mike Schuster and the Google Brain team for their help with the project.

# REFERENCES
1609.09106#48
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.09106
49
ACKNOWLEDGMENTS

We thank Jeff Dean, Geoffrey Hinton, Mike Schuster and the Google Brain team for their help with the project.

# REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.
1609.09106#49
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
50
Input Features | Modeling Approach | mAP | Hit@1 | PERR
Frame-level, {x^v_{1:Fv}} | Logistic + Average (4.1.1) | 11.0 | 50.8 | 42.2
Frame-level, {x^v_{1:Fv}} | Deep Bag of Frames (4.1.2) | 26.9 | 62.7 | 55.1
Frame-level, {x^v_{1:Fv}} | LSTM (4.1.3) | 26.6 | 64.5 | 57.3
Video-level, µ | Hinge loss (4.3) | 17.0 | 56.3 | 47.9
Video-level, µ | Logistic Regression (4.3) | 28.1 | 60.5 | 53.0
Video-level, µ | Mixture-of-2-Experts (4.3) | 29.6 | 62.3 | 54.9
Video-level, [µ; σ; Top5] | Mixture-of-2-Experts (4.3) | 30.0 | 63.3 | 55.8
1609.08675#50
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
50
M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.

Jimmy L. Ba, Jamie R. Kiros, and Geoffrey E. Hinton. Layer normalization. NIPS, 2016.

Luca Bertinetto, Joao F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. In NIPS, 2016.

Christopher M. Bishop. Mixture density networks. Technical report, 1994.

Junyoung Chung, Caglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural networks. arXiv preprint arXiv:1502.02367, 2015.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
1609.09106#50
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
51
Table 3: Results of the various benchmark baselines on the YouTube-8M dataset. We find that binary classifiers on simple video-level representations perform substantially better than frame-level approaches. Deep learning methods such as DBoF and LSTMs do not provide a substantial boost over traditional dense feature aggregation methods because the underlying frame-level features are already very strong.

Approach | Hit@1 | PERR | Hit@5
Deep Bag of Frames (DBoF) (4.1.2) | 68.6 | 29.0 | 83.5
LSTM (4.1.3) | 69.1 | 30.5 | 84.7
Mixture-of-2-Experts ([µ; σ; Top5]) (4.3) | 70.1 | 29.1 | 84.8

Table 4: Results of the three best approaches on the human rated test set of the YouTube-8M dataset. A comparison with the results on the validation set (Table 3) shows that the relative strengths of the different approaches are largely preserved on both sets.

where I(.) is the indicator function. The average precision, approximating the area under the precision-recall curve, can then be computed as

AP = Σ_{j=1}^{10000} P(τ_j)[R(τ_j) − R(τ_{j+1})], (10)
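As an illustration of the bucketed formulation in Eq. (10), here is a minimal NumPy sketch of the per-class average precision, assuming the thresholds τ_j sweep (0, 1] in 10000 equal steps; the function name, threshold convention, and handling of empty prediction sets are assumptions, not the released evaluation code.

```python
import numpy as np

def average_precision(scores, labels, num_thresholds=10000):
    """Bucketed average precision for one class.

    scores : (num_examples,) predicted probabilities for this class
    labels : (num_examples,) binary ground-truth for this class
    """
    taus = np.arange(1, num_thresholds + 1) / num_thresholds
    total_pos = max(labels.sum(), 1)
    ap = 0.0
    prev_recall = 0.0                      # plays the role of R(tau_{j+1})
    for tau in taus[::-1]:                 # iterate from high to low threshold
        predicted = scores >= tau
        tp = float((predicted & (labels > 0)).sum())
        precision = tp / max(predicted.sum(), 1)
        recall = tp / total_pos
        ap += precision * (recall - prev_recall)   # P(tau_j)[R(tau_j) - R(tau_{j+1})]
        prev_recall = recall
    return ap
```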
1609.08675#51
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
51
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Tim Cooijmans, Nicolas Ballas, Cesar Laurent, and Caglar Gülçehre. Recurrent Batch Normalization. arXiv:1603.09025, 2016.

Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. In NIPS, 2016.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting Parameters in Deep Learning. In NIPS, 2013.

Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, Frederic Besse, David Pfau, Max Jaderberg, Marc Lanctot, and Daan Wierstra. Convolution by evolution: Differentiable pattern producing networks. In GECCO, 2016.

Faustino Gomez and Jürgen Schmidhuber. Evolving modular fast-weight networks for control. In ICANN, 2005.

Alex Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.
1609.09106#51
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
52
AP = Σ_{j=1}^{10000} P(τ_j)[R(τ_j) − R(τ_{j+1})], (10)

where τ_j = j/10000. The mean average precision (mAP) is computed as the unweighted mean of all the per-class average precisions.

Hit@k: This is the fraction of test samples that contain at least one of the ground truth labels in the top k predictions. If rank_{v,e} is the rank of entity e on video v (with the best scoring entity having rank 1), and G_v is the set of ground-truth entities for v, then Hit@k can be written as:

(1/|V|) Σ_{v∈V} ∨_{e∈G_v} I(rank_{v,e} ≤ k), (11)

where ∨ is logical OR.

Precision at equal recall rate (PERR): We measure the video-level annotation precision when we retrieve the same number of entities per video as there are in the ground-truth. With the same notation as for Hit@k, PERR can be written as:

(1/|{v ∈ V : |G_v| > 0}|) Σ_{v∈V: |G_v|>0} [ (1/|G_v|) Σ_{e∈G_v} I(rank_{v,e} ≤ |G_v|) ].

# 5.2 Results on YouTube-8M
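Complementing the Section 5.1 definitions above, here is a minimal NumPy sketch of the Hit@k and PERR metrics; the function names, the score/label container formats, and the tie-breaking behaviour of argsort are illustrative assumptions rather than the released evaluation code.

```python
import numpy as np

def hit_at_k(scores, ground_truth, k=1):
    """Eq. (11): fraction of videos with at least one ground-truth entity
    among the top-k scored entities.

    scores       : (num_videos, num_entities) predicted scores
    ground_truth : list of sets of ground-truth entity indices, one per video
    """
    hits = 0
    for s, g_v in zip(scores, ground_truth):
        top_k = np.argsort(-s)[:k]
        hits += bool(set(top_k) & g_v)
    return hits / len(scores)

def perr(scores, ground_truth):
    """Precision at equal recall rate: for each labeled video retrieve as many
    entities as there are ground-truth labels and measure precision."""
    precisions = []
    for s, g_v in zip(scores, ground_truth):
        if not g_v:
            continue                         # skip videos with no ground truth
        top = np.argsort(-s)[:len(g_v)]
        precisions.append(len(set(top) & g_v) / len(g_v))
    return float(np.mean(precisions))
```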
1609.08675#52
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
52
Alex Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.

Mikael Henaff, Arthur Szlam, and Yann LeCun. Orthogonal RNNs and long-memory tasks. In ICML, 2016.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.
1609.09106#52
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
53
# 5.2 Results on YouTube-8M

Table 3 shows results for all approaches on the YouTube-8M dataset. Frame-level models (row 1), trained on the strong Inception features and logistic regression, followed by simple averaging of predictions across all frames, perform poorly on this dataset. This shows that the video-level prediction task cannot be reduced to simple frame-level classification.

Aggregating the frame-level features at the video-level using simple mean pooling of frame-level features, followed by a hinge loss or logistic regression model, provides a non-trivial improvement in video-level accuracies over naive averaging of the frame-level predictions. Further improvements are observed by using mixture-of-experts models and by adding other statistics, like the standard
1609.08675#53
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
53
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016b.

Marcus Hutter. The human knowledge compression contest. 2012. URL http://prize.hutter1.net/.

Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Decoupled Neural Interfaces using Synthetic Gradients. arXiv preprint arXiv:1608.05343, 2016.

Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. In ICLR, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Jan Koutnik, Faustino Gomez, and Jürgen Schmidhuber. Evolving neural networks in compressed weight space. In GECCO, 2010.
1609.09106#53
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
54
deviation and ordinal features, computed over the frame-level features. Note that the standard deviation and ordinal statistics are more meaningful in the original RELU activation space, so we reconstruct the RELU features from the PCA-ed and quantized features by inverting the quantization and the PCA using the provided PCA matrix, computing the collection statistics over the reconstructed frame-level RELU features, and then re-applying PCA, whitening, and L2 normalization as described in Section 4.2.2. This simple task-independent feature pooling and normalization strategy yields some of the most competitive results on this dataset.
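A minimal sketch of this pooling strategy is given below, assuming standard PCA-whitening conventions, frame features that have already been de-quantized, and a simple mean-of-top-5 definition of the ordinal statistic; all of these details, along with the function name and shapes, are illustrative assumptions rather than the paper's pipeline.

```python
import numpy as np

def video_level_features(frames_pca, pca_matrix, pca_mean, eigenvalues):
    """[mean; std; top-5 ordinal] pooling over one video's frame features.

    frames_pca : (num_frames, 1024) de-quantized, PCA/whitened frame features
    pca_matrix : (1024, 1024) provided PCA projection, pca_mean : (1024,)
    eigenvalues: (1024,) PCA eigenvalues used for whitening
    """
    # 1. invert PCA/whitening to get back (approximate) RELU activations
    relu = (frames_pca * np.sqrt(eigenvalues)) @ pca_matrix.T + pca_mean

    # 2. collection statistics in the original RELU space
    mu = relu.mean(axis=0)
    sigma = relu.std(axis=0)
    top5 = np.sort(relu, axis=0)[-5:].mean(axis=0)  # assumed top-5 ordinal statistic

    # 3. re-apply PCA, whitening and L2 normalization to each statistic
    def project(v):
        p = ((v - pca_mean) @ pca_matrix) / np.sqrt(eigenvalues)
        return p / (np.linalg.norm(p) + 1e-8)

    return np.concatenate([project(mu), project(sigma), project(top5)])
```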
1609.08675#54
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
54
Jan Koutnik, Faustino Gomez, and Jürgen Schmidhuber. Evolving neural networks in compressed weight space. In GECCO, 2010.

David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Handwritten digit recognition with a back-propagation network. In NIPS, 1990.

Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, volume 2, pp. 6, 2015.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014.

Marcus Liwicki and Horst Bunke. IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard. In ICDAR, 2005.
1609.09106#54
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
55
Finally, we also evaluate two deep network architectures that have produced state-of-art results on previous benchmarks [26]. The DBoF architecture ignores sequence information and treats the input video as a bag of frames, whereas LSTMs use state information to preserve the video sequence. The DBoF approach with a logistic classification layer produces 2% (absolute) gains in Hit@1 and PERR metrics over using simple mean feature pooling and a single-layer logistic model, which shows the benefits of discriminatively training a projection layer to obtain a task-specific video-level representation. The mAP results for DBoF are slightly worse than the mean pooling + logistic model, which we attribute to slower training and convergence of DBoF on rare classes (mAP is strongly affected by results on rare classes and the joint class training of DBoF is a disadvantage for those classes).
1609.08675#55
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
55
Marcus Liwicki and Horst Bunke. IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard. In ICDAR, 2005.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocky. Subword language modeling with neural networks. preprint, 2012.

Marcin Moczulski, Misha Denil, Jeremy Appleyard, and Nando de Freitas. ACDC: A Structured Efficient Linear Layer. arXiv preprint arXiv:1511.05946, 2015.

Saahil Ognawala and Justin Bayer. Regularizing recurrent networks - on injected noise and norm-based methods. arXiv preprint arXiv:1410.5684, 2014.

Kamil Rocki. Recurrent memory array structures. arXiv preprint arXiv:1607.03085, 2016a.

Kamil Rocki. Surprisal-driven feedback in recurrent networks. arXiv preprint arXiv:1608.06027, 2016b.
1609.09106#55
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
56
The LSTM network generally performs best, except for mAP, where the 1-vs-all binary MoE classifiers perform better, likely for the same reasons of slower convergence on rare classes. LSTM does improve on Hit@1 and PERR metrics, as expected given its ability to learn long-term correlations in the time domain. Also, in [26], the authors used data augmentation by sampling multiple snippets of fixed length from a video and averaged the results, which could produce even better accuracies than our current results.

We also considered Fisher vectors and VLAD given their recent success in aggregating CNN features at the video-level in [39]. However, for the same dimensionality as the video-level representations of the LSTM, DBoF and mean features, they did not produce competitive results.

# 5.2.1 Human Rated Test Set
1609.08675#56
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
56
Kamil Rocki. Surprisal-driven feedback in recurrent networks. arXiv preprint arXiv:1608.06027, 2016b.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139, 1992.

Jürgen Schmidhuber. A 'self-referential' weight matrix. In ICANN, 1993.

Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv:1603.05118, 2016.

Rupesh Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS, 2015.

Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185-212, 2009.
1609.09106#56
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
57
# 5.2.1 Human Rated Test Set

We also report results on the human rated test set of over 8000 videos (see Section 3.5) in Table 4 for the top three approaches. We report PERR, Hit@1, and Hit@5, since the mAP is not reliable given the size of the test set. The Hit@1 numbers are uniformly higher for all approaches when compared to the incomplete validation set in Table 3, whereas the PERR numbers are uniformly lower. This is largely attributable to the missing labels in the validation set (recall of the Validation set labels is around 15% compared to exhaustive human ratings). However, the relative ordering of the various approaches is fairly consistent between the two sets, showing that the validation set results are still reliable enough to compare different approaches.

# 5.3 Results on Sports-1M

Next, we investigate generalization of the video-level features learned using the YouTube-8M dataset and perform transfer learning experiments on the Sports-1M dataset. The Sports-1M dataset [19] consists of 487 sports activities with 1.2 million YouTube videos and is one of the largest benchmarks available for sports/activity recognition. We use the first 360 seconds of a video sampled at 1 frame per second for all experiments.
1609.08675#57
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
57
Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks. In ICML, 2011. Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. ArXiv e-prints, 2016. Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. NIPS, 2016.
1609.09106#57
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.09106
58
Jianlin Xia, Shivkumar Chandrasekaran, Ming Gu, and Xiaoye S. Li. Fast algorithms for hierarchically semiseparable matrices. Numerical Linear Algebra with Applications, 2010. Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang. Deep Fried Convnets. In ICCV, 2015. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016. Ke Zhang, Miao Sun, Tony X. Han, Xingfang Yuan, Liru Guo, and Tao Liu. Residual networks of residual networks: Multilevel residual networks. arXiv preprint arXiv:1608.02908, 2016. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016. URL http://arxiv.org/abs/1606.04199. Julian Zilly, Rupesh Srivastava, Jan Koutnik, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
1609.09106#58
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
59
Table 5a (Sports-1M):

| Approach | mAP | Hit@1 | Hit@5 |
| --- | --- | --- | --- |
| Logistic Regression (µ) (4.3) | 58.0 | 60.1 | 79.6 |
| Mixture-of-2-Experts (µ) (4.3) | 59.1 | 61.5 | 80.4 |
| Mixture-of-2-Experts ([µ; σ; Top5]) (4.2.1) | 61.3 | 63.2 | 82.6 |
| LSTM (4.1.3) | 66.7 | 64.9 | 85.6 |
| +Pretrained on YT-8M (4.1.3) | 67.6 | 65.7 | 86.2 |
| Hierarchical 3D Convolutions [19] | - | 61.0 | 80.0 |
| Stacked 3D Convolutions [35] | - | 61.0 | 85.0 |
| LSTM with Optical Flow and Pixels [26] | - | 73.0 | 91.0 |

Table 5b (ActivityNet), approaches: Mixture-of-2-Experts (µ) (4.3); +Pretrained PCA on YT-8M; Mixture-of-2-Experts ([µ; σ; Top5]) (4.2.1); +Pretrained PCA on YT-8M; LSTM (4.1.3); +Pretrained on YT-8M (4.1.3); Ma, Bargal et al. [24]; Heilbron
1609.08675#59
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
59
A APPENDIX A.1 HYPERNETWORKS TO LEARN FILTERS FOR A FULLY CONNECTED NETWORK Figure 8: Filters learned to classify MNIST digits in a fully connected network (left). Filters learned by a hypernetwork (right). We ran an experiment where the hypernetwork receives the x, y locations of both the input pixel and the weight, and predicts the value of the hidden weight matrix in a fully connected network that learns to classify MNIST digits. In this experiment, the fully connected network (784-256-10) has one hidden layer of 16 x 16 units, where the hypernetwork is a pre-defined small feedforward network. The weights of the hidden layer have 784 x 256 = 200704 parameters, while the hypernetwork is an 801-parameter four-layer feedforward relu network that generates the 784 x 256 weight matrix. The result of this experiment is shown in Figure 8. We want to emphasize that even though the network can learn convolutional-like filters during end-to-end training, its performance is rather poor: the best accuracy is 93.5%, compared to 98.5% for the conventional fully connected network.
1609.09106#59
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.09106
60
filters during end-to-end training, its performance is rather poor: the best accuracy is 93.5%, compared to 98.5% for the conventional fully connected network. We find that the virtual coordinates-based approach to hypernetworks that is used by HyperNEAT and DPPN has its limitations in many practical tasks, such as image recognition and language modelling, and we therefore developed our embedding vector approach in this work.
1609.09106#60
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
61
(a) Sports-1M: Our learned features are competitive on this dataset, beating all but the approach of [26], which learned directly from the video pixels. Both [26] and [35] included motion features. (b) ActivityNet: Since the dataset is small, we see a substantial boost in performance by pre-training on YouTube-8M or using the transfer-learnt PCA versus the one learnt from scratch on ActivityNet. Table 5: Results of transferring video representations learned on the YouTube-8M dataset to (a) Sports-1M and (b) ActivityNet. logistic models on top using target domain training data. For the LSTM networks, we have two scenarios: 1) we use the PCA-transformed features and learn an LSTM model from scratch using these features; or 2) we use the LSTM layers pre-trained on the YouTube-8M task, and fine-tune them on the Sports-1M dataset (along with a new softmax classifier).
1609.08675#61
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
61
A.2 CONCEPTUAL DIAGRAMS OF STATIC AND DYNAMIC HYPERNETWORKS
Figure 9: Feedforward Network (top) and Recurrent Network (bottom)
Figure 10: Static Hypernetwork generating weights for Feedforward Network
Figure 11: Dynamic Hypernetwork generating weights for Recurrent Network
1609.09106#61
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
62
Table 5a shows the evaluation metrics for the various video-level representations on the Sports-1M dataset. Our learned features are competitive on this dataset, with the best approach beating all but the approach of [26], which learned directly from the pixels of the videos in the Sports-1M dataset, including optical flow, and made use of data augmentation strategies and multiple inferences over several video segments. We also show that even on such a large dataset (1M videos), pre-training on YouTube-8M still helps, and improves the LSTM performance by ∼1% on all metrics (vs. no pre-training). # 5.4 Results on ActivityNet
1609.08675#62
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
62
Figure 11: Dynamic Hypernetwork generating weights for Recurrent Network
A.2.1 FILTER VISUALIZATIONS FOR RESIDUAL NETWORKS
In Figures 12 and 13 are example visualizations for various kernels in a deep residual network. Note that the 32x32x3x3 kernel generated by the hypernetwork was constructed by concatenating 4 basic kernels together. Figure 13: Generated 16x16x3x3 kernel (left). Generated 32x32x3x3 kernel (right).
# A.2.2 HYPERLSTM
In this section we will discuss the extension of HyperRNN to LSTM. Our focus will be on the basic version of the LSTM architecture of Hochreiter & Schmidhuber (1997), given by:

$$\begin{aligned}
i_t &= W_h^i h_{t-1} + W_x^i x_t + b^i \\
g_t &= W_h^g h_{t-1} + W_x^g x_t + b^g \\
f_t &= W_h^f h_{t-1} + W_x^f x_t + b^f \\
o_t &= W_h^o h_{t-1} + W_x^o x_t + b^o \\
c_t &= \sigma(f_t) \odot c_{t-1} + \sigma(i_t) \odot \phi(g_t) \\
h_t &= \sigma(o_t) \odot \phi(c_t)
\end{aligned} \qquad (9)$$

where $W_h^y \in \mathbb{R}^{N_h \times N_h}$, $W_x^y \in \mathbb{R}^{N_h \times N_x}$, $b^y \in \mathbb{R}^{N_h}$, $\sigma$ is the sigmoid operator, and $\phi$ is the tanh operator. For brevity, $y$ is one of $\{i, g, f, o\}$.¹
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
63
# 5.4 Results on ActivityNet Our final set of experiments demonstrates the generality of our learned features for the ActivityNet untrimmed video classification task. Similar to the Sports-1M experiments, we compare directly training on the ActivityNet dataset against pre-training on YouTube-8M for aggregation based and LSTM approaches. As seen in Table 5b, all of the transferred features are much better in terms of all metrics than training on ActivityNet alone. Notably, without the use of motion information, our best feature is better by up to 80% than the HOG, HOF, MBH, FC-6, FC-7 features used in [12]. This result shows that features learned on YouTube-8M generalize very well to other datasets/tasks. We believe this is because of the diversity and scale of the videos present in YouTube-8M. will prove to be a test bed for developing novel video representation learning algorithms, and especially approaches that deal effectively with noisy or incomplete labels. As a side effect, we also provide one of the largest and most diverse public visual annotation vocabularies (consisting of 4800 visual Knowledge Graph entities), constructed from popularity signals on YouTube as well as manual curation, and organized into 24 top-level categories.
1609.08675#63
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
63
Similar to the previous section, we will make the weights and biases a function of an embedding, and the embedding for each of $\{i, g, f, o\}$ will be generated from a smaller HyperLSTM cell. As discussed earlier, we will also experiment with adding the option to use a Layer Normalization layer in the HyperLSTM. The HyperLSTM Cell is given by:

$$\begin{aligned}
\hat{x}_t &= \begin{pmatrix} h_{t-1} \\ x_t \end{pmatrix} \\
\hat{i}_t &= LN(\hat{W}_h^i \hat{h}_{t-1} + \hat{W}_x^i \hat{x}_t + \hat{b}^i) \\
\hat{g}_t &= LN(\hat{W}_h^g \hat{h}_{t-1} + \hat{W}_x^g \hat{x}_t + \hat{b}^g) \\
\hat{f}_t &= LN(\hat{W}_h^f \hat{h}_{t-1} + \hat{W}_x^f \hat{x}_t + \hat{b}^f) \\
\hat{o}_t &= LN(\hat{W}_h^o \hat{h}_{t-1} + \hat{W}_x^o \hat{x}_t + \hat{b}^o) \\
\hat{c}_t &= \sigma(\hat{f}_t) \odot \hat{c}_{t-1} + \sigma(\hat{i}_t) \odot \phi(\hat{g}_t) \\
\hat{h}_t &= \sigma(\hat{o}_t) \odot \phi(LN(\hat{c}_t))
\end{aligned} \qquad (10)$$

The weight matrices for each of the four $\{i, g, f, o\}$ gates will be a function of a set of embeddings $z_x$, $z_h$, and $z_b$ unique to each gate, just like the HyperRNN. These embeddings are linear projections of the hidden states of the HyperLSTM Cell. For brevity, $y$ is one of $\{i, g, f, o\}$ to avoid writing four sets of identical equations:
1609.09106#63
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
64
We provide extensive experiments comparing several strong baselines for video representation learning, including Deep Networks and LSTMs, on this dataset. We demonstrate the efficacy of using a fairly unexplored class of models (mixture-of-experts) and show that they can outperform popular classifiers like logistic regression and SVMs. This is particularly true for our large dataset where many classes can be multi-modal. We explore various video-level representations using simple statistics extracted from the frame-level features and model the probability of an entity given the aggregated vector as an MoE. We show that this yields competitive performance compared to more complex approaches (that directly use frame-level information) such as LSTM and DBoF. This also demonstrates that if the underlying frame-level features are strong, the need for more sophisticated video-level modeling techniques is reduced. Finally, we illustrate the usefulness of the dataset by performing transfer learning experiments on existing video benchmarks—Sports-1M and ActivityNet. Our experiments show that features learned on this dataset generalize well on these benchmarks, including setting a new state-of-the-art on ActivityNet. # 6. CONCLUSIONS
1609.08675#64
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
64
$$\begin{aligned}
z_h^y &= W_{\hat{h}h}^y \hat{h}_{t-1} + b_{\hat{h}h}^y \\
z_x^y &= W_{\hat{h}x}^y \hat{h}_{t-1} + b_{\hat{h}x}^y \\
z_b^y &= W_{\hat{h}b}^y \hat{h}_{t-1}
\end{aligned} \qquad (11)$$

As in the memory efficient version of the HyperRNN, we will focus on the efficient version of the HyperLSTM, where we use weight scaling vectors $d$ to modify the rows of the weight matrices:

$$y_t = LN\big(d_h^y \odot W_h^y h_{t-1} + d_x^y \odot W_x^y x_t + b^y(z_b^y)\big), \ \text{where} \
\begin{aligned}
d_h^y(z_h) &= W_{hz}^y z_h^y \\
d_x^y(z_x) &= W_{xz}^y z_x^y \\
b^y(z_b) &= W_{bz}^y z_b^y + b_0^y
\end{aligned} \qquad (12)$$

In our implementation, the cell and hidden state update equations for the main LSTM will incorporate a single dropout (Hinton et al., 2012) gate, as developed in Recurrent Dropout without Memory Loss (Semeniuta et al., 2016), as we found this to help regularize the entire model during training:

$$\begin{aligned}
c_t &= \sigma(f_t) \odot c_{t-1} + \sigma(i_t) \odot \mathrm{DropOut}(\phi(g_t)) \\
h_t &= \sigma(o_t) \odot \phi(LN(c_t))
\end{aligned} \qquad (13)$$

¹In practice, all eight weight matrices are concatenated into one large matrix for computational efficiency.
1609.09106#64
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
65
# 6. CONCLUSIONS In this paper, we introduce YouTube-8M, a large-scale video benchmark for video classification and representation learning. With YouTube-8M, our goal is to advance the field of video understanding, similarly to what large-scale image datasets have done for image understanding. Specifically, we address the two main challenges with large-scale video understanding—(1) collecting a large labeled video dataset, with reasonable quality labels, and (2) removing computational barriers by pre-processing the dataset and providing state-of-the-art frame-level features to build from. We process over 50 years worth of video, and provide features for nearly 2 billion frames from more than 8 million videos, which enables training a reasonable model at this scale within 1 day, using an open source framework on a single machine! We expect this dataset to level the playing field for academic researchers, bridge the gap with large-scale labeled video datasets, and significantly accelerate research on video understanding. We hope this dataset
1609.08675#65
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]