doi (string, length 10) | chunk-id (int64, 0–936) | chunk (string, length 401–2.02k) | id (string, length 12–14) | title (string, length 8–162) | summary (string, length 228–1.92k) | source (string, length 31) | authors (string, length 7–6.97k) | categories (string, length 5–107) | comment (string, length 4–398, nullable ⌀) | journal_ref (string, length 8–194, nullable ⌀) | primary_category (string, length 5–17) | published (string, length 8) | updated (string, length 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1704.05179 | 21 | Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547 .
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.
3 http://pytorch.org/
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457. | 1704.05179#21 | SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine | We publicly release a new large-scale dataset, called SearchQA, for machine
comprehension, or question-answering. Unlike recently released datasets, such
as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to
reflect a full pipeline of general question-answering. That is, we start not
from an existing article and generate a question-answer pair, but start from an
existing question-answer pair, crawled from J! Archive, and augment it with
text snippets retrieved by Google. Following this approach, we built SearchQA,
which consists of more than 140k question-answer pairs with each pair having
49.6 snippets on average. Each question-answer-context tuple of the SearchQA
comes with additional meta-data such as the snippet's URL, which we believe
will be valuable resources for future research. We conduct human evaluation as
well as test two baseline methods, one simple word selection and the other deep
learning based, on the SearchQA. We show that there is a meaningful gap between
the human and machine performances. This suggests that the proposed dataset
could well serve as a benchmark for question-answering. | http://arxiv.org/pdf/1704.05179 | Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20170418 | 20170611 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1606.05250"
},
{
"id": "1611.09268"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
},
{
"id": "1506.02075"
},
{
"id": "1610.05256"
}
] |
1704.05194 | 21 | Figure 4: Model performance with different divisions.
Figure 4 shows the training and testing AUC with different division numbers m. We try m = 6, 12, 24, 36; the testing AUC for m = 12 is markedly better than for m = 6, while the further improvement for m = 24 and m = 36 is relatively modest. Thus, in all the following experiments, the parameter m is set to 12 for the LS-PLM model.
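For intuition, the prediction rule of a piece-wise linear model of this kind (a softmax gate over the m divisions combined with a per-division logistic fit; the exact parameterization is given in the paper's Section 2, which is not part of this chunk) can be sketched as follows. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def lsplm_predict(x, U, W):
    """Sketch of piece-wise linear CTR prediction with m divisions.

    U, W: (m, d) arrays of dividing and fitting parameters (assumed layout);
    x: (d,) feature vector.
    """
    z = U @ x
    gate = np.exp(z - z.max())
    gate /= gate.sum()                        # softmax weights over the m divisions
    fit = 1.0 / (1.0 + np.exp(-(W @ x)))      # per-division logistic response
    return float(gate @ fit)                  # p(click = 1 | x)

m, d = 12, 1000                               # m = 12, as chosen in the experiment above
rng = np.random.default_rng(0)
U, W, x = rng.normal(size=(m, d)), rng.normal(size=(m, d)), rng.normal(size=d)
print(lsplm_predict(x, U, W))
```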
# 4.2 Effectiveness of regularization
As stated in Section 2, in order to keep our model simpler and more generalized, we prefer to constrain the model parameters to be sparse with both the L1 and L2,1 norms. Here we evaluate the strength of both regularization terms.
Table 2 gives the results. As expected, both the L1 and L2,1 norms can push our model to be sparse. The model trained with the L2,1 norm alone has only 9.4% of non-zero parameters left, and 18.7% of features are kept. With the L1 norm alone, only 1.9% of non-zero parameters are left. Combining them together, we get the sparsest
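For reference, the objective whose β/λ trade-off Table 2 explores has the following form (Θ collects the 2m per-division parameter values row-wise per feature, matching the norms used in the appendix; this is a restatement under those assumptions, not a quotation of the paper's Section 2):

```latex
f(\Theta) \;=\; loss(\Theta) \;+\; \lambda\,\|\Theta\|_{2,1} \;+\; \beta\,\|\Theta\|_{1},
\qquad
\|\Theta\|_{2,1} = \sum_{i=1}^{d}\sqrt{\textstyle\sum_{j=1}^{2m}\Theta_{ij}^{2}},
\qquad
\|\Theta\|_{1} = \sum_{i,j}\lvert\Theta_{ij}\rvert .
```

The L2,1 norm zeroes out whole feature rows (feature selection), while the L1 norm zeroes out individual parameters, which is why the #features and #non-zero-parameter columns of Table 2 respond differently to the two terms.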
Table 2: Regularization effects on model sparsity and performance | 1704.05194#21 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 21 | Table 3: Key statistics for the corpus by genre. The first five genres represent the matched section of the development and test sets, and the remaining five represent the mismatched section. The first three statistics provide the number of examples in each genre. #Wds. Prem. is the mean token count among premise sentences. 'S' parses is the percentage of sentences for which the Stanford Parser produced a parse rooted with an 'S' (sentence) node. Agrmt. is the percent of individual labels that match the gold label in validated examples. Model Acc. gives the test accuracy for ESIM and CBOW models (trained on either SNLI or MultiNLI), as described in Section 3.
as parsed by the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003).
• sentence{1,2} binary parse: parses in unlabeled binary-branching format. | 1704.05426#21 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05179 | 22 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. | 1704.05179#22 | SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine | We publicly release a new large-scale dataset, called SearchQA, for machine
comprehension, or question-answering. Unlike recently released datasets, such
as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to
reflect a full pipeline of general question-answering. That is, we start not
from an existing article and generate a question-answer pair, but start from an
existing question-answer pair, crawled from J! Archive, and augment it with
text snippets retrieved by Google. Following this approach, we built SearchQA,
which consists of more than 140k question-answer pairs with each pair having
49.6 snippets on average. Each question-answer-context tuple of the SearchQA
comes with additional meta-data such as the snippet's URL, which we believe
will be valuable resources for future research. We conduct human evaluation as
well as test two baseline methods, one simple word selection and the other deep
learning based, on the SearchQA. We show that there is a meaningful gap between
the human and machine performances. This suggests that the proposed dataset
could well serve as a benchmark for question-answering. | http://arxiv.org/pdf/1704.05179 | Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20170418 | 20170611 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1606.05250"
},
{
"id": "1611.09268"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
},
{
"id": "1506.02075"
},
{
"id": "1610.05256"
}
] |
1704.05194 | 22 | Table 2: Regularization effects on model sparsity and performance

β/λ (L1/L2,1) | #features | #non-zero parameters | testing AUC
---|---|---|---
0/0 | 3.04 × 10^6 | 7.30 × 10^7 | 0.6489
0/1 | 5.68 × 10^5 | 6.64 × 10^6 | 0.6570
1/0 | 3.87 × 10^5 | 1.33 × 10^6 | 0.6617
1/1 | 2.55 × 10^5 | 1.15 × 10^6 | 0.6629
Table 3: Training cost comparison with/without common feature trick

Cost | Without CF. | With CF. | Cost saving
---|---|---|---
Memory cost/node | 89.2 GB | 31 GB | 65.2%
Time cost/iteration | 121 s | 10 s | 91.7%
result. Meanwhile, models trained with different norms reach different AUC performance; adding the two norms together, the model reaches the best AUC.
In this experiment, the hyper-parameter m is set to 12. The parameters β and λ are selected by grid search: {0.01, 0.1, 1, 10} is tried for both norms in all cases. The model with β = 1 and λ = 1 performs best.
# 4.3 Effectiveness of common feature trick | 1704.05194#22 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05194 | 23 | # 4.3 Effectiveness of common feature trick
We demonstrate the effectiveness of the common feature trick. Specifically, we set up the experiments with 100 workers, each of which uses 12 CPU cores, with up to 110 GB of memory in total. As shown in Table 3, compressing instances with the common feature trick does not affect the actual dimensionality of the feature space. However, in practice we can significantly reduce memory usage (to about 1/3) and speed up the calculation (around 12 times faster) compared to training without the common feature trick.
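The gist of the trick can be illustrated with a short sketch: instances that share a common feature block (for example, the user-side features of one user's page views) are grouped, the shared block is stored once, and its partial score is computed once per group. This is only an illustration of the idea under those assumptions, not the production implementation, which compresses instances inside each data worker.

```python
from collections import defaultdict

def group_by_common(samples):
    """samples: iterable of (common_feats, noncommon_feats, label); common_feats is a
    hashable tuple of feature indices shared by every instance in the same group."""
    groups = defaultdict(list)
    for common, noncommon, label in samples:
        groups[common].append((noncommon, label))
    return groups

def linear_scores(groups, weights):
    """Sparse linear scores, reusing the shared partial sum once per group."""
    scored = []
    for common, members in groups.items():
        shared = sum(weights.get(j, 0.0) for j in common)      # computed once per group
        for noncommon, label in members:
            score = shared + sum(weights.get(j, 0.0) for j in noncommon)
            scored.append((score, label))
    return scored
```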
Figure 5: Model performance comparison on 7 different test datasets. LS-PLM shows a consistent and marked improvement over LR.
# 4.4 Comparison with LR | 1704.05194#23 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 23 | • label[2...5]: The four labels assigned during validation by individual annotators to each development and test example. These fields will be empty for training examples.
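A minimal loader for records with the fields described in these bullets might look like the sketch below. The key names are assumptions based on the field names quoted above; the released files may use different keys (for example, a single list of annotator labels rather than label1–label5), so treat this as illustrative only.

```python
import json

def load_examples(path):
    """Read a JSON-lines file of premise/hypothesis pairs (hypothetical key names)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            ex = json.loads(line)
            yield {
                "premise": ex.get("sentence1"),
                "hypothesis": ex.get("sentence2"),
                "binary_parses": (ex.get("sentence1_binary_parse"),
                                  ex.get("sentence2_binary_parse")),
                "gold_label": ex.get("gold_label"),
                # label2..label5 are empty for training examples, per the bullet above
                "validation_labels": [ex.get(f"label{i}") for i in range(2, 6)],
                "genre": ex.get("genre"),
            }
```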
The corpus is freely available at nyu.edu/projects/bowman/multinli/ for typical machine learning uses, and may be modified and redistributed. The majority of the corpus is released under the OANC's license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere).
Partition The distributed corpus comes with an explicit train/test/development split. The test and development sets contain 2,000 randomly selected examples each from each of the genres, resulting in a total of 20,000 examples per set. No premise sentence occurs in more than one set. | 1704.05426#23 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 24 | Figure 5: Model performance comparison on 7 different test datasets. LS-PLM shows a consistent and marked improvement over LR.
# 4.4 Comparison with LR
We now compare LS-PLM with LR, the widely used CTR prediction model in production settings. The two models are trained using our distributed implementation, running on hundreds of machines for speed-up. The choice of the L1 and L2,1 parameters for LS-PLM and the L1 parameter for LR is based on grid search: β = 0.01, 0.1, 1, 10 and λ = 0.01, 0.1, 1, 10 are tried. The best parameters are β = 1 and λ = 1 for LS-PLM, and β = 1 for LR.
As shown in Figure 5, LS-PLM clearly outperforms LR. The average AUC improvement over LR is 1.44%, which has a significant impact on the overall online ad system performance. Moreover, the improvement is stable, which ensures that LS-PLM can be safely deployed in the daily online production system.
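AUC comparisons of this kind can be reproduced with standard tooling; a minimal sketch, assuming held-out click labels and the two models' predicted CTRs are available as arrays:

```python
from sklearn.metrics import roc_auc_score

def auc_lift(y_true, p_lr, p_lsplm):
    """Return (AUC of LR, AUC of LS-PLM, difference) on one test set."""
    auc_lr = roc_auc_score(y_true, p_lr)
    auc_lsplm = roc_auc_score(y_true, p_lsplm)
    return auc_lr, auc_lsplm, auc_lsplm - auc_lr
```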
# 5 Conclusions | 1704.05194#24 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 24 | Statistics Table 3 shows some additional statistics. Premise sentences in MultiNLI tend to be longer (max 401 words, mean 22.3 words) than their hypotheses (max 70 words), and much longer, on average, than premises in SNLI (mean 14.1 words); premises in MultiNLI also tend to be parsed as complete sentences at a much higher rate on average (91%) than their SNLI counterparts (74%). We observe that the two spoken genres differ in this respect, with FACE-TO-FACE showing more complete sentences (91%) than TELEPHONE (71%), and speculate that the lack of visual feedback in a telephone setting may result in a high incidence of interrupted or otherwise incomplete sentences.
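Statistics of this kind are straightforward to recompute; a rough sketch using whitespace tokenization (the paper's exact tokenizer and overlap definition may differ):

```python
def pair_stats(premise, hypothesis):
    p = premise.lower().split()
    h = hypothesis.lower().split()
    # share of distinct hypothesis tokens that also appear in the premise
    overlap = len(set(p) & set(h)) / max(len(set(h)), 1)
    return {"premise_len": len(p), "hypothesis_len": len(h), "token_overlap": overlap}

print(pair_stats("a boat sank in the Pacific Ocean", "a boat sank in the Atlantic Ocean"))
```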
Hypothesis sentences in MultiNLI generally cannot be derived from their premise sentences using only trivial editing strategies. While 2.5% of the hypotheses in SNLI differ from their premises by deletion, only 0.9% of those in MultiNLI (170 examples total) are constructed in this way. Similarly, in SNLI, 1.6% of hypotheses differ from their premises by addition, substitution, or shuffling a single word, while in MultiNLI this only happens in 1.2% of examples. The percentage of | 1704.05426#24 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 25 | # 5 Conclusions
In this paper, a piece-wise linear model, LS-PLM, is proposed for the CTR prediction problem. It can capture nonlinear patterns from sparse data and save us from heavy feature engineering jobs, which is crucial for real industry applications. Additionally, powered by our distributed and optimized implementation, our algorithm can handle problems with a billion samples and ten million parameters, which is the typical industrial data volume. Regularization terms of L1 and L2,1 are utilized to keep the model sparse. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
# Acknowledgments
We would like to thank Xingya Dai and Yanghui Yan for their help for this work.
# Appendix
# A Proof of Lemma 2.1
Proof. The definition of f'(Θ; d) is given as follows:

$$f'(\Theta; d) = \lim_{\alpha \downarrow 0} \frac{f(\Theta + \alpha d) - f(\Theta)}{\alpha} = \lim_{\alpha \downarrow 0} \frac{loss(\Theta + \alpha d) - loss(\Theta)}{\alpha} + \lambda \lim_{\alpha \downarrow 0} \frac{\|\Theta + \alpha d\|_{2,1} - \|\Theta\|_{2,1}}{\alpha} + \beta \lim_{\alpha \downarrow 0} \frac{\|\Theta + \alpha d\|_{1} - \|\Theta\|_{1}}{\alpha} \quad (14)$$
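Since f is the smooth loss plus the two non-smooth norms, the one-sided limit above can be checked numerically; a small sketch with a toy quadratic loss standing in for loss(Θ), for illustration only:

```python
import numpy as np

def f(theta, lam, beta):
    loss = 0.5 * np.sum(theta ** 2)          # toy smooth loss, stands in for loss(theta)
    return loss + lam * np.linalg.norm(theta, axis=1).sum() + beta * np.abs(theta).sum()

def directional_quotients(theta, d, lam, beta, alphas=(1e-1, 1e-3, 1e-5)):
    """One-sided difference quotients; they approach f'(theta; d) as alpha -> 0+."""
    f0 = f(theta, lam, beta)
    return [(f(theta + a * d, lam, beta) - f0) / a for a in alphas]

rng = np.random.default_rng(0)
theta = rng.normal(size=(5, 4)); theta[2] = 0.0   # an all-zero row hits the non-smooth case
d = rng.normal(size=(5, 4))
print(directional_quotients(theta, d, lam=1.0, beta=1.0))
```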
As the gradient of the loss function exists for any Θ, the directional derivative for the first part is | 1704.05194#25 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 25 | Train | Model | SNLI | MNLI Match. | MNLI Mis.
---|---|---|---|---
– | Most freq. | 34.3 | 36.5 | 35.6
SNLI | CBOW | 80.6 | – | –
SNLI | BiLSTM | 81.5 | – | –
SNLI | ESIM | 86.7 | – | –
MNLI | CBOW | 51.5 | 64.8 | 64.5
MNLI | BiLSTM | 50.8 | 66.9 | 66.9
MNLI | ESIM | 60.7 | 72.3 | 72.1
MNLI+SNLI | CBOW | 74.7 | 65.2 | 64.6
MNLI+SNLI | BiLSTM | 74.0 | 67.5 | 67.1
MNLI+SNLI | ESIM | 79.7 | 72.4 | 71.9
Table 4: Test set accuracies (%) for all models; Match. represents test set performance on the MultiNLI genres that are also represented in the training set, Mis. represents test set performance on the remaining ones; Most freq. is a trivial 'most frequent class' baseline.
hypothesis-premise pairs with high token overlap (>37%) was comparable between MultiNLI (30% of pairs) and SNLI (29%). These statistics suggest that MultiNLI's annotations are comparable in quality to those of SNLI.
# 3 Baselines | 1704.05426#25 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 26 | As the gradient of the loss function exists for any Θ, the directional derivative for the first part is

$$\lim_{\alpha \downarrow 0} \frac{loss(\Theta + \alpha d) - loss(\Theta)}{\alpha} = \nabla loss(\Theta)^{T} d \quad (15)$$

For the second part, we know that if ‖Θ_i·‖_{2,1} ≠ 0, the partial derivative of the L2,1 norm exists. So the directional derivative is

$$\lim_{\alpha \downarrow 0} \frac{\lambda\|\Theta_{i\cdot} + \alpha d_{i\cdot}\|_{2,1} - \lambda\|\Theta_{i\cdot}\|_{2,1}}{\alpha} = \lambda\,\frac{\Theta_{i\cdot}^{T} d_{i\cdot}}{\|\Theta_{i\cdot}\|_{2,1}} \quad (16)$$

However, when ‖Θ_i·‖_{2,1} = 0, it means Θ_ij = 0 for 1 ≤ j ≤ 2m. Then its directional derivative can be written as follows:

$$\lim_{\alpha \downarrow 0} \frac{\lambda\|\Theta_{i\cdot} + \alpha d_{i\cdot}\|_{2,1} - \lambda\|\Theta_{i\cdot}\|_{2,1}}{\alpha} = \lim_{\alpha \downarrow 0} \frac{\lambda\|\alpha d_{i\cdot}\|_{2,1}}{\alpha} = \lambda\,\|d_{i\cdot}\|_{2,1} \quad (17)$$

So combining the cases in Eq. (16) and Eq. (17), we get the directional derivative for the second part:

$$\lim_{\alpha \downarrow 0} \frac{\lambda\|\Theta + \alpha d\|_{2,1} - \lambda\|\Theta\|_{2,1}}{\alpha} = \sum_{\|\Theta_{i\cdot}\|_{2,1} \neq 0} \lambda\,\frac{\Theta_{i\cdot}^{T} d_{i\cdot}}{\|\Theta_{i\cdot}\|_{2,1}} + \sum_{\|\Theta_{i\cdot}\|_{2,1} = 0} \lambda\,\|d_{i\cdot}\|_{2,1} \quad (18)$$

Similarly to the second part, the directional derivative for the third part is: | 1704.05194#26 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 26 | # 3 Baselines
To test the difficulty of the corpus, we experiment with three neural network models. The first is a simple continuous bag of words (CBOW) model in which each sentence is represented as the sum of the embedding representations of its words. The second computes representations by averaging the states of a bidirectional LSTM RNN (BiLSTM; Hochreiter and Schmidhuber, 1997) over words. For the third, we implement and evaluate Chen et al.'s Enhanced Sequential Inference Model (ESIM), which is roughly tied for the state of the art on SNLI at the time of writing. We use the base ESIM without ensembling with a TreeLSTM (as in the 'HIM' runs in that work).
The first two models produce separate vector representations for each sentence and compute label predictions for pairs of representations. To do this, they concatenate the representations for premise and hypothesis, their difference, and their element-wise product, following Mou et al. (2016b), and pass the result to a single tanh layer followed by a three-way softmax classifier. | 1704.05426#26 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 27 | Similarly to the second part, the directional derivative for the third part is:

$$\lim_{\alpha \downarrow 0} \frac{\beta\|\Theta + \alpha d\|_{1} - \beta\|\Theta\|_{1}}{\alpha} = \sum_{\Theta_{ij} \neq 0} \beta\,\mathrm{sign}(\Theta_{ij})\,d_{ij} + \sum_{\Theta_{ij} = 0} \beta\,|d_{ij}| \quad (19)$$

Based on Eq. (15), Eq. (18) and Eq. (19), we get that for any Θ and direction d, f'(Θ; d) exists.

# B Proof of Proposition 2.2

Proof. Finding the expected direction turns into an optimization problem, which is formulated as follows:

$$\min_{d}\; f'(\Theta; d) \quad \text{s.t.}\; \|d\|^{2} \leq C. \quad (20)$$

Here the direction d is bounded by a constant scalar C. To solve this problem, we employ the Lagrange function to combine the objective function and the inequality constraint:

$$L(d, \mu) = f'(\Theta; d) + \mu\,(\|d\|^{2} - C) \quad (21)$$

Here µ ≥ 0 is the Lagrange multiplier. Setting the partial derivative of L(d, µ) with respect to d to zero leads to three cases.

Define s = −∇loss(Θ)_ij − λ Θ_ij / ‖Θ_i·‖_{2,1}. (a) When Θ_ij ≠ 0, it implies that 2µd_ij = s − β sign(Θ_ij). | 1704.05194#27 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 27 | All models are initialized with 300D reference GloVe vectors (840B token version; Pennington et al., 2014). Out-of-vocabulary (OOV) words are initialized randomly and word embeddings are fine-tuned during training. The models use 300D
hidden states, as in most prior work on SNLI. We use Dropout (Srivastava et al., 2014) for regularization. For ESIM, we use a dropout rate of 0.5, following the paper. For CBOW and BiLSTM models, we tune Dropout on the SNLI dev. set and find that a drop rate of 0.1 works well. We use the Adam (Kingma and Ba, 2015) optimizer with default parameters.
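As a concrete illustration of the bag-of-words baseline and the feature combination described in the previous chunk (concatenating the two sentence vectors, their difference, and their element-wise product), here is a sketch in PyTorch. The 300D size and the 0.1 dropout rate follow the text; every other detail (vocabulary handling, loading GloVe weights, padding) is an assumption of this sketch rather than the authors' code.

```python
import torch
import torch.nn as nn

class CBOWPairClassifier(nn.Module):
    def __init__(self, vocab_size, dim=300, num_classes=3, dropout=0.1):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)   # would be initialized from GloVe, then fine-tuned
        self.drop = nn.Dropout(dropout)
        self.hidden = nn.Linear(4 * dim, dim)
        self.out = nn.Linear(dim, num_classes)     # 3-way softmax over entailment/neutral/contradiction

    def encode(self, ids):                         # ids: (batch, seq_len) of token indices
        return self.emb(ids).sum(dim=1)            # CBOW: sum of word embeddings

    def forward(self, premise_ids, hypothesis_ids):
        u, v = self.encode(premise_ids), self.encode(hypothesis_ids)
        feats = torch.cat([u, v, u - v, u * v], dim=-1)
        return self.out(torch.tanh(self.hidden(self.drop(feats))))

# model = CBOWPairClassifier(vocab_size=50_000)
# optimizer = torch.optim.Adam(model.parameters())   # Adam with default parameters, as stated above
```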
We train models on SNLI, MultiNLI, and a mixture; Table 4 shows the results. In the mixed setting, we use the full MultiNLI training set and randomly select 15% of the SNLI training set at each epoch, ensuring that each available genre is seen during training with roughly equal frequency. | 1704.05426#27 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 28 | Define s = −∇loss(Θ)_ij − λ Θ_ij / ‖Θ_i·‖_{2,1}. (a) When Θ_ij ≠ 0, it implies that 2µd_ij = s − β sign(Θ_ij).

(b) When Θ_ij = 0 and ‖Θ_i·‖_{2,1} > 0, it is easy to obtain

$$2\mu d_{ij} = \max\{|s| - \beta,\, 0\}\,\mathrm{sign}(s)$$

(c) We give more details when Θ_ij = 0 and ‖Θ_i·‖_{2,1} = 0. For d_i· we have

$$\frac{\partial L(d, \mu)}{\partial d_{i\cdot}} = \nabla loss(\Theta)_{i\cdot} + \beta\,\mathrm{sign}(d_{i\cdot}) + \lambda\,\frac{d_{i\cdot}}{\|d_{i\cdot}\|_{2,1}} + 2\mu d_{i\cdot} = 0.$$

Here we use sign(d_i·) = [sign(d_i1), . . . , sign(d_i2m)]^T for simplicity. Then we get

$$\Big(2\mu + \frac{\lambda}{\|d_{i\cdot}\|_{2,1}}\Big)\, d_{i\cdot} = -\nabla loss(\Theta)_{i\cdot} - \beta\,\mathrm{sign}(d_{i\cdot})$$ | 1704.05194#28 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 28 | We also train a separate CBOW model on each individual genre to establish the degree to which simple models already allow for effective transfer across genres, using a dropout rate of 0.2. When training on SNLI, a single random sample of 15% of the original training set is used. For each genre represented in the training set, the model that performs best on it was trained on that genre; a model trained only on SNLI performs worse on every genre than comparable models trained on any genre from MultiNLI. | 1704.05426#28 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 29 | $$\Big(2\mu + \frac{\lambda}{\|d_{i\cdot}\|_{2,1}}\Big)\, d_{i\cdot} = -\nabla loss(\Theta)_{i\cdot} - \beta\,\mathrm{sign}(d_{i\cdot})$$

which implies that sign(d_i·) = sign(−∇loss(Θ)_i· − β sign(d_i·)). When d_ij ≥ 0, it implies −∇loss(Θ)_ij − β sign(d_ij) ≥ 0; inversely, we have −∇loss(Θ)_ij − β sign(d_ij) ≤ 0 when d_ij ≤ 0. So we define v ≐ −∇loss(Θ)_i· − β sign(d_i·), with v_j = max{|−∇loss(Θ)_ij| − β, 0} sign(−∇loss(Θ)_ij). So

$$\Big(2\mu + \frac{\lambda}{\|d_{i\cdot}\|}\Big)\, d_{i\cdot} = v \;\Rightarrow\; (2\mu\|d_{i\cdot}\| + \lambda)\,\frac{d_{i\cdot}}{\|d_{i\cdot}\|} = v \;\Rightarrow\; 2\mu\|d_{i\cdot}\| + \lambda = \|v\|$$ | 1704.05194#29 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 29 | Models trained on a single genre from MultiNLI perform well on similar genres; for example, the model trained on TELEPHONE attains the best accuracy (63%) on FACE-TO-FACE, which was nearly one point better than it received on itself. SLATE seems to be a difficult and relatively unusual genre and performance on it is relatively poor in this setting; when averaging over runs trained on SNLI and all genres in the matched section of the training set, average performance on SLATE was only 57.5%. Sentences in SLATE cover a wide range of topics and phenomena, making it hard to do well on, but also forcing models trained on it to be broadly capable; the model trained on SLATE achieves the highest accuracy of any model on 9/11 (55.6%) and VERBATIM (57.2%), and relatively high accuracy on TRAVEL (57.4%) and GOVERNMENT (58.3%). We also observe that our models perform similarly on both the matched and mismatched test sets of MultiNLI. We expect genre mismatch issues to become more conspicuous as models are developed that can better fit MultiNLI's training genres.
# 4 Discussion and Analysis
# 4.1 Data Collection | 1704.05426#29 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 30 | As ‖d_i·‖ ≥ 0, we have 2µ‖d_i·‖ = max(‖v‖ − λ, 0). Thus 2µd_ij = max(‖v‖ − λ, 0) v_j / ‖v‖. The Lagrange multiplier µ is a scalar and has an equivalent influence on all d_ij. We can see that the optimal direction bounded by C is the same direction as we defined earlier, without considering the constant scalar µ. Here we finish our proof.
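Putting the three cases of the proof together, the direction they describe (up to the positive constant 2µ) can be written down directly. The following numpy sketch is an illustration of the result of the proof, not the paper's production solver; shapes and names are assumptions.

```python
import numpy as np

def expected_direction(grad, theta, lam, beta):
    """grad: gradient of the smooth loss at theta; theta, grad: arrays of shape (d, 2m)."""
    d = np.zeros_like(theta)
    row_norm = np.linalg.norm(theta, axis=1)
    for i in range(theta.shape[0]):
        if row_norm[i] > 0:
            s = -grad[i] - lam * theta[i] / row_norm[i]
            nz = theta[i] != 0
            d[i, nz] = s[nz] - beta * np.sign(theta[i, nz])                      # case (a)
            d[i, ~nz] = np.maximum(np.abs(s[~nz]) - beta, 0) * np.sign(s[~nz])   # case (b)
        else:                                                                    # case (c)
            v = np.maximum(np.abs(grad[i]) - beta, 0) * np.sign(-grad[i])
            nv = np.linalg.norm(v)
            if nv > 0:
                d[i] = max(nv - lam, 0) * v / nv
    return d
```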
# References
[1] Andrew G. and Gao J. (2007) Scalable Training of L1-Regularized Log-Linear Models. Proceedings of the 24-th International Conference on Machine Learning.
[2] Bertsekas, D. (2003) Nonlinear Programming. Springer US, 51–88.
[3] Brendan H., Holt G., Sculley D., Young M., Ebner D., Grady J., Nie L., Phillips. T, Davydov E., Golovin D., Chikkerur S., Liu D., Wattenberg M., Hrafnkelsson A., Boulos T., Kubica J. (2013) Ad Click Prediction: a View from the Trenches. Proceedings of the 19-th KDD. | 1704.05194#30 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 30 | # 4 Discussion and Analysis
# 4.1 Data Collection
In data collection for NLI, different annotator decisions about the coreference between entities and events across the two sentences in a pair can lead to very different assignments of pairs to labels (de Marneffe et al., 2008; Marelli et al., 2014a; Bowman et al., 2015). Drawing an example from Bowman et al., the pair "a boat sank in the Pacific Ocean" and "a boat sank in the Atlantic Ocean" can be labeled either CONTRADICTION or NEUTRAL depending on (among other things) whether the two mentions of boats are assumed to refer to the same entity in the world. This uncertainty can present a serious problem for inter-annotator agreement, since it is not clear that it is possible to define an explicit set of rules around coreference that would be easily intelligible to an untrained annotator (or any non-expert). | 1704.05426#30 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 31 | [4] Fawcett T. (2006) An introduction to ROC analysis. Pattern Recognition Letters, 27, 861–874.
[5] Friedman J. (1999) Greedy Function Approximation: A Gradient Boosting Machine. Technical Report, Dept. of Statistics, Stanford University.
[6] Hilbe M. (2009) Logistic regression models. CRC Press.
[7] He X., Pan J., Jin O., Xu T., Liu B., Xu T, Shi Y., Atallah A, Herbrich R., Bowers S., Candela J. (2014) Practical Lessons from Predicting Clicks on Ads at Facebook. Proceedings of the 20-th KDD.
[8] Jordan I., Jacobs A (1994) Hierarchical mixtures of experts and the EM algorithm. Neural computation, 6(2): 181-214.
[9] Kivinen J., Warmuth M K. (1998) Relative Loss Bounds for Multidimensional Regression Problems. Machine Learning, 45(3):301-329.
[10] Rendle S. (2010) Factorization Machines. Proceedings of the 10th IEEE International Conference on Data Mining.
[11] Roth S, Black M J. (2009) Fields of experts. International Journal of Computer Vision, 82(2): 205â229. | 1704.05194#31 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 31 | Bowman et al. attempt to avoid this problem by using an annotation prompt that is highly dependent on the concreteness of image descriptions; but, as we engage with the much more abstract writing that is found in, for example, government documents, there is no reason to assume a priori that any similar prompt and annotation strategy can work. We are surprised to find that this is not a major issue. Through a relatively straightforward trial-and-error piloting phase, followed by discussion with our annotators, we manage to design prompts for abstract genres that yield high inter-annotator agreement scores nearly identical to those of SNLI (see Table 2). These high scores suggest that our annotators agreed on a single task definition, and were able to apply it consistently across genres.
# 4.2 Overall Difficulty | 1704.05426#31 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05194 | 32 | [11] Roth S, Black M J. (2009) Fields of experts. International Journal of Computer Vision, 82(2): 205–229.
[12] Safavian S. R., Landgrebe D. (1990) A survey of decision tree classifier methodology[J].
[13] Wang P.-M and Puterman M. (1998) Mixed Logistic Regression Models. Journal of Agricultural, Biological, and Environmental Statistics, 3(2), 175–200.
[14] Zhang T. (2004) Solving large scale linear prediction problems using stochastic gradient descent algorithms. Proceedings of the twenty-first international conference on Machine learning. ACM, 116.
[15] Gai K. http://club.alibabatech.org/resource_detail.htm?topicId=106
12 | 1704.05194#32 | Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction | CTR prediction in real-world business is a difficult machine learning problem
with large scale nonlinear sparse data. In this paper, we introduce an
industrial strength solution with model named Large Scale Piece-wise Linear
Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$
regularizers, leading to a non-convex and non-smooth optimization problem.
Then, we propose a novel algorithm to solve it efficiently, based on
directional derivatives and quasi-Newton method. In addition, we design a
distributed system which can run on hundreds of machines parallel and provides
us with the industrial scalability. LS-PLM model can capture nonlinear patterns
from massive sparse data, saving us from heavy feature engineering jobs. Since
2012, LS-PLM has become the main CTR prediction model in Alibaba's online
display advertising system, serving hundreds of millions users every day. | http://arxiv.org/pdf/1704.05194 | Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang | stat.ML, cs.LG | null | null | stat.ML | 20170418 | 20170418 | [] |
1704.05426 | 32 | # 4.2 Overall Difficulty
As expected, both the increase in the diversity of linguistic phenomena in MultiNLI and its longer average sentence length conspire to make MultiNLI dramatically more difficult than SNLI. Our three baseline models perform better on SNLI than MultiNLI by about 15% when trained on the respective datasets. All three models achieve accuracy above 80% on the SNLI test set when trained only on SNLI. However, when trained on MultiNLI, only ESIM surpasses 70% accuracy on MultiNLI's test sets. When we train models on MultiNLI and downsampled SNLI, we see an expected significant improvement on SNLI,
but no significant change in performance on the MultiNLI test sets, suggesting that including SNLI in training doesn't drive substantial improvement. These results attest to MultiNLI's difficulty and, with its relatively high inter-annotator agreement, suggest that it presents a problem with substantial headroom for future work.
# 4.3 Analysis by Linguistic Phenomenon | 1704.05426#32 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 33 | # 4.3 Analysis by Linguistic Phenomenon
To better understand the types of language understanding skills that MultiNLI tests, we analyze the collected corpus using a set of annotation tags chosen to reflect linguistic phenomena which are known to be potentially difficult. We use two methods to assign tags to sentences. First, we use the Penn Treebank (PTB; Marcus et al., 1993) part-of-speech tag set (via the included Stanford Parser parses) to automatically isolate sentences containing a range of easily-identified phenomena like comparatives. Second, we isolate sentences that contain hand-chosen key words indicative of additional interesting phenomena. | 1704.05426#33 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 34 | The hand-chosen tag set covers the following phenomena: QUANTIFIERS contains single words with quantificational force (see, for example, Heim and Kratzer, 1998; Szabolcsi, 2010, e.g., many, all, few, some); BELIEF VERBS contains sentence-embedding verbs denoting mental states (e.g., know, believe, think), including irregular past tense forms; TIME TERMS contains single words with abstract temporal interpretation (e.g., then, today) and month names and days of the week; DISCOURSE MARKERS contains words that facilitate discourse coherence (e.g., yet, however, but, thus, despite); PRESUPPOSITION TRIGGERS contains words with lexical presuppositions (Stalnaker, 1974; Schlenker, 2016, e.g., again, too, anymore³); CONDITIONALS contains the word if. Table 5 presents the frequency of the tags in SNLI and MultiNLI, and model accuracy on MultiNLI (trained only on MultiNLI). | 1704.05426#34 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 35 | The distributions of labels within each tagged subset of the corpus roughly mirror the balanced overall distribution. The most frequent class overall (in this case, ENTAILMENT) occurs with a frequency of roughly one third (see Table 4) in most of them. Only two annotation tags differ from the baseline percentage of the most frequent class in the corpus by at least 5%: sentences containing negation,
3 Because of their high frequency in the corpus, extremely common triggers like "the" were excluded from this tag. | 1704.05426#35 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 36 | 3 Because of their high frequency in the corpus, extremely common triggers like the were excluded from this tag.
Tag                   Dev. Freq. SNLI   Dev. Freq. MultiNLI   Diff.   Most Frequent Label   Label %   CBOW   BiLSTM   ESIM
Entire Corpus         100               100                   0       entailment            ~35       ~65    ~67      ~72
Pronouns (PTB)        34                68                    34      entailment            34        66     68       73
Quantifiers           33                63                    30      contradiction         36        66     68       73
Modals (PTB)          <1                28                    28      entailment            35        65     67       72
Negation (PTB)        5                 31                    26      contradiction         48        67     70       75
WH terms (PTB)        5                 30                    25      entailment            35        64     65       72
Belief Verbs          <1                19                    18      entailment            34        64     67       71
Time Terms            19                36                    17      neutral               35        64     66       71
Discourse Mark.       <1                14                    14      neutral               34        62     64       70
Presup. Triggers      8                 22                    14      neutral               34        65     67       73
Compr./Supr. (PTB)    3                 17                    14      neutral               39        61     63       69
Conditionals          4                 15                    11      neutral               35        65     68       73
Tense Match (PTB)     62                69                    7       entailment            37        67     68       73
Interjections (PTB)   <1                5                     5       entailment            36        67     70       75
>20 words             <1                5                     5       entailment            42        65     67       76 | 1704.05426#36 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 37 | Table 5: Dev. Freq. is the percentage of dev. set examples that include each phenomenon, ordered by greatest difference in frequency of occurrence (Diff.) between MultiNLI and SNLI. Most Frequent Label specifies which label is the most frequent for each tag in the MultiNLI dev. set, and % is its incidence. Model Acc. is the dev. set accuracy (%) by annotation tag for each baseline model (trained on MultiNLI only). (PTB) marks a tag as derived from Penn Treebank-style parser output tags (Marcus et al., 1993).
and sentences exceeding 20 words. Sentences that contain negation are slightly more likely than average to be labeled CONTRADICTION, reflecting a similar finding in SNLI, while long sentences are slightly more likely to be labeled ENTAILMENT. | 1704.05426#37 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 38 | None of the baseline models perform substantially better on any tagged set than they do on the corpus overall, with average model accuracies on sentences containing specific tags falling within about 3 points of overall averages. Using baseline model test accuracy overall as a metric (see Table 4), our baseline models had the most trouble on sentences containing comparatives or superlatives (losing 3-4 points each). Despite the fact that 17% of sentence pairs in the corpus contained at least one instance of a comparative or superlative, our baseline models don't utilize the information present in these sentences to predict the correct label for the pair, although the presence of a comparative or superlative is slightly more predictive of a NEUTRAL label.
Moreover, the baseline models perform below average on discourse markers, such as despite and however, losing roughly 2 to 3 points each. Unsurprisingly, the attention-based ESIM model performs better than the other two on sentences with greater than 20 words. Additionally, our baseline models do show slight improvements in accuracy on negation, suggesting that they may be tracking it as a predictor of CONTRADICTION.
# 5 Conclusion | 1704.05426#38 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 39 | # 5 Conclusion
Natural language inference makes it easy to judge the degree to which neural network models for sentence understanding capture the full meanings of natural language sentences. Existing NLI datasets like SNLI have facilitated substantial advances in modeling, but have limited headroom and coverage of the full diversity of meanings expressed in English. This paper presents a new dataset that offers dramatically greater linguistic difficulty and diversity, and also serves as a benchmark for cross-genre domain adaptation. | 1704.05426#39 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 40 | MultiNLI improves upon SNLI in its empirical coverage, because it includes a representative sample of text and speech from ten different genres, as opposed to just simple image captions, and in its difficulty, containing a much higher percentage of sentences tagged with one or more elements from our tag set of thirteen difficult linguistic phenomena. This greater diversity is reflected in the dramatically lower baseline model performance on MultiNLI than on SNLI (see Table 5) and comparable inter-annotator agreement, suggesting that MultiNLI has a lot of headroom remaining for future work. The MultiNLI corpus was first released in draft form in the first half of 2017, and in the time since its initial release, work by others (Conneau et al., 2017) has shown that NLI can also be an effective source task for pre-training and transfer learning in the context of sentence-to-vector models, with
models trained on SNLI and MultiNLI substantially outperforming all prior models on a suite of established transfer learning benchmarks. We hope that this corpus will continue to serve for many years as a resource for the development and evaluation of methods for sentence understanding.
# Acknowledgments | 1704.05426#40 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 41 | # Acknowledgments
This work was made possible by a Google Faculty Research Award to SB and AL. SB also gratefully acknowledges gift support from Tencent Holdings. We also thank George Dahl, the organizers of the RepEval 2016 and RepEval 2017 workshops, An- drew Drozdov, Angeliki Lazaridou, and our other colleagues at NYU for their help and advice.
# References
Recognising textual inference. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 628-635. http://www.aclweb.org/anthology/H05-1079.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 632-642. https://doi.org/10.18653/v1/D15-1075. | 1704.05426#41 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 42 | Samuel R. Bowman, Jon Gauthier, Abhinav Ras- togi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast uniï¬ed model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1466â1477. https://doi.org/10.18653/v1/P16-1139.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1657-1668. https://doi.org/10.18653/v1/P17-1152.
Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Rein- hard Stolle, and Daniel G. Bobrow. 2003. Entail- ment, intensionality and text understanding. In Pro- ceedings of the Human Language Technology-North American Association for Computational Linguis- tics 2003 Workshop on Text Meaning. | 1704.05426#42 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 43 | Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from arXiv preprint natural language inference data. arXiv:1705.02364 .
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. Evalu- ating predictive uncertainty, visual object classiï¬ca- tion, and recognising textual entailment, Springer, pages 177â190.
Marie-Catherine de Marneffe, Anna N. Rafferty, and Christopher D. Manning. 2008. Finding contradictions in text. In Proceedings of ACL-08: Human Language Technology. Association for Computational Linguistics, pages 1039-1047. http://www.aclweb.org/anthology/P08-1118.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoff- man, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. DeCAF: A deep convolutional activation fea- ture for generic visual recognition. In Proceedings of the International Conference on Machine Learn- ing (ICML). | 1704.05426#43 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 44 | Charles Fillmore, Nancy Ide, Daniel Jurafsky, and Catherine Macleod. 1998. An American National Corpus: A proposal. In Proceedings of the First An- nual Conference on Language Resources and Eval- uation. pages 965â969.
Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. 2000. A natural logic inference system. In Proceed- ings of the 2nd Workshop on Inference in Computa- tional Semantics.
Irene Heim and Angelika Kratzer. 1998. Semantics in generative grammar. Blackwell Publishers.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.
Nancy Ide and Catherine Macleod. 2001. The Amer- ican National Corpus: A standardized resource of American English. In Proceedings of Corpus Lin- guistics. Lancaster University Centre for Computer Corpus Research on Language, volume 3, pages 1â 7.
Nancy Ide and Keith Suderman. 2006. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC). European Language Resources Association (ELRA).
Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Repre- sentations (ICLR). | 1704.05426#44 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 45 | Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Repre- sentations (ICLR).
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proc. ACL. https://doi.org/10.3115/1075096.1075150.
Nikita Nangia, Adina Williams, Angeliki Lazaridou, and Samuel R. Bowman. 2017. The RepEval 2017 shared task: Multi-genre natural language inference with sentence representations. In Proceedings of RepEval 2017: The Second Workshop on Evaluating Vector Space Representations for NLP.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classiï¬cation with deep convo- lutional neural networks. In Advances in Neural In- formation Processing Systems 25, pages 1097â1105. | 1704.05426#45 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 46 | Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2249â2255. https://doi.org/10.18653/v1/D16-1244.
Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceedings of the Eighth International Conference on Computational Semantics, pages 140-156.
Catherine Macleod, Nancy Ide, and Ralph Grishman. 2000. The American National Corpus: A standard- ized resource for American English. In Conference on Language Resources and Evaluation (LREC).
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn treebank. Computa- tional linguistics 19(2):313â330. | 1704.05426#46 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 47 | Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global vectors In Proceedings of the for word representation. 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP). Association for Computational Linguistics, pages 1532â1543. https://doi.org/10.3115/v1/D14-1162.
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014a. Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Associ- ation for Computational Linguistics, pages 1â8. https://doi.org/10.3115/v1/S14-2001.
Philippe Schlenker. 2016. The Cambridge Handbook of Formal Semantics, chapter The Semantics/Pragmatics Interface, pages 664-727. Cambridge University Press. https://doi.org/10.1017/CBO9781139236157.023. | 1704.05426#47 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 48 | Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Re- search (JMLR) 15:1929â1958.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zam- parelli. 2014b. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC).
Robert Stalnaker. 1974. Semantics and Philosophy, New York, NY: New York University Press, chap- ter Pragmatic Presupposition, pages 329â355.
Anna Szabolcsi. 2010. Quantiï¬cation. Cambridge University Press. | 1704.05426#48 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 49 | Anna Szabolcsi. 2010. Quantiï¬cation. Cambridge University Press.
Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016a. How transferable are neural networks in NLP applications? In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Associ- ation for Computational Linguistics, pages 479â489. https://doi.org/10.18653/v1/D16-1046.
Learning natural language inference with LSTM. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1442-1451. https://doi.org/10.18653/v1/N16-1170.
Lili Mou, Men Rui, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016b. Natural language inference by tree-based convolution and heuristic the 54th Annual In Proceedings of matching. Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Associa- tion for Computational Linguistics, pages 130â136. https://doi.org/10.18653/v1/P16-2022. | 1704.05426#49 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.05426 | 50 | Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics 2:67-78.
Tsendsuren Munkhdalai and Hong Yu. 2017. Neu- In Proceedings of the ral semantic encoders. the European Chapter of 15th Conference of for Computational Linguis- the Association tics: Volume 1, Long Papers. Association for Computational 397â407. http://www.aclweb.org/anthology/E17-1038.
Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Pro- ceedings of the European Conference on Computer Vision (ECCV). pages 818â833.
This figure "sentence_dist.png" is available in "png" format from:
http://arxiv.org/ps/1704.05426v4 | 1704.05426#50 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | [
{
"id": "1705.02364"
}
] |
1704.04861 | 0 | arXiv:1704.04861v1 [cs.CV] 17 Apr 2017
# MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard Menglong Zhu Bo Chen Dmitry Kalenichenko Weijun Wang Tobias Weyand Marco Andreetto Hartwig Adam
# Google Inc. {howarda,menglong,bochen,dkalenichenko,weijunw,weyand,anm,hadam}@google.com
# Abstract
We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depthwise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization. | 1704.04861#0 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.04861 | 1 | models. Section 3 describes the MobileNet architecture and two hyper-parameters, the width multiplier and the resolution multiplier, that define smaller and more efficient MobileNets. Section 4 describes experiments on ImageNet as well as a variety of different applications and use cases. Section 5 closes with a summary and conclusion.
# 2. Prior Work
There has been rising interest in building small and efficient neural networks in the recent literature, e.g. [16, 34, 12, 36, 22]. Many different approaches can be generally categorized into either compressing pretrained networks or training small networks directly. This paper proposes a class of network architectures that allows a model developer to specifically choose a small network that matches the resource restrictions (latency, size) for their application. MobileNets primarily focus on optimizing for latency but also yield small networks. Many papers on small networks focus only on size but do not consider speed.
# 1. Introduction | 1704.04861#1 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
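To make the two global hyper-parameters named in the chunk above concrete, here is a rough, hypothetical cost sketch in Python: the width multiplier thins the channel counts and the resolution multiplier shrinks the feature-map size, so each acts roughly quadratically on the mult-add count. The depthwise separable cost expression used here follows Section 3 of the paper; the layer sizes are made-up examples, not values from the paper.

```python
# Rough cost model, not the authors' code: width multiplier scales channels,
# resolution multiplier scales the feature map size. Layer sizes are examples.
def separable_mult_adds(DK, M, N, DF, width=1.0, resolution=1.0):
    M, N, DF = int(width * M), int(width * N), int(resolution * DF)
    # depthwise cost + pointwise cost for one layer
    return DK * DK * M * DF * DF + M * N * DF * DF

base = separable_mult_adds(3, 512, 512, 14)
print(base / separable_mult_adds(3, 512, 512, 14, width=0.5))       # ~3.9x cheaper
print(base / separable_mult_adds(3, 512, 512, 14, resolution=0.5))  # ~4.0x cheaper
```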
1704.05119 | 1 | # ABSTRACT
Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90% and speed-up is around 2x to 7x.
# INTRODUCTION | 1704.05119#1 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 2 | # 1. Introduction
Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet [19] popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 [24]. The general trend has been to make deeper and more complicated networks in order to achieve higher accuracy [27, 31, 29, 8]. However, these advances to improve accuracy are not necessarily making networks more efficient with respect to size and speed. In many real world applications such as robotics, self-driving car and augmented reality, the recognition tasks need to be carried out in a timely fashion on a computationally limited platform.
This paper describes an efficient network architecture and a set of two hyper-parameters in order to build very small, low latency models that can be easily matched to the design requirements for mobile and embedded vision applications. Section 2 reviews prior work in building small | 1704.04861#2 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 2 | # INTRODUCTION
Recent advances in multiple fields such as speech recognition (Graves & Jaitly, 2014; Amodei et al., 2015), language modeling (Józefowicz et al., 2016) and machine translation (Wu et al., 2016) can be at least partially attributed to larger training datasets, larger models and more compute that allows larger models to be trained on larger datasets.
For example, the deep neural network used for acoustic modeling in Hannun et al. (2014) had 11 million parameters, which grew to approximately 67 million for bidirectional RNNs and further to 116 million for the latest forward only GRU models in Amodei et al. (2015). And in language modeling the size of the non-embedding parameters (mostly in the recurrent layers) has exploded even as various ways of hand engineering sparsity into the embeddings have been explored in Józefowicz et al. (2016) and Chen et al. (2015a). | 1704.05119#2 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 3 | MobileNets are built primarily from depthwise separable convolutions initially introduced in [26] and subsequently used in Inception models [13] to reduce the computation in the first few layers. Flattened networks [16] build a network out of fully factorized convolutions and showed the potential of extremely factorized networks. Independent of this current paper, Factorized Networks [34] introduces a similar factorized convolution as well as the use of topological connections. Subsequently, the Xception network [3] demonstrated how to scale up depthwise separable filters to outperform Inception V3 networks. Another small network is SqueezeNet [12] which uses a bottleneck approach to design a very small network. Other reduced computation networks include structured transform networks [28] and deep fried convnets [37].
A different approach for obtaining small networks is shrinking, factorizing or compressing pretrained networks. Compression based on product quantization [36], hashing
[Figure 1 image: example use cases — Object Detection, Finegrain Classification, Face Attributes, and Landmark Recognition. Photo credits: Juanede (CC BY 2.0), HarshLight (CC BY 2.0), Sharon VanderKaay (CC BY 2.0); Google Doodle by Sarah Harrison.]
Figure 1. MobileNet models can be applied to various recognition tasks for efficient on device intelligence. | 1704.04861#3 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 3 | These large models face two significant challenges in deployment. Mobile phones and embedded devices have limited memory and storage and in some cases network bandwidth is also a concern. In addition, the evaluation of these models requires a significant amount of computation. Even in cases when the networks can be evaluated fast enough, it will still have a significant impact on battery life in mobile devices (Han et al., 2015).
Inference performance of RNNs is dominated by the memory bandwidth of the hardware, since most of the work is simply reading in the parameters at every time step. Moving from a dense calculation to a sparse one comes with a penalty, but if the sparsity factor is large enough, then the smaller amount of data required by the sparse routines becomes a win. Furthermore, this suggests that if the parameter sizes can be reduced to fit in cache or other very fast memory, then large speedups could be realized, resulting in a super-linear increase in performance.
# âNow at Google Brain [email protected] â Now at Facebook AI Research [email protected]
Published as a conference paper at ICLR 2017 | 1704.05119#3 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
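A quick back-of-the-envelope calculation illustrates the bandwidth argument in the chunk above; the layer width, gate count, and sparsity level below are assumptions chosen for illustration, not numbers taken from the paper.

```python
# Hypothetical illustration: bytes of weights an RNN layer must read per timestep.
hidden, in_dim = 1792, 1792
weights = 3 * (hidden * hidden + in_dim * hidden)   # a GRU layer: 3 gates, recurrent + input
dense_mb = weights * 4 / 1e6                        # fp32
sparse_mb = dense_mb * 0.10                         # if ~90% of the weights are pruned
print(f"dense: {dense_mb:.0f} MB/timestep, sparse: {sparse_mb:.0f} MB/timestep")
```

Once the surviving parameters fit in on-chip cache or other very fast memory, per-timestep traffic drops further, which is the super-linear effect the text points to.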
1704.04861 | 4 | Figure 1. MobileNet models can be applied to various recognition tasks for efficient on device intelligence.
[2], and pruning, vector quantization and Huffman coding [5] have been proposed in the literature. Additionally various factorizations have been proposed to speed up pretrained networks [14, 20]. Another method for training small networks is distillation [9] which uses a larger network to teach a smaller network. It is complementary to our approach and is covered in some of our use cases in section 4. Another emerging approach is low bit networks [4, 22, 11].
# 3. MobileNet Architecture
A standard convolutional layer takes as input a DF × DF × M feature map F and produces a DF × DF × N feature map G, where DF is the spatial width and height of a square input feature map, M is the number of input channels (input depth), DG is the spatial width and height of a square output feature map and N is the number of output channels (output depth).
The standard convolutional layer is parameterized by a convolution kernel K of size DK × DK × M × N, where DK is the spatial dimension of the kernel (assumed to be square), M is the number of input channels and N is the number of output channels as defined previously. | 1704.04861#4 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 4 | # âNow at Google Brain [email protected] â Now at Facebook AI Research [email protected]
The more powerful server class GPUs used in data centers can generally perform inference quickly enough to serve one user, but in the data center performance per dollar is very important. Techniques that allow models to be evaluated faster enable more users to be served per GPU, increasing the effective performance per dollar.
We propose a method to reduce the number of weights in recurrent neural networks. While the network is training we progressively set more and more weights to zero using a monotonically increasing threshold. By controlling the shape of the function that maps iteration count to threshold value, we can control how sparse the final weight matrices become. We prune all the weights of a recurrent layer; other layer types with significantly fewer parameters are not pruned. Separate threshold functions can be used for each layer, although in practice we use one threshold function per layer type. With this approach, we can achieve sparsity of 90% with a small loss in accuracy. We show this technique works with Gated Recurrent Units (GRU) (Cho et al., 2014) as well as vanilla RNNs. | 1704.05119#4 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
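The chunk above describes pruning with a monotonically increasing magnitude threshold applied to the recurrent weights during training. The following is a minimal, hypothetical sketch of that idea; the schedule shape, constants, and matrix size are illustrative assumptions, not the paper's hyper-parameters.

```python
# Minimal sketch of threshold-based gradual pruning (illustrative constants).
import numpy as np

def threshold(step, start=5000, ramp_end=50000, final_thresh=0.05):
    """Monotonically increasing magnitude threshold over training iterations."""
    if step < start:
        return 0.0
    return final_thresh * min(1.0, (step - start) / (ramp_end - start))

def apply_pruning(weights, mask, step):
    """Zero out weights whose magnitude falls below the current threshold."""
    mask &= np.abs(weights) >= threshold(step)   # once pruned, a weight stays pruned
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 256))       # e.g. one recurrent weight matrix
mask = np.ones_like(W, dtype=bool)
for step in range(0, 60001, 10000):
    W, mask = apply_pruning(W, mask, step)       # in training this would run after each update
    print(step, f"sparsity = {1 - mask.mean():.2f}")
```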
1704.04861 | 5 | In this section we first describe the core layers that MobileNet is built on which are depthwise separable filters. We then describe the MobileNet network structure and conclude with descriptions of the two model shrinking hyper-parameters width multiplier and resolution multiplier.
The output feature map for standard convolution assuming stride one and padding is computed as:
G_{k,l,n} = \sum_{i,j,m} K_{i,j,m,n} \cdot F_{k+i-1,\, l+j-1,\, m}    (1)
# 3.1. Depthwise Separable Convolution
Standard convolutions have the computational cost of: | 1704.04861#5 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
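As a worked illustration of Eq. (1) in the chunk above, the following hypothetical NumPy sketch evaluates a stride-one, 'same'-padded standard convolution directly from the definition; shapes follow the paper's DF, DK, M, N notation, and the sizes are arbitrary examples.

```python
# Direct (slow) evaluation of G[k,l,n] = sum_{i,j,m} K[i,j,m,n] * F[k+i-1, l+j-1, m]
import numpy as np

def standard_conv(F, K):
    DF, _, M = F.shape                    # input feature map: DF x DF x M
    DK, _, _, N = K.shape                 # kernel: DK x DK x M x N
    pad = DK // 2
    Fp = np.pad(F, ((pad, pad), (pad, pad), (0, 0)))
    G = np.zeros((DF, DF, N))
    for k in range(DF):
        for l in range(DF):
            for n in range(N):
                G[k, l, n] = np.sum(Fp[k:k + DK, l:l + DK, :] * K[:, :, :, n])
    return G

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 8, 3))            # DF = 8, M = 3
K = rng.normal(size=(3, 3, 3, 4))         # DK = 3, N = 4
print(standard_conv(F, K).shape)          # (8, 8, 4)
# Mult-adds match Eq. (2): DK*DK*M*N*DF*DF = 3*3*3*4*8*8 = 6912
```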
1704.05119 | 5 | In addition to the benefits of less storage and faster inference, this technique can also improve the accuracy over a dense baseline. By starting with a larger dense matrix than the baseline and then pruning it down, we can achieve equal or better accuracy compared to the baseline but with a much smaller number of parameters.
This approach can be implemented easily in current training frameworks and is agnostic to the optimization algorithm. Furthermore, training time does not increase, unlike previous approaches such as in Han et al. (2015). State-of-the-art results in speech recognition generally require days to weeks of training time, so a further 3-4× increase in training time is undesirable.
# 2 RELATED WORK | 1704.05119#5 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 6 | # 3.1. Depthwise Separable Convolution
Standard convolutions have the computational cost of:
The MobileNet model is based on depthwise separable convolutions, which are a form of factorized convolution that factorizes a standard convolution into a depthwise convolution and a 1 × 1 convolution called a pointwise convolution. For MobileNets the depthwise convolution applies a single filter to each input channel. The pointwise convolution then applies a 1 × 1 convolution to combine the outputs of the depthwise convolution. A standard convolution both filters and combines inputs into a new set of outputs in one step. The depthwise separable convolution splits this into two layers, a separate layer for filtering and a separate layer for combining. This factorization has the effect of drastically reducing computation and model size. Figure 2 shows how a standard convolution 2(a) is factorized into a depthwise convolution 2(b) and a 1 × 1 pointwise convolution 2(c).
D_K · D_K · M · N · D_F · D_F   (2) | 1704.04861#6 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 6 | # 2 RELATED WORK
There have been several proposals to reduce the memory footprint of weights and activations in neural networks. One method is to use a fixed point representation to quantize weights to signed bytes and activations to unsigned bytes (Vanhoucke et al., 2011). Another technique that has been tried in the past is to learn a low rank factorization of the weight matrices. One method is to carefully construct one of the factors and learn the other (Denil et al., 2013). Inspired by this technique, a low rank approximation for the convolution layers achieves twice the speed while staying within 1% of the original model in terms of accuracy (Denton et al., 2014). The convolution layer can also be approximated by a smaller set of basis filters (Jaderberg et al., 2014). By doing this they achieve a 2.5x speedup with no loss in accuracy. Quantization techniques like k-means clustering of weights can also reduce the storage size of the models by focusing only on the fully connected layers (Gong et al., 2014). A hash function can also reduce memory footprint by tying together weights that fall in the same hash bucket (Chen et al., 2015b). This reduces the model size by a factor of 8. | 1704.05119#6 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 7 | DK · DK · M · N · DF · DF (2)
where the computational cost depends multiplicatively on the number of input channels M, the number of output channels N, the kernel size D_K × D_K and the feature map size D_F × D_F. MobileNet models address each of these terms and their interactions. First it uses depthwise separable convolutions to break the interaction between the number of output channels and the size of the kernel.
The standard convolution operation has the effect of filtering features based on the convolutional kernels and combining features in order to produce a new representation. The filtering and combination steps can be split into two steps via the use of factorized convolutions called depthwise separable convolutions for substantial reduction in computational cost.
A standard convolutional layer takes as input a D_F ×
1 We assume that the output feature map has the same spatial dimensions as the input and both feature maps are square. Our model shrinking results generalize to feature maps with arbitrary sizes and aspect ratios. | 1704.04861#7 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 7 | Yet another approach to reduce compute and network size is through network pruning. One method is to use several bias techniques to decay weights (Hanson & Pratt, 1989). Yet another approach is to use the diagonal terms of a Hessian matrix to construct a saliency threshold and use this to drop weights that fall below a given saliency threshold (LeCun et al., 1989). In this technique, once a weight has been set to 0, the network is retrained with these weights frozen at 0. Optimal Brain Surgeon is another work in the same vein that prunes weights using the inverse of a Hessian matrix, with the additional advantage of no re-training after pruning (Hassibi et al., 1993).
Both pruning and quantization techniques can be combined to get impressive gains on AlexNet trained on the ImageNet dataset (Han et al., 2015). In this case, pruning, quantization and subsequent Huffman encoding results in a 35x reduction in model size without affecting accuracy. There has also been some recent work to shrink model size for recurrent and LSTM networks used in automatic speech recognition (ASR) (Lu et al., 2016). By using a hybrid strategy of using Toeplitz matrices for the bottom layer and shared low-rank factors on the top layers, they were able to reduce the parameters of a LSTM by 75% while incurring a 0.3% increase in word error rate (WER). | 1704.05119#7 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 8 | separable convolutions for substantial reduction in computational cost.
Depthwise separable convolutions are made up of two layers: depthwise convolutions and pointwise convolutions. We use depthwise convolutions to apply a single filter per each input channel (input depth). Pointwise convolution, a simple 1×1 convolution, is then used to create a linear combination of the output of the depthwise layer. MobileNets use both batchnorm and ReLU nonlinearities for both layers.
Depthwise convolution with one filter per input channel (input depth) can be written as:
Ĝ_{k,l,m} = Σ_{i,j} K̂_{i,j,m} · F_{k+i-1, l+j-1, m}   (3)
where K̂ is the depthwise convolutional kernel of size D_K × D_K × M, where the m-th filter in K̂ is applied to the m-th channel in F to produce the m-th channel of the filtered output feature map Ĝ.
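As a concrete illustration of the two-layer structure described above, here is a minimal PyTorch-style sketch of a depthwise separable block (a 3×3 depthwise convolution implemented as a grouped convolution, followed by a 1×1 pointwise convolution, each with batchnorm and ReLU). The class and argument names are ours, and this is only a sketch of the idea, not the paper's reference implementation.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Sketch: 3x3 depthwise conv + 1x1 pointwise conv, each followed by BN and ReLU."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups = in_channels).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        # Pointwise: 1x1 conv that linearly combines the depthwise outputs.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        x = self.relu(self.bn2(self.pointwise(x)))
        return x
```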
Depthwise convolution has a computational cost of:
D_K · D_K · M · D_F · D_F   (4) | 1704.04861#8 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 8 | Our method is a pruning technique that is computationally efficient for large recurrent networks that have become the norm for automatic speech recognition. Unlike the methods that need to approximate a Hessian (LeCun et al., 1989; Hassibi et al., 1993), our method uses a simple heuristic to choose the threshold used to drop weights. Yet another advantage, when compared to methods that need re-training (Han et al., 2015), is that our pruning technique is part of training and needs
Table 1: Hyper-Parameters used for determining threshold (ε)
HYPER-PARAM | DESCRIPTION | HEURISTIC VALUES
start_itr | Iteration to start pruning | Start of second epoch
ramp_itr | Iteration to increase the rate of pruning | Start of 25% of total epochs
end_itr | Iteration to stop pruning more parameters | Start of 50% of total epochs
start_slope (θ) | Initial slope to prune the weights | See equation 1
ramp_slope (φ) | Ramp slope to change the rate of pruning | 1.5θ to 2θ
freq | Number of iterations after which ε is updated | 100 | 1704.05119#8 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 9 | Depthwise convolution has a computational cost of:
D_K · D_K · M · D_F · D_F   (4)
Depthwise convolution is extremely efficient relative to standard convolution. However, it only filters input channels; it does not combine them to create new features. So an additional layer that computes a linear combination of the output of depthwise convolution via 1 × 1 convolution is needed in order to generate these new features.
The combination of depthwise convolution and 1 × 1 (pointwise) convolution is called depthwise separable convolution, which was originally introduced in [26].
Depthwise separable convolutions cost:
D_K · D_K · M · D_F · D_F + M · N · D_F · D_F   (5)
which is the sum of the depthwise and 1 × 1 pointwise convolutions.
By expressing convolution as a two step process of filtering and combining we get a reduction in computation of:
(D_K · D_K · M · D_F · D_F + M · N · D_F · D_F) / (D_K · D_K · M · N · D_F · D_F) = 1/N + 1/D_K²
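As a quick sanity check of this ratio, the short Python sketch below (the function name and channel counts are illustrative, not values from the paper) evaluates 1/N + 1/D_K² for a 3×3 kernel, giving the roughly 8 to 9 times saving quoted next.

```python
# Relative cost of a depthwise separable convolution vs. a standard convolution:
# ratio = 1/N + 1/(D_K ** 2). The channel counts below are only examples.
def cost_ratio(n_out_channels, kernel_size):
    return 1.0 / n_out_channels + 1.0 / (kernel_size ** 2)

for n in (64, 256, 1024):
    r = cost_ratio(n, 3)
    print(f"N={n}: ratio={r:.4f}  (about {1 / r:.1f}x fewer mult-adds)")
# With 3x3 kernels the saving approaches 9x as N grows, i.e. between 8x and 9x in practice.
```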
MobileNet uses 3 × 3 depthwise separable convolutions, which use between 8 and 9 times less computation than standard convolutions at only a small reduction in accuracy as seen in Section 4. | 1704.04861#9 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 9 | no additional re-training. Even though our technique requires judicious choice of pruning hyper-parameters, we feel that it is easier than choosing the structure of matrices to guide the sparsification for recurrent networks (Lu et al., 2016). Another approach for pruning feed forward neural networks for speech recognition is to use a simple threshold to prune all weights (Yu et al., 2012) at a particular epoch. However, we find that gradual pruning produces better results than hard pruning.
# 3 IMPLEMENTATION
Our pruning approach involves maintaining a set of masks, a monotonically increasing threshold and a set of hyper parameters that are used to determine the threshold. During model initialization, we create a set of binary masks, one for each weight in the network that are all initially set to one. After every optimizer update step, each weight is multiplied with its corresponding mask. At regular intervals, the masks are updated by setting all parameters that are lower than the threshold to zero. | 1704.05119#9 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 10 | Additional factorization in spatial dimension such as in [16, 31] does not save much additional computation as very little computation is spent in depthwise convolutions.
(a) Standard Convolution Filters
(b) Depthwise Convolutional Filters
(c) 1×1 Convolutional Filters called Pointwise Convolution in the context of Depthwise Separable Convolution
Figure 2. The standard convolutional filters in (a) are replaced by two layers: depthwise convolution in (b) and pointwise convolution in (c) to build a depthwise separable filter.
# 3.2. Network Structure and Training | 1704.04861#10 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 10 | The threshold is computed using hyper-parameters shown in Table 1. The hyper-parameters control the duration, rate and frequency of pruning the parameters for each layer. We use a different set of hyper-parameters for each layer type resulting in a different threshold for each layer type. The threshold is updated at regular intervals using the hyper-parameters according to Algorithm 1. We don't modify the gradients in the back-propagation step. It is possible for the updates of a pruned weight to be larger than the threshold of that layer. In this case, the weight will be involved in the forward pass again.
We provide heuristics to help determine start_itr, ramp_itr and end_itr in Table 1. After picking these hyper-parameters and assuming that ramp_slope (φ) is 1.5× start_slope (θ), we calculate θ using equation 1.
θ = (2 · q · freq) / (2 · (ramp_itr - start_itr) + 3 · (end_itr - ramp_itr))   (1) | 1704.05119#10 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 11 | # 3.2. Network Structure and Training
The MobileNet structure is built on depthwise separable convolutions as mentioned in the previous section except for the first layer which is a full convolution. By defining the network in such simple terms we are able to easily explore network topologies to find a good network. The MobileNet architecture is defined in Table 1. All layers are followed by a batchnorm [13] and ReLU nonlinearity with the exception of the final fully connected layer which has no nonlinearity and feeds into a softmax layer for classification. Figure 3 contrasts a layer with regular convolutions, batchnorm and ReLU nonlinearity to the factorized layer with depthwise convolution, 1 × 1 pointwise convolution as well as batchnorm and ReLU after each convolutional layer. Down sampling is handled with strided convolution in the depthwise convolutions as well as in the first layer. A final average pooling reduces the spatial resolution to 1 before the fully connected layer. Counting depthwise and pointwise convolutions as separate layers, MobileNet has 28 layers. | 1704.04861#11 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 11 | θ = (2 · q · freq) / (2 · (ramp_itr - start_itr) + 3 · (end_itr - ramp_itr))   (1)
In order to determine q in equation 1, we use an existing weight array from a previously trained model. The weights are sorted using absolute values and we pick the weight corresponding to the 90th percentile as q. This allows us to pick reasonable values for the hyper-parameters required for pruning. A validation set can be used to fine-tune these parameters.
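A minimal Python sketch of this heuristic is shown below; the function and variable names are our own, and the iteration counts are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def pick_q(pretrained_weights, percentile=90):
    # q: weight magnitude at the chosen percentile of an existing trained model's weights.
    return np.percentile(np.abs(pretrained_weights), percentile)

def start_slope(q, freq, start_itr, ramp_itr, end_itr):
    # Equation 1; the factor 3 in the denominator reflects the stated assumption
    # that ramp_slope = 1.5 * start_slope.
    return (2.0 * q * freq) / (2.0 * (ramp_itr - start_itr) + 3.0 * (end_itr - ramp_itr))

# Illustrative usage with placeholder iteration counts.
weights = np.random.randn(10000) * 0.1
q = pick_q(weights)
theta = start_slope(q, freq=100, start_itr=2000, ramp_itr=10000, end_itr=20000)
phi = 1.5 * theta  # ramp_slope
```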
We only prune the weights of the recurrent and linear layers but not the biases or batch norm parameters since they are much fewer in number compared to the weights. For the recurrent layers, we prune both the input weight matrix and the recurrent weight matrix. Similarly, we prune all the weights in gated recurrent units including those of the reset and update gates.
# Algorithm 1 Pruning Algorithm | 1704.05119#11 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 12 | It is not enough to simply define networks in terms of a small number of Mult-Adds. It is also important to make sure these operations can be efficiently implemented. For
[Figure 3 diagram: left, 3×3 Conv → BN → ReLU; right, 3×3 Depthwise Conv → BN → ReLU → 1×1 Conv → BN → ReLU]
Figure 3. Left: Standard convolutional layer with batchnorm and ReLU. Right: Depthwise Separable convolutions with Depthwise and Pointwise layers followed by batchnorm and ReLU. | 1704.04861#12 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 12 |
# Algorithm 1 Pruning Algorithm
current_itr = 0
while training do
    for all parameters do
        param = (param and mask)
        if current_itr > start_itr and current_itr < end_itr then
            if (current_itr mod freq) == 0 then
                if current_itr < ramp_itr then
                    ε = θ · (current_itr - start_itr + 1) / freq
                else
                    ε = (θ · (ramp_itr - start_itr + 1) + φ · (current_itr - ramp_itr + 1)) / freq
                end if
                mask = abs(param) < ε
            end if
        end if
    end for
    current_itr += 1
end while
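To make the mask bookkeeping concrete, here is a minimal framework-agnostic NumPy sketch of the schedule in Algorithm 1. The class and method names are ours; as a sketch it keeps a weight only while its magnitude is at least the current threshold, so a pruned weight whose gradient update pushes it back above the threshold can re-enter the forward pass, as described above.

```python
import numpy as np

class GradualPruner:
    """Minimal sketch of the gradual pruning schedule in Algorithm 1 (names are ours)."""
    def __init__(self, theta, phi, start_itr, ramp_itr, end_itr, freq):
        self.theta, self.phi = theta, phi
        self.start_itr, self.ramp_itr = start_itr, ramp_itr
        self.end_itr, self.freq = end_itr, freq

    def threshold(self, itr):
        # Threshold grows with slope theta before ramp_itr and with the extra slope phi after it.
        if itr < self.ramp_itr:
            return self.theta * (itr - self.start_itr + 1) / self.freq
        return (self.theta * (self.ramp_itr - self.start_itr + 1)
                + self.phi * (itr - self.ramp_itr + 1)) / self.freq

    def step(self, weights, mask, itr):
        # Call once per training iteration, after the optimizer update.
        if self.start_itr < itr < self.end_itr and itr % self.freq == 0:
            eps = self.threshold(itr)
            # Keep only weights whose magnitude is at least the current threshold.
            mask = (np.abs(weights) >= eps)
        weights *= mask  # zero out pruned weights for the next forward pass
        return weights, mask
```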
# 4 EXPERIMENTS
We run all our experiments on a training set of 2100 hours of English speech data and a validation set of 3.5 hours of multi-speaker data. This is a small subset of the datasets that we use to train our state-of-the-art automatic speech recognition models. We train the models using Nesterov SGD for 20 epochs. Besides the hyper-parameters for determining the threshold, all other hyper-parameters remain unchanged between the dense and sparse training runs. We find that our pruning approach works well for vanilla bidirectional recurrent layers and forward only gated recurrent units. | 1704.05119#12 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 13 | instance unstructured sparse matrix operations are not typically faster than dense matrix operations until a very high level of sparsity. Our model structure puts nearly all of the computation into dense 1 × 1 convolutions. This can be implemented with highly optimized general matrix multiply (GEMM) functions. Often convolutions are implemented by a GEMM but require an initial reordering in memory called im2col in order to map it to a GEMM. For instance, this approach is used in the popular Caffe package [15]. 1 × 1 convolutions do not require this reordering in memory and can be implemented directly with GEMM, which is one of the most optimized numerical linear algebra algorithms. MobileNet spends 95% of its computation time in 1 × 1 convolutions which also have 75% of the parameters as can be seen in Table 2. Nearly all of the additional parameters are in the fully connected layer. | 1704.04861#13 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 13 | 4.1 BIDIRECTIONAL RNNS
We use the Deep Speech 2 model for these experiments. As shown in Table 2, this model has 2 convolution layers, followed by 7 bidirectional recurrent layers and a CTC cost layer. Each recurrent linear layer has 1760 hidden units, creating a network of approximately 67 million parameters. For these experiments, we prune the linear layers that feed into the recurrent layers, the forward and backward recurrent layers and fully connected layer before the CTC layer. These experiments use clipped rectified-linear units (ReLU) σ(x) = min(max(x, 0), 20) as the activation function. In the sparse run, the pruning begins shortly after the first epoch and continues until the 10th epoch. We chose these hyper-parameters so that the model has an overall sparsity of 88% at the end of pruning, which is 8x smaller than the original dense model. The character error rate (CER) on the devset is about 20% worse relative to the dense model as shown in Table 3. | 1704.05119#13 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 14 | MobileNet models were trained in TensorFlow [1] using RMSprop [33] with asynchronous gradient descent similar to Inception V3 [31]. However, contrary to training large models we use less regularization and data augmentation techniques because small models have less trouble with overfitting. When training MobileNets we do not use side heads or label smoothing and additionally reduce the amount of image distortions by limiting the size of small crops that are used in large Inception training [31]. Additionally, we found that it was important to put very little or no weight decay (l2 regularization) on the depthwise filters since there are so few parameters in them. For the ImageNet benchmarks in the next section all models were trained with the same training parameters regardless of the size of the model.
# 3.3. Width Multiplier: Thinner Models
Although the base MobileNet architecture is already small and low latency, many times a specific use case or application may require the model to be smaller and faster. In order to construct these smaller and less computationally expensive models we introduce a very simple parameter α called width multiplier. The role of the width multiplier α is to thin a network uniformly at each layer. For a given layer
Table 1. MobileNet Body Architecture | 1704.04861#14 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 14 | An argument against this sparsity result might be that we are taking advantage of a large model that overfits our relatively small dataset. In order to test this hypothesis, we train a dense model with 704 hidden units in each layer, that has approximately the same number of parameters as the final sparse model. Table 3 shows that this model performs worse than the sparse models. Thus a sparse model is a better approach to reducing parameters than using a dense model with fewer hidden units. | 1704.05119#14 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 15 | Type / Stride | Filter Shape
Conv / s2 | 3×3×3×32
Conv dw / s1 | 3×3×32 dw
Conv / s1 | 1×1×32×64
Conv dw / s2 | 3×3×64 dw
Conv / s1 | 1×1×64×128
Conv dw / s1 | 3×3×128 dw
Conv / s1 | 1×1×128×128
Conv dw / s2 | 3×3×128 dw
Conv / s1 | 1×1×128×256
Conv dw / s1 | 3×3×256 dw
Conv / s1 | 1×1×256×256
Conv dw / s2 | 3×3×256 dw
Conv / s1 | 1×1×256×512
5× Conv dw / s1 | 3×3×512 dw
5× Conv / s1 | 1×1×512×512
Conv dw / s2 | 3×3×512 dw
Conv / s1 | 1×1×512×1024
Conv dw / s2 | 3×3×1024 dw
Conv / s1 | 1×1×1024×1024
Avg Pool / s1 | Pool 7×7
FC / s1 | 1024×1000
Softmax / s1 | Classifier
Input Size: 224×224×3, 112×112×32, 112×112×32, 112 | 1704.04861#15 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 15 | In order to recover the loss in accuracy, we train sparse models with larger recurrent layers with 2560 and 3072 hidden units. Figure 1a shows the training and dev curves for these sparse models compared to the dense baseline model. These experiments use the same hyper-parameters (except for small changes in the pruning hyper-parameters) and the same dataset as the baseline model. As we see in Table 3, the model with 2560 hidden units achieves a 0.75% relative improvement compared to the dense baseline model, while the model with 3072 hidden units has a 3.95% improvement. The dense 2560 model also improves the CER by 11.85% relative to the dense baseline model. The sparse 2560 model is about 12% worse than the corresponding dense model. Both these large models are pruned to achieve a final sparsity of around 92%. These sparse larger models have significantly fewer parameters than the baseline dense model.
Table 2: Deep Speech 2 architecture with 1760 hidden units | 1704.05119#15 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 16 | Filter Shape (continued): dw, 1×1×1024×1024, Pool 7×7, 1024×1000, Classifier
Input Size: 224×224×3, 112×112×32, 112×112×32, 112×112×64, 56×56×64, 56×56×128, 56×56×128, 56×56×128, 28×28×128, 28×28×256, 28×28×256, 28×28×256, 14×14×256, 14×14×512, 14×14×512, 14×14×512, 7×7×512, 7×7×1024, 7×7×1024, 7×7×1024, 1×1×1024, 1×1×1000 | 1704.04861#16 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 16 |
Table 2: Deep Speech 2 architecture with 1760 hidden units
LAYER ID | TYPE | # PARAMS
layer 0 | 2D Convolution | 19616
layer 1 | 2D Convolution | 239168
layer 2 | Bidirectional Recurrent Linear | 8507840
layer 3 | Bidirectional Recurrent Linear | 9296320
layer 4 | Bidirectional Recurrent Linear | 9296320
layer 5 | Bidirectional Recurrent Linear | 9296320
layer 6 | Bidirectional Recurrent Linear | 9296320
layer 7 | Bidirectional Recurrent Linear | 9296320
layer 8 | Bidirectional Recurrent Linear | 9296320
layer 9 | FullyConnected | 3101120
layer 10 | CTCCost | 95054
We also compare our gradual pruning approach to the hard pruning approach proposed in Yu et al. (2012). In their approach, all parameters below a certain threshold are pruned at a particular epoch. Table 4 shows the results of pruning the RNN dense baseline model at different epochs to achieve final parameter count ranging from 8 million to 11 million. The network is trained for the same number of epochs as the gradual pruning experiments. These hard threshold results are compared with the RNN Sparse 1760 model in Table 3. For approximately the same number of parameters, gradual pruning is 7% to 9% better than hard pruning. | 1704.05119#16 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 17 | Table 2. Resource Per Layer Type
Type | Mult-Adds | Parameters
Conv 1×1 | 94.86% | 74.59%
Conv DW 3×3 | 3.06% | 1.06%
Conv 3×3 | 1.19% | 0.02%
Fully Connected | 0.18% | 24.33%
and width multiplier α, the number of input channels M becomes αM and the number of output channels N becomes αN.
The computational cost of a depthwise separable convolution with width multiplier α is:
D_K · D_K · αM · D_F · D_F + αM · αN · D_F · D_F
where α ∈ (0, 1] with typical settings of 1, 0.75, 0.5 and 0.25. α = 1 is the baseline MobileNet and α < 1 are reduced MobileNets. Width multiplier has the effect of reducing computational cost and the number of parameters quadratically by roughly α². Width multiplier can be applied to any model structure to define a new smaller model with a reasonable accuracy, latency and size trade off. It is used to define a new reduced structure that needs to be trained from scratch.
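To illustrate the roughly quadratic effect of α, here is a small Python sketch (the function name and layer sizes are ours, chosen only for illustration) that evaluates the depthwise separable cost above for a few width multipliers.

```python
# Depthwise separable mult-adds for one layer with width multiplier alpha
# (assumes the formula above; the layer sizes below are illustrative, not from the paper).
def ds_cost(alpha, dk, df, m, n):
    return dk * dk * (alpha * m) * df * df + (alpha * m) * (alpha * n) * df * df

base = ds_cost(1.0, dk=3, df=14, m=512, n=512)
for alpha in (1.0, 0.75, 0.5, 0.25):
    c = ds_cost(alpha, dk=3, df=14, m=512, n=512)
    print(f"alpha={alpha}: {c / base:.3f} of baseline cost")
# The pointwise term dominates, so the cost falls roughly as alpha**2.
```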
# 3.4. Resolution Multiplier: Reduced Representation | 1704.04861#17 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 17 | We conclude that pruning models to achieve sparsity of around 90% reduces the relative accuracy of the model by 10% to 20%. However, for a given performance requirement, it is better to prune a larger model than to use a smaller dense model. Gradually pruning a model produces better results than hard pruning.
Table 3: GRU & bidirectional RNN model results
MODEL               # UNITS   CER     # PARAMS       RELATIVE PERF
RNN Dense Baseline  1760      10.67   67 million     0.0%
RNN Dense Small     704       14.50   11.6 million   -35.89%
RNN Dense Medium    2560      9.43    141 million    11.85%
RNN Sparse 1760     1760      12.88   8.3 million    -20.71%
RNN Sparse Medium   2560      10.59   11.1 million   0.75%
RNN Sparse Big      3072      10.25   16.7 million   3.95%
GRU Dense           2560      9.55    115 million    0.0%
GRU Sparse          2560      10.87   13 million     -13.82%
GRU Sparse Medium   3568      9.76    17.8 million   -2.20%
Table 4: RNN dense baseline model with hard pruning
# UNITS PRUNED EPOCH CER # PARAMS RELATIVE PERF | 1704.05119#17 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 18 | # 3.4. Resolution Multiplier: Reduced Representation
The second hyper-parameter to reduce the computational cost of a neural network is a resolution multiplier ρ. We apply this to the input image and the internal representation of every layer is subsequently reduced by the same multiplier. In practice we implicitly set ρ by setting the input resolution.
Table 3. Resource usage for modifications to standard convolution. Note that each row is a cumulative effect adding on top of the previous row. This example is for an internal MobileNet layer with D_K = 3, M = 512, N = 512, D_F = 14.
Layer/Modification         Million Mult-Adds   Million Parameters
Convolution                462                 2.36
Depthwise Separable Conv   52.3                0.27
α = 0.75                   29.6                0.15
ρ = 0.714                  15.1                0.15
We can now express the computational cost for the core layers of our network as depthwise separable convolutions with width multiplier α and resolution multiplier ρ:
$D_K \cdot D_K \cdot \alpha M \cdot \rho D_F \cdot \rho D_F + \alpha M \cdot \alpha N \cdot \rho D_F \cdot \rho D_F$ (7) | 1704.04861#18 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 18 | Table 4: RNN dense baseline model with hard pruning
# UNITS   PRUNED EPOCH   CER     # PARAMS      RELATIVE PERF
1760      5              13.82   8 million     -29.52%
1760      7              13.27   11 million    -24.37%
1760      10             13.41   8.4 million   -25.68%
1760      12             13.63   8 million     -27.74%
1760      15             26.33   9.2 million   -146.77%
[Figure 1: CTC cost versus epoch number; training and dev curves for panels (a) and (b), described in the caption below.]
Figure 1: Training and dev curves for baseline (dense) and sparse training. Figure 1a includes training and dev curves for models with larger recurrent layers with 2560 and 3072 hidden units compared to the 1760 dense baseline. Figure 1b plots the training and dev curves for GRU models (sparse and dense) with 2560 hidden units.
Table 5: Gated recurrent units model | 1704.05119#18 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 19 | $D_K \cdot D_K \cdot \alpha M \cdot \rho D_F \cdot \rho D_F + \alpha M \cdot \alpha N \cdot \rho D_F \cdot \rho D_F$ (7)
where ρ ∈ (0, 1] which is typically set implicitly so that the input resolution of the network is 224, 192, 160 or 128. ρ = 1 is the baseline MobileNet and ρ < 1 are reduced computation MobileNets. Resolution multiplier has the effect of reducing computational cost by ρ².
As an example we can look at a typical layer in MobileNet and see how depthwise separable convolutions, width multiplier and resolution multiplier reduce the cost and parameters. Table 3 shows the computation and number of parameters for a layer as architecture shrinking methods are sequentially applied to the layer. The first row shows the Mult-Adds and parameters for a full convolutional layer with an input feature map of size 14 × 14 × 512 with a kernel K of size 3 × 3 × 512 × 512. We will look in detail in the next section at the trade offs between resources and accuracy.
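To see that the cumulative numbers in Table 3 follow from equation (7), here is a small sketch (ours, not the authors' code) that plugs in the example layer above with D_K = 3, M = N = 512, D_F = 14:

```python
def standard_conv_mult_adds(dk, m, n, df):
    return dk * dk * m * n * df * df

def separable_conv_mult_adds(dk, m, n, df, alpha=1.0, rho=1.0):
    m_a, n_a = round(alpha * m), round(alpha * n)   # width multiplier shrinks the channels
    df_r = round(rho * df)                          # resolution multiplier shrinks the feature map
    return dk * dk * m_a * df_r * df_r + m_a * n_a * df_r * df_r   # equation (7)

dk, m, n, df = 3, 512, 512, 14
print(standard_conv_mult_adds(dk, m, n, df) / 1e6)                          # ~462  (full convolution)
print(separable_conv_mult_adds(dk, m, n, df) / 1e6)                         # ~52.3 (depthwise separable)
print(separable_conv_mult_adds(dk, m, n, df, alpha=0.75) / 1e6)             # ~29.6 (+ width multiplier)
print(separable_conv_mult_adds(dk, m, n, df, alpha=0.75, rho=0.714) / 1e6)  # ~15.1 (+ resolution multiplier)
```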
# 4. Experiments | 1704.04861#19 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 19 | Table 5: Gated recurrent units model
LAYER ID   TYPE                     # PARAMS
layer 0    2D Convolution           19616
layer 1    2D Convolution           239168
layer 2    Gated Recurrent Linear   29752320
layer 3    Gated Recurrent Linear   39336960
layer 4    Gated Recurrent Linear   39336960
layer 5    Row Convolution          107520
layer 6    FullyConnected           6558720
layer 7    CTCCost                  74269
4.2 GATED RECURRENT UNITS
We also experimented with GRU models shown in Table 5, which have 2560 hidden units in the GRU layer and a total of 115 million parameters. For these experiments, we prune all layers except the convolution layers since they have relatively fewer parameters. | 1704.05119#19 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 20 | # 4. Experiments
In this section we first investigate the effects of depthwise convolutions as well as the choice of shrinking by reducing the width of the network rather than the number of layers. We then show the trade offs of reducing the network based on the two hyper-parameters: width multiplier and resolution multiplier and compare results to a number of popular models. We then investigate MobileNets applied to a number of different applications.
# 4.1. Model Choices
First we show results for MobileNet with depthwise separable convolutions compared to a model built with full convolutions. In Table 4 we see that using depthwise separable convolutions compared to full convolutions only reduces
Table 4. Depthwise Separable vs Full Convolution MobileNet (Million Parameters: 29.3 for the full-convolution model, 4.2 for the depthwise separable MobileNet)
Table 5. Narrow vs Shallow MobileNet
Model               ImageNet Accuracy   Million Mult-Adds   Million Parameters
0.75 MobileNet      68.4%               325                 2.6
Shallow MobileNet   65.3%               307                 2.9
Table 6. MobileNet Width Multiplier | 1704.04861#20 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 20 | Figure 1b compares the training and dev curves of a sparse GRU model and a dense GRU model. The sparse GRU model has a 13.8% drop in the accuracy relative to the dense model. As shown in Table 3, the sparse model has an overall sparsity of 88.6% with 13 million parameters. Similar to the RNN models, we train a sparse GRU model with 3568 hidden units. The dataset and the hyperparameters are not changed from the previous GRU experiments. This model has an overall sparsity of 91.82% with 17.8 million parameters. As shown in Table 3, the model with 3568 hidden units is only 2.2% worse than the baseline dense GRU model. We expect to match the performance of the GRU dense network by slightly lowering the sparsity of this network or by increasing the hidden units for the layers.
In addition, we experimented with pruning only the GRU layers and keeping all the parameters in fully connected layers. The accuracy for these experiments is around 7% worse than the baseline dense model. However, this model only achieves 50% compression due to the size of the fully connected layers.
Table 6: GEMM times for recurrent layers with different sparsity
LAYER SIZE SPARSITY LAYER TYPE TIME (µsec) SPEEDUP | 1704.05119#20 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 21 | Table 6. MobileNet Width Multiplier
Width Multiplier     ImageNet Accuracy   Million Mult-Adds   Million Parameters
1.0 MobileNet-224    70.6%               569                 4.2
0.75 MobileNet-224   68.4%               325                 2.6
0.5 MobileNet-224    63.7%               149                 1.3
0.25 MobileNet-224   50.6%               41                  0.5

Table 7. MobileNet Resolution
Resolution           ImageNet Accuracy   Million Mult-Adds   Million Parameters
1.0 MobileNet-224    70.6%               569                 4.2
1.0 MobileNet-192    69.1%               418                 4.2
1.0 MobileNet-160    67.2%               290                 4.2
1.0 MobileNet-128    64.4%               186                 4.2
accuracy by 1% on ImageNet while saving tremendously on mult-adds and parameters.
We next show results comparing thinner models with width multiplier to shallower models using fewer layers. To make MobileNet shallower, the 5 layers of separable filters with feature size 14 × 14 × 512 in Table 1 are removed. Table 5 shows that at similar computation and number of parameters, making MobileNets thinner is 3% better than making them shallower.
# 4.2. Model Shrinking Hyperparameters | 1704.04861#21 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 21 | Table 6: GEMM times for recurrent layers with different sparsity
LAYER SIZE   SPARSITY   LAYER TYPE   TIME (µsec)   SPEEDUP
1760         0%         RNN          56            1
1760         95%        RNN          20            2.8
2560         95%        RNN          29            1.93
3072         95%        RNN          48            1.16
2560         0%         GRU          313           1
2560         95%        GRU          46            6.80
3568         95%        GRU          89            3.5
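For reference, a minimal CPU sketch (ours) of the sparse matrix-vector product timed above, using SciPy's CSR format; it only illustrates the SpMV operation itself and is not the cuSPARSE/CUDNN GPU benchmark behind Table 6:

```python
import numpy as np
from scipy import sparse

hidden, sparsity = 1760, 0.95
rng = np.random.default_rng(0)

# Recurrent weight matrix with 95% of its entries zeroed out, stored in compressed sparse row form.
w = rng.standard_normal((hidden, hidden)).astype(np.float32)
w[rng.random((hidden, hidden)) < sparsity] = 0.0
w_sparse = sparse.csr_matrix(w)

h = rng.standard_normal(hidden).astype(np.float32)  # hidden-state vector (minibatch of 1)
y_dense = w @ h           # dense matrix-vector product
y_sparse = w_sparse @ h   # sparse matrix-vector product (SpMV)
assert np.allclose(y_dense, y_sparse, atol=1e-4)
```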
# 5 PERFORMANCE
5.1 COMPUTE TIME
The success of deep learning in recent years has been driven by large models trained on large datasets. However, this also increases the inference time after the models have been deployed. We can mitigate this effect by using sparse layers. | 1704.05119#21 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 22 | # 4.2. Model Shrinking Hyperparameters
Table 6 shows the accuracy, computation and size trade offs of shrinking the MobileNet architecture with the width multiplier α. Accuracy drops off smoothly until the architecture is made too small at α = 0.25.
Table 7 shows the accuracy, computation and size trade offs for different resolution multipliers by training MobileNets with reduced input resolutions. Accuracy drops off smoothly across resolution.
Figure 4 shows the trade off between ImageNet Accuracy and computation for the 16 models made from the cross product of width multiplier α ∈ {1, 0.75, 0.5, 0.25} and resolutions {224, 192, 160, 128}. Results are log linear with a jump when models get very small at α = 0.25.
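As a rough illustration (ours), the 16 configurations and their approximate compute relative to 1.0 MobileNet-224 can be enumerated directly, since Mult-Adds scale roughly with α² and with the square of the input resolution:

```python
from itertools import product

alphas = (1.0, 0.75, 0.5, 0.25)
resolutions = (224, 192, 160, 128)

for alpha, res in product(alphas, resolutions):
    scale = alpha ** 2 * (res / 224) ** 2   # approximate Mult-Adds relative to 1.0 MobileNet-224
    print(f"{alpha} MobileNet-{res}: ~{scale:.2f}x baseline compute")
```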
[Figure 4: scatter plot of ImageNet accuracy (y-axis) versus million Mult-Adds (x-axis, log scale).]
Figure 4. This figure shows the trade off between computation (Mult-Adds) and accuracy on the ImageNet benchmark. Note the log linear dependence between accuracy and computation.
[Figure 5: scatter plot of ImageNet accuracy versus million parameters; marker colors encode input resolutions 224, 192, 160 and 128.] | 1704.04861#22 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 22 | A General Matrix-Matrix Multiply (GEMM) is the most compute-intensive operation in evaluating a neural network model. Table 6 compares times for GEMM for recurrent layers with different numbers of hidden units that are 95% sparse. The performance benchmark was run using NVIDIA's CUDNN and cuSPARSE libraries on a TitanX Maxwell GPU and compiled using CUDA 7.5. All experiments are run on a minibatch of 1 and in this case, the operation is known as a sparse matrix-vector product (SpMV). We can achieve speed-ups ranging from 3x to 1.15x depending on the size of the recurrent layer. Similarly, for the GRU models, the speed-ups range from 7x to 3.5x. However, we notice that cuSPARSE performance is substantially lower than the approximately 20x speedup that we would expect by comparing the bandwidth requirements of the 95% sparse and dense networks. State of the art SpMV routines can achieve close to device memory bandwidth for a wide array of matrix shapes and sparsity patterns (see Baxter (2016) and Liu et al. (2013)). This means that the performance should improve by the factor that parameter counts are reduced. Additionally, we | 1704.05119#22 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 23 | [Figure 5: scatter plot of ImageNet accuracy versus million parameters; marker colors encode input resolutions 224, 192, 160 and 128.]
Figure 5. This figure shows the trade off between the number of parameters and accuracy on the ImageNet benchmark. The colors encode input resolutions. The number of parameters does not vary based on the input resolution.
Figure 5 shows the trade off between ImageNet Accuracy and number of parameters for the 16 models made from the cross product of width multiplier α ∈ {1, 0.75, 0.5, 0.25} and resolutions {224, 192, 160, 128}.
Table 8 compares full MobileNet to the original GoogleNet [30] and VGG16 [27]. MobileNet is nearly as accurate as VGG16 while being 32 times smaller and 27 times less compute intensive. It is more accurate than GoogleNet while being smaller and more than 2.5 times less computation.
Table 9 compares a reduced MobileNet with width multiplier α = 0.5 and reduced resolution 160 × 160. Reduced MobileNet is 4% better than AlexNet [19] while being 45× smaller and 9.4× less compute than AlexNet. It is also 4% better than Squeezenet [12] at about the same size and 22× less computation.
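The headline ratios quoted in the two paragraphs above follow from the table entries; a quick check (ours) using the numbers reproduced in Tables 8 and 9:

```python
# Ratios quoted above, computed from the Table 8 and Table 9 entries.
vgg16_params, mobilenet_params = 138, 4.2   # millions of parameters
vgg16_madds, mobilenet_madds = 15300, 569   # millions of Mult-Adds
print(vgg16_params / mobilenet_params)      # ~32x smaller than VGG16
print(vgg16_madds / mobilenet_madds)        # ~27x less compute than VGG16

alexnet_params, small_mobilenet_params = 60, 1.32
alexnet_madds, small_mobilenet_madds = 720, 76
squeezenet_madds = 1700
print(alexnet_params / small_mobilenet_params)   # ~45x smaller than AlexNet
print(alexnet_madds / small_mobilenet_madds)     # ~9.4x less compute than AlexNet
print(squeezenet_madds / small_mobilenet_madds)  # ~22x less computation than SqueezeNet
```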
# Table 8. MobileNet Comparison to Popular Models Model | 1704.04861#23 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.04861 | 24 | # Table 8. MobileNet Comparison to Popular Models
Model               ImageNet Accuracy   Million Mult-Adds   Million Parameters
1.0 MobileNet-224   70.6%               569                 4.2
GoogleNet           69.8%               1550                6.8
VGG 16              71.5%               15300               138

# Table 9. Smaller MobileNet Comparison to Popular Models
Model                ImageNet Accuracy   Million Mult-Adds   Million Parameters
0.50 MobileNet-160   60.2%               76                  1.32
Squeezenet           57.5%               1700                1.25
AlexNet              57.2%               720                 60
Table 10. MobileNet for Stanford Dogs
Model                Top-1 Accuracy   Million Mult-Adds   Million Parameters
Inception V3 [18]    84%              5000                23.2
1.0 MobileNet-224    83.3%            569                 3.3
0.75 MobileNet-224   81.9%            325                 1.9
1.0 MobileNet-192    81.9%            418                 3.3
0.75 MobileNet-192   80.5%            239                 1.9 | 1704.04861#24 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 24 | 5.2 COMPRESSION
Pruning allows us to reduce the memory footprint of a model, which allows it to be deployed on phones and other embedded devices. The Deep Speech 2 model can be compressed from 268 MB to around 32 MB (1760 hidden units) or 64 MB (3072 hidden units). The GRU model can be compressed from 460 MB to 50 MB. These pruned models can be further quantized down to float16 or other smaller datatypes to further reduce the memory requirements without impacting accuracy.
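The quoted sizes follow from the parameter counts in Table 3; a sketch of the arithmetic (ours), assuming 4-byte (float32) weights, decimal megabytes, and ignoring any sparse-index overhead:

```python
def model_size_mb(num_params, bytes_per_weight=4):
    """Approximate model size in decimal megabytes, counting only the weight values."""
    return num_params * bytes_per_weight / 1e6

print(model_size_mb(67e6))    # dense bidirectional baseline -> ~268 MB
print(model_size_mb(8.3e6))   # sparse 1760 model            -> ~33 MB (quoted as ~32 MB)
print(model_size_mb(16.7e6))  # sparse 3072 model            -> ~67 MB (quoted as ~64 MB)
print(model_size_mb(115e6))   # dense GRU model              -> ~460 MB
print(model_size_mb(13e6))    # sparse GRU model             -> ~52 MB (quoted as ~50 MB)
print(model_size_mb(13e6, bytes_per_weight=2))  # quantizing to float16 roughly halves it again
```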
# 6 DISCUSSION
6.1 PRUNING CHARACTERISTICS
Figure 2a shows the sparsity of all the recurrent layers with the same hyper-parameters used to prune the layers. The layers are ordered such that layer 1 is closest to the input and layer 14 is the final recurrent layer before the cost layer. We see that the initial layers are pruned more aggressively compared to the final layers. We also performed experiments where the hyper-parameters are different for the recurrent layers resulting in equal sparsity for all the layers. However, we get higher CER for these experiments. We conclude that to get good accuracy, it is important to prune the final layers slightly less than the initial ones.
| 1704.05119#24 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 25 | Table 11. Performance of PlaNet using the MobileNet architecture. Percentages are the fraction of the Im2GPS test dataset that were localized within a certain distance from the ground truth. The numbers for the original PlaNet model are based on an updated version that has an improved architecture and training dataset.
Scale                 Im2GPS [7]   PlaNet [35]   PlaNet MobileNet
Continent (2500 km)   51.9%        77.6%         79.3%
Country (750 km)      35.4%        64.0%         60.3%
Region (200 km)       32.1%        51.1%         45.2%
City (25 km)          21.9%        31.7%         31.7%
Street (1 km)         2.5%         11.0%         11.4%
# 4.3. Fine Grained Recognition
We train MobileNet for fine grained recognition on the Stanford Dogs dataset [17]. We extend the approach of [18] and collect an even larger but noisy training set than [18] from the web. We use the noisy web data to pretrain a fine grained dog recognition model and then fine tune the model on the Stanford Dogs training set. Results on Stanford Dogs test set are in Table 10. MobileNet can almost achieve the state of the art results from [18] at greatly reduced computation and size. | 1704.04861#25 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 25 | [Figure 2: (a) bar chart of per-layer sparsity across the recurrent layers; (b) number of weights pruned versus training iteration for a single layer.]
Figure 2: Pruning characteristics. Figure 2a plots sparsity of recurrent layers in the network with the same hyper-parameters used for pruning. Figure 2b plots the pruning schedule of a single layer during a training run.
In Figure 2b, we plot the pruning schedule of a 95% sparse recurrent layer of the bidirectional model trained for 20 epochs (55000 iterations). We begin pruning the network at the start of the second epoch at 2700 iterations. We stop pruning a layer after 10 epochs (half the total epochs) are complete at 27000 iterations. We see that nearly 25000 weights are pruned before 5 epochs are complete at around 15000 iterations. In our experiments, we've noticed that pruning schedules that are a convex curve tend to outperform schedules with a linear slope.
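A hypothetical sketch of such a schedule (ours; the constants below are illustrative, not the paper's hyper-parameters): the magnitude threshold is zero before iteration 2700, ramps during the pruning window, ramps more steeply in its second half (a convex curve rather than a single linear slope), and is frozen after iteration 27000.

```python
import numpy as np

START_ITR, RAMP_ITR, END_ITR = 2_700, 15_000, 27_000   # pruning window from the description above
THETA = 1e-4          # illustrative initial threshold slope (hypothetical value)
PHI = 1.5 * THETA     # steeper slope after RAMP_ITR, making the threshold curve convex

def threshold(it):
    """Magnitude threshold below which weights are zeroed at iteration `it`."""
    if it < START_ITR:
        return 0.0
    it = min(it, END_ITR)   # the threshold is frozen once pruning stops
    if it < RAMP_ITR:
        return THETA * (it - START_ITR)
    return THETA * (RAMP_ITR - START_ITR) + PHI * (it - RAMP_ITR)

def prune_step(weights, it):
    """Zero every weight whose magnitude is below the current threshold; pruned weights stay zero."""
    weights[np.abs(weights) < threshold(it)] = 0.0
    return weights
```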
6.2 PERSISTENT KERNELS | 1704.05119#25 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 26 | # 4.4. Large Scale Geolocalization
PlaNet [35] casts the task of determining where on earth a photo was taken as a classification problem. The approach divides the earth into a grid of geographic cells that serve as the target classes and trains a convolutional neural network on millions of geo-tagged photos. PlaNet has been shown to successfully localize a large variety of photos and to outperform Im2GPS [6, 7] that addresses the same task.
We re-train PlaNet using the MobileNet architecture on the same data. While the full PlaNet model based on the Inception V3 architecture [31] has 52 million parameters and 5.74 billion mult-adds, the MobileNet model has only 13 million parameters with the usual 3 million for the body and 10 million for the final layer and 0.58 Million mult-adds. As shown in Tab. 11, the MobileNet version delivers only slightly decreased performance compared to PlaNet despite being much more compact. Moreover, it still outperforms Im2GPS by a large margin.
# 4.5. Face Attributes | 1704.04861#26 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 26 | 6.2 PERSISTENT KERNELS
Persistent Recurrent Neural Networks (Diamos et al., 2016) is a technique that increases the computational intensity of evaluating an RNN by caching the weights in on-chip memory such as caches, block RAM, or register files across multiple timesteps. A high degree of sparsity allows significantly large Persistent RNNs to be stored in on-chip memory. When all the weights are stored in float16, an NVIDIA P100 GPU can support a vanilla RNN size of about 2600 hidden units. With the same datatype, at 90% sparsity, and 99% sparsity, a P100 can support RNNs with about 8000, and 24000 hidden units respectively. We expect these kernels to be bandwidth limited out of the memory that is used to store the parameters. This offers the potential of a 146x speedup compared to the TitanX GPU if the entire RNN layer can be stored in registers rather than the GPU DRAM of a TitanX. | 1704.05119#26 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 27 | # 4.5. Face Attributes
Another use-case for MobileNet is compressing large systems with unknown or esoteric training procedures. In a face attribute classification task, we demonstrate a synergistic relationship between MobileNet and distillation [9], a knowledge transfer technique for deep networks. We seek to reduce a large face attribute classifier with 75 million parameters and 1600 million Mult-Adds. The classifier is trained on a multi-attribute dataset similar to YFCC100M [32].
We distill a face attribute classifier using the MobileNet architecture. Distillation [9] works by training the classifier to emulate the outputs of a larger model2 instead of the ground-truth labels, hence enabling training from large (and potentially infinite) unlabeled datasets. Marrying the scalability of distillation training and the parsimonious parameterization of MobileNet, the end system not only requires no regularization (e.g. weight-decay and early-stopping), but also demonstrates enhanced performances. It is evident from Tab. 12 that the MobileNet-based classifier is resilient to aggressive model shrinking: it achieves a similar mean average precision across attributes (mean AP) as the in-house classifier while consuming only 1% of the Multi-Adds. | 1704.04861#27 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 27 | Additionally, sparse matrix multiplication involves scheduling and load balancing phases to divide the work up evenly over thousands of threads and to route corresponding weights and activations to individual threads. Since the sparsity patterns for RNNs are fixed over many timesteps, these scheduling and load balancing operations can be factored outside of the loop, performed once, and reused many times.
# 7 CONCLUSION AND FUTURE WORK
We have demonstrated that by pruning the weights of RNNs during training we can find sparse models that are more accurate than dense models while significantly reducing model size. These sparse models are especially suited for deployment on mobile devices and on back-end server farms due to their small size and increased computational efficiency. Even with existing sub-optimal sparse matrix-vector libraries we realize speed-ups with these models. This technique is orthogonal to quantization techniques which would allow for even further reductions in model size and corresponding increase in performance.
We wish to investigate whether these techniques can generalize to language modeling tasks and if they can effectively reduce the size of embedding layers. We also wish to compare the sparsity generated by our pruning technique to that obtained by L1 regularization.
| 1704.05119#27 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 28 | # 4.6. Object Detection
MobileNet can also be deployed as an effective base network in modern object detection systems. We report results for MobileNet trained for object detection on COCO data based on the recent work that won the 2016 COCO challenge [10]. In Table 13, MobileNet is compared to VGG and Inception V2 [13] under both the Faster-RCNN [23] and SSD [21] frameworks. In our experiments, SSD is evaluated with 300 input resolution (SSD 300) and Faster-RCNN is compared with both 300 and 600 input resolution (Faster-RCNN 300, Faster-RCNN 600). The Faster-RCNN model evaluates 300 RPN proposal boxes per image. The models are trained on COCO train+val excluding 8k minival images
2The emulation quality is measured by averaging the per-attribute cross-entropy over all attributes.
Table 12. Face attribute classification using the MobileNet architecture. Each row corresponds to a different hyper-parameter setting (width multiplier α and image resolution). Width Multiplier / Mean Million | 1704.04861#28 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 28 | We are investigating training techniques that don't require maintaining dense matrices for a significant portion of the calculation. Further work remains to implement optimal small batch sparse matrix-dense vector routines for GPUs and ARM processors that would help in deployment.
# ACKNOWLEDGMENTS
We would like to thank Bryan Catanzaro for helpful discussions related to this work.
# REFERENCES
Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015.
Sean Baxter. Moderngpu, 2016. URL https://nvlabs.github.io/moderngpu/segreduce.html.
Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. CoRR, abs/1512.04906, 2015a. URL http://arxiv.org/abs/1512.04906.
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 29 | Width Multiplier / Resolution   Mean AP   Million Mult-Adds   Million Parameters
1.0 MobileNet-224    88.7%   568    3.2
0.5 MobileNet-224    88.1%   149    0.8
0.25 MobileNet-224   87.2%   45     0.2
1.0 MobileNet-128    88.1%   185    3.2
0.5 MobileNet-128    87.7%   48     0.8
0.25 MobileNet-128   86.4%   15     0.2
Baseline             86.9%   1600   7.5
Table 13. COCO object detection results comparison using different frameworks and network architectures. mAP is reported with COCO primary challenge metric (AP at IoU=0.50:0.05:0.95)
Framework, Resolution   Model          mAP     Billion Mult-Adds   Million Parameters
SSD 300                 deeplab-VGG    21.1%   34.9                33.1
SSD 300                 Inception V2   22.0%   3.8                 13.7
SSD 300                 MobileNet      19.3%   1.2                 6.8
Faster-RCNN 300         VGG            22.9%   64.3                138.5
Faster-RCNN 300         Inception V2   15.4%   118.2               13.3
Faster-RCNN 300         MobileNet      16.4%   25.2                6.1
Faster-RCNN 600         VGG            25.7%   149.6               138.5
Faster-RCNN 600         Inception V2   21.9%   129.6               13.3
Faster-RCNN 600         MobileNet      19.8%   30.5                6.1 | 1704.04861#29 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |