id: string (length 12–15)
title: string (length 8–162)
content: string (length 1–17.6k)
prechunk_id: string (length 0–15)
postchunk_id: string (length 0–15)
arxiv_id: string (length 10)
references: list (length 1)
1610.04286#29
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Render for CNN: viewpoint estimation in images using CNNs trained with rendered 3d model views. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2686–2694, 2015. [11] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 97–105, 2015. [12] E. Tzeng, J. Hoffman, T. Darrell, and K.
1610.04286#28
1610.04286#30
1610.04286
[ "1606.04671" ]
1610.04286#30
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Saenko. Simultaneous deep transfer across domains and tasks. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 4068–4076, 2015. [13] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. CoRR, abs/1412.3474, 2014. URL http://arxiv.org/abs/1412.3474. [14] E. Tzeng, C. Devin, J. Hoffman, C. Finn, X. Peng, S. Levine, K. Saenko, and T.
1610.04286#29
1610.04286#31
1610.04286
[ "1606.04671" ]
1610.04286#31
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Darrell. Towards adapting deep visuomotor representations from simulated to real environments. CoRR, abs/1511.07111, 2015. URL http://arxiv.org/abs/1511.07111. [15] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–
1610.04286#30
1610.04286#32
1610.04286
[ "1606.04671" ]
1610.04286#32
Sim-to-Real Robot Learning from Pixels with Progressive Nets
35, 2016. [16] H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and M. Marchand. Domain-adversarial neural networks. CoRR, abs/1412.4446, 2014. URL http://arxiv.org/abs/1412.4446. [17] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan.
1610.04286#31
1610.04286#33
1610.04286
[ "1606.04671" ]
1610.04286#33
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Domain separation networks. In Advances in Neural Information Processing Systems, pages 343–351, 2016. [18] S. Barrett, M. E. Taylor, and P. Stone. Transfer learning for reinforcement learning on a physical robot. In Ninth International Conference on Autonomous Agents and Multiagent Systems - Adaptive Learning Agents Workshop (AAMAS - ALA), 2010. [19] S. James and E. Johns. 3D Simulation for Robot Arm Control with Deep Q-Learning. ArXiv e-prints, 2016. [20] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi.
1610.04286#32
1610.04286#34
1610.04286
[ "1606.04671" ]
1610.04286#34
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Target-driven visual navigation in indoor scenes using deep reinforcement learning. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 3357–3364. IEEE, 2017. [21] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016. [22] S. Levine, N. Wagener, and P. Abbeel.
1610.04286#33
1610.04286#35
1610.04286
[ "1606.04671" ]
1610.04286#35
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Learning contact-rich manipulation skills with guided policy search. In IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26-30 May, 2015, pages 156–163, 2015. [23] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In ICRA 2016, 2016. [24] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen.
1610.04286#34
1610.04286#36
1610.04286
[ "1606.04671" ]
1610.04286#36
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, page 0278364917710318, 2016. [25] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning.
1610.04286#35
1610.04286#37
1610.04286
[ "1606.04671" ]
1610.04286#37
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Nature, 518(7540):529–533, 2015. [26] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems IROS, 2012.
1610.04286#36
1610.04286
[ "1606.04671" ]
1610.03017#0
Fully Character-Level Neural Machine Translation without Explicit Segmentation
arXiv:1610.03017v3 [cs.CL] 13 Jun 2017
# Fully Character-Level Neural Machine Translation without Explicit Segmentation
# Jason Lee∗ ETH Zürich [email protected] Kyunghyun Cho New York University [email protected] # Thomas Hofmann ETH Zürich [email protected]
# Abstract
1610.03017#1
1610.03017
[ "1602.00367" ]
1610.03017#1
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
1610.03017#0
1610.03017#2
1610.03017
[ "1602.00367" ]
1610.03017#2
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# Introduction
Nearly all previous work in machine translation has been at the level of words. Aside from our intuitive understanding of word as a basic unit of meaning (Jackendoff, 1992), one reason behind this is that sequences are significantly longer when represented in characters, compounding the problem of data sparsity and modeling long-range dependencies. This has driven NMT research to be almost exclusively word-level (Bahdanau et al., 2015; Sutskever et al., 2015).
∗ The majority of this work was completed while the author was visiting New York University.
Despite their remarkable success, word-level NMT models suffer from several major weaknesses. For one, they are unable to model rare, out-of-vocabulary words, making them limited in translating languages with rich morphology such as Czech, Finnish and Turkish. If one uses a large vocabulary to combat this (Jean et al., 2015), the complexity of training and decoding grows linearly with respect to the target vocabulary size, leading to a vicious cycle.
To address this, we present a fully character-level NMT model that maps a character sequence in a source language to a character sequence in a target language. We show that our model outperforms a baseline with a subword-level encoder on DE-EN and CS-EN, and achieves a comparable result on FI-EN and RU-EN. A purely character-level NMT model with a basic encoder was proposed as a baseline by Luong and Manning (2016), but training it was prohibitively slow. We were able to train our model at a reasonable speed by drastically reducing the length of source sentence representation using a stack of convolutional, pooling and highway layers.
One advantage of character-level models is that they are better suited for multilingual translation than their word-level counterparts which require a separate word vocabulary for each language.
1610.03017#1
1610.03017#3
1610.03017
[ "1602.00367" ]
1610.03017#3
Fully Character-Level Neural Machine Translation without Explicit Segmentation
We verify this by training a single model to translate four languages (German, Czech, Finnish and Russian) to English. Our multilingual character-level model outperforms the subword-level baseline by a considerable margin in all four language pairs, strongly indicating that a character-level model is more flexible in assigning its capacity to different language pairs. Furthermore, we observe that our multilingual character-level translation even exceeds the quality of bilingual translation in three out of four language pairs, both in BLEU score metric and human evaluation. This demonstrates excellent parameter efficiency of character-level translation in a multilingual setting. We also showcase our model's ability to handle intra-sentence code-switching while performing language identification on the fly.
1610.03017#2
1610.03017#4
1610.03017
[ "1602.00367" ]
1610.03017#4
Fully Character-Level Neural Machine Translation without Explicit Segmentation
The contributions of this work are twofold: we empirically show that (1) we can train a character-to-character NMT model without any explicit segmentation; and (2) we can share a single character-level encoder across multiple languages to build a multilingual translation system without increasing the model size.
# 2 Background: Attentional Neural Machine Translation
Neural machine translation (NMT) is a recently proposed approach to machine translation that builds a single neural network which takes as an input a source sentence $X = (x_1, \ldots, x_{T_x})$ and generates its translation $Y = (y_1, \ldots, y_{T_y})$, where $x_t$ and $y_{t'}$ are source and target symbols (Bahdanau et al., 2015; Sutskever et al., 2015; Luong et al., 2015; Cho et al., 2014a). Attentional NMT models have three components: an encoder, a decoder and an attention mechanism.
Encoder Given a source sentence X, the encoder constructs a continuous representation that summarizes its meaning with a recurrent neural network (RNN). A bidirectional RNN is often implemented as proposed in (Bahdanau et al., 2015). A forward encoder reads the input sentence from left to right: $\overrightarrow{h}_t = \overrightarrow{f}_{\text{enc}}(E_x(x_t), \overrightarrow{h}_{t-1})$. Similarly, a backward encoder reads it from right to left: $\overleftarrow{h}_t = \overleftarrow{f}_{\text{enc}}(E_x(x_t), \overleftarrow{h}_{t+1})$, where $E_x$ is the source embedding lookup table, and $\overrightarrow{f}_{\text{enc}}$ and $\overleftarrow{f}_{\text{enc}}$ are recurrent activation functions such as long short-term memory units (LSTMs, (Hochreiter and Schmidhuber, 1997)) or gated recurrent units (GRUs, (Cho et al., 2014b)). The encoder constructs a set of continuous source sentence representations C by concatenating the forward and backward hidden states at each timestep: $C = \{h_1, \ldots, h_{T_x}\}$, where $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$.
Attention First introduced in (Bahdanau et al., 2015), the attention mechanism lets the decoder attend more to different source symbols for each target symbol. More concretely, it computes the context vector $c_{t'}$ at each decoding time step $t'$ as a weighted sum of the source hidden states: $c_{t'} = \sum_{t=1}^{T_x} \alpha_{t,t'} h_t$.
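To make the encoder description above concrete, the following is a minimal PyTorch-style sketch of a bidirectional GRU encoder that returns the concatenated forward/backward states C = {h_1, ..., h_Tx}. It is an illustration only, not the authors' released code; layer sizes and names are assumptions.

```python
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # E_x lookup table
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                          bidirectional=True)

    def forward(self, src_tokens):            # src_tokens: (batch, T_x) integer ids
        emb = self.embed(src_tokens)          # (batch, T_x, emb_dim)
        states, _ = self.rnn(emb)             # (batch, T_x, 2*hidden_dim) = [h_fwd; h_bwd]
        return states
```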
1610.03017#3
1610.03017#5
1610.03017
[ "1602.00367" ]
1610.03017#5
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Similarly to (Chung et al., 2016; Firat et al., 2016a), each attentional weight $\alpha_{t,t'}$ represents how relevant the t-th source token $x_t$ is to the t'-th target token $y_{t'}$, and is computed as:
$$\alpha_{t,t'} = \frac{1}{Z} \exp\big(\text{score}(E_y(y_{t'-1}), s_{t'-1}, h_t)\big), \quad (1)$$
where $Z = \sum_{k=1}^{T_x} \exp\big(\text{score}(E_y(y_{t'-1}), s_{t'-1}, h_k)\big)$ is the normalization constant. score() is a feed-forward neural network with a single hidden layer that scores how well the source symbol $x_t$ and the target symbol $y_{t'}$ match. $E_y$ is the target embedding lookup table and $s_{t'}$ is the target hidden state at time $t'$.
Decoder Given a source context vector $c_{t'}$, the decoder computes its hidden state at time $t'$ as:
1610.03017#4
1610.03017#6
1610.03017
[ "1602.00367" ]
1610.03017#6
Fully Character-Level Neural Machine Translation without Explicit Segmentation
$$s_{t'} = f_{\text{dec}}(E_y(y_{t'-1}), s_{t'-1}, c_{t'}).$$
Then, a parametric function $\text{out}_k(\cdot)$ returns the conditional probability of the next target symbol being k:
$$p(y_{t'} = k \mid y_{<t'}, X) = \frac{1}{Z} \exp\big(\text{out}_k(E_y(y_{t'-1}), s_{t'}, c_{t'})\big), \quad (2)$$
where Z is again the normalization constant: $Z = \sum_j \exp\big(\text{out}_j(E_y(y_{t'-1}), s_{t'}, c_{t'})\big)$.
Training The entire model can be trained end-to-end by minimizing the negative conditional log-likelihood, which is defined as:
$$\mathcal{L} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T_y^{(n)}} \log p\big(y_t = y_t^{(n)} \mid y_{<t}^{(n)}, X^{(n)}\big),$$
where N is the number of sentence pairs, and $X^{(n)}$ and $y_t^{(n)}$ are the source sentence and the t-th target symbol in the n-th pair, respectively.
# 3 Fully Character-Level Translation
# 3.1 Why Character-Level?
The benefits of character-level translation over word-level translation are well known. Chung et al. (2016) present three main arguments: character-level models (1) do not suffer from out-of-vocabulary issues, (2) are able to model different, rare morphological variants of a word, and (3) do not require segmentation. Particularly, text segmentation is highly non-trivial for many languages and problematic even for English, as word tokenizers are either manually designed or trained on a corpus using an objective function that is unrelated to the translation task at hand, which makes the overall system sub-optimal.
Here we present two additional arguments for character-level translation. First, a character-level translation system can easily be applied to a multilingual translation setting. Between European languages where the majority of alphabets overlaps, for instance, a character-level model may easily identify morphemes that are shared across different languages. A word-level model, however, will need a separate word vocabulary for each language, allowing no cross-lingual parameter sharing.
Also, by not segmenting source sentences into words, we no longer inject our knowledge of words and word boundaries into the system; instead, we encourage the model to discover an internal structure of a sentence by itself and learn how a sequence of symbols can be mapped to a continuous meaning representation.
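A hedged sketch of the attention mechanism of Eq. (1) and the weighted context vector follows. The single-hidden-layer scorer, the tanh nonlinearity and all dimensions are illustrative assumptions; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self, trg_emb_dim, dec_dim, enc_dim, att_dim=512):
        super().__init__()
        # score(): feed-forward network with a single hidden layer
        self.hidden = nn.Linear(trg_emb_dim + dec_dim + enc_dim, att_dim)
        self.score = nn.Linear(att_dim, 1)

    def forward(self, prev_emb, prev_state, enc_states):
        # prev_emb: (batch, trg_emb_dim) = E_y(y_{t'-1}); prev_state: (batch, dec_dim) = s_{t'-1}
        # enc_states: (batch, T_x, enc_dim) = source hidden states h_t
        T_x = enc_states.size(1)
        query = torch.cat([prev_emb, prev_state], dim=-1)
        query = query.unsqueeze(1).expand(-1, T_x, -1)          # repeat over source positions
        e = self.score(torch.tanh(self.hidden(torch.cat([query, enc_states], dim=-1))))
        alpha = torch.softmax(e.squeeze(-1), dim=-1)            # attention weights alpha_{t,t'}
        context = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)  # c_{t'} = sum_t alpha * h_t
        return context, alpha
```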
1610.03017#5
1610.03017#7
1610.03017
[ "1602.00367" ]
1610.03017#7
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# 3.2 Related Work
To address these limitations associated with word-level translation, a recent line of research has investigated using sub-word information.
Costa-Jussà and Fonollosa (2016) replaced the word-lookup table with convolutional and highway layers on top of character embeddings, while still segmenting source sentences into words. Target sentences were also segmented into words, and prediction was made at word-level.
Similarly, Ling et al. (2015) employed a bidirectional LSTM to compose character embeddings into word embeddings. At the target side, another LSTM takes the hidden state of the decoder and generates the target word, character by character. While this system is completely open-vocabulary, it also requires offline segmentation. Also, character-to-word and word-to-character LSTMs significantly slow down training.
Most recently, Luong and Manning (2016) proposed a hybrid scheme that consults character-level information whenever the model encounters an out-of-vocabulary word. As a baseline, they also implemented a purely character-level NMT model with 4 layers of unidirectional LSTMs with 512 cells, with attention over each character. Despite being extremely slow (approximately 3 months to train), the character-level model gave comparable performance to the word-level baseline. This shows the possibility of fully character-level translation.
Having a word-level decoder restricts the model to only being able to generate previously seen words. Sennrich et al. (2015) introduced a subword-level NMT model that is capable of open-vocabulary translation using subword-level segmentation based on the byte pair encoding (BPE) algorithm. Starting from a character vocabulary, the algorithm identifies frequent character n-grams in the training data and iteratively adds them to the vocabulary, ultimately giving a subword vocabulary which consists of words, subwords and characters. Once the segmentation rules have been learned, their model performs subword-to-subword translation (bpe2bpe) in the same way as word-to-word translation.
1610.03017#6
1610.03017#8
1610.03017
[ "1602.00367" ]
1610.03017#8
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Perhaps the work that is closest to our end goal is (Chung et al., 2016), which used a subword-level encoder from (Sennrich et al., 2015) and a fully character-level decoder (bpe2char). Their results show that character-level decoding performs better than subword-level decoding. Motivated by this work, we aim for fully character-level translation at both sides (char2char).
Outside NMT, our work is based on a few existing approaches that applied convolutional networks to text, most notably in text classification (Zhang et al., 2015; Xiao and Cho, 2016). Also, we drew inspiration for our multilingual models from previous work that showed the possibility of training a single recurrent model for multiple languages in domains other than translation (Tsvetkov et al., 2016; Gillick et al., 2015).
# 3.3 Challenges
Sentences are on average 6 (DE, CS and RU) to 8 (FI) times longer when represented in characters. This poses three major challenges to achieving fully character-level translation.
(1) Training/decoding latency For the decoder, although the sequence to be generated is much longer, each character-level softmax operation costs considerably less compared to a word- or subword-level softmax. Chung et al. (2016) report that character-level decoding is only 14% slower than subword-level decoding.
On the other hand, computational complexity of the attention mechanism grows quadratically with respect to the sentence length, as it needs to attend to every source token for every target token. This makes a naive character-level approach, such as in (Luong and Manning, 2016), computationally prohibitive. Consequently, reducing the length of the source sequence is key to ensuring reasonable speed in both training and decoding.
(2) Mapping character sequence to continuous representation The arbitrary relationship between the orthography of a word and its meaning is a well-known problem in linguistics (de Saussure, 1916). Building a character-level encoder is arguably a more difficult problem, as the encoder needs to learn a highly non-linear function from a long sequence of character symbols to a meaning representation.
1610.03017#7
1610.03017#9
1610.03017
[ "1602.00367" ]
1610.03017#9
Fully Character-Level Neural Machine Translation without Explicit Segmentation
(3) Long range dependencies in characters A character-level encoder needs to model dependencies over longer timespans than a word-level encoder does.
# 4 Fully Character-Level NMT
# 4.1 Encoder
We design an encoder that addresses all the challenges discussed above by using convolutional and pooling layers aggressively to both (1) drastically shorten the input sentence and (2) efficiently capture local regularities. Inspired by the character-level language model from (Kim et al., 2015), our encoder first reduces the source sentence length with a series of convolutional, pooling and highway layers. The shorter representation, instead of the full character sequence, is passed through a bidirectional GRU to (3) help it resolve long term dependencies. We illustrate the proposed encoder in Figure 1 and discuss each layer in detail below.
Embedding We map the sequence of source characters to a sequence of character embeddings of dimensionality $d_c$: $X = (C(x_1), \ldots, C(x_{T_x})) \in$
1610.03017#8
1610.03017#10
1610.03017
[ "1602.00367" ]
1610.03017#10
Fully Character-Level Neural Machine Translation without Explicit Segmentation
$\mathbb{R}^{d_c \times T_x}$, where $T_x$ is the number of source characters and C is the character embedding lookup table: $C \in \mathbb{R}^{d_c \times |C|}$.
Convolution One-dimensional convolution operation is then used along consecutive character embeddings. Assuming we have a single filter $f \in \mathbb{R}^{d_c \times w}$ of width w, we first apply padding to the beginning and the end of X, such that the padded sentence $X' \in \mathbb{R}^{d_c \times (T_x + w - 1)}$ is w − 1 symbols longer. We then apply narrow convolution between X' and f such that the k-th element of the output $Y_k$ is given as:
$$Y_k = (X' * f)_k = \sum_{i,j} \big(X'_{[:,\, k-w+1:k]} \otimes f\big)_{ij}, \quad (3)$$
where $\otimes$ denotes elementwise matrix multiplication and $*$ is the convolution operation. $X'_{[:,\, k-w+1:k]}$ is the sliced subset of X' that contains all the rows but only w adjacent columns. The padding scheme employed above, commonly known as half convolution, ensures that the length of the output is identical to the input's: $Y \in \mathbb{R}^{1 \times T_x}$.
We just illustrated how a single convolutional filter of fixed width might be applied to a sentence. In order to extract informative character patterns of different lengths, we employ a set of filters of varying widths. More concretely, we use a filter
1610.03017#9
1610.03017#11
1610.03017
[ "1602.00367" ]
1610.03017#11
Fully Character-Level Neural Machine Translation without Explicit Segmentation
bank $F = \{f_1, \ldots, f_m\}$ where $f_i \in \mathbb{R}^{d_c \times i \times n_i}$ is a collection of $n_i$ filters of width i. Our model uses m = 8, hence extracts character n-grams up to 8 characters long. Outputs from all the filters are stacked upon each other, giving a single representation $Y \in \mathbb{R}^{N \times T_x}$, where the dimensionality of each column is given by the total number of filters $N = \sum_{i=1}^{m} n_i$. Finally, rectified linear activation (ReLU) is applied elementwise to this representation.
[Figure 1: Encoder architecture schematics. The layers, from bottom to top: character embeddings, single-layer convolution with ReLU, max pooling with stride 5, segment embeddings, four-layer highway network, single-layer bidirectional GRU. Underscore denotes padding. A dotted vertical line delimits each segment. The stride of pooling s is 5 in the diagram.]
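Below is a hedged PyTorch-style sketch of the multi-width convolutional filter bank, ReLU and stride-s max-pooling just described (filter counts follow Table 1 for the bilingual char2char model). It is an illustration under stated assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharConvEncoder(nn.Module):
    def __init__(self, d_c=128, widths=range(1, 9),
                 n_filters=(200, 200, 250, 250, 300, 300, 300, 300), stride=5):
        super().__init__()
        self.widths = list(widths)
        self.convs = nn.ModuleList(
            nn.Conv1d(d_c, n, kernel_size=w) for w, n in zip(self.widths, n_filters))
        self.pool = nn.MaxPool1d(kernel_size=stride, stride=stride)

    def forward(self, x):                      # x: (batch, d_c, T_x) character embeddings
        feats = []
        for w, conv in zip(self.widths, self.convs):
            xp = F.pad(x, (w // 2, (w - 1) // 2))  # "half convolution": pad w-1, keep length T_x
            feats.append(conv(xp))
        y = torch.relu(torch.cat(feats, dim=1))    # (batch, N, T_x), N = sum of filter counts
        return self.pool(y)                        # segment embeddings: (batch, N, T_x // stride)
```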
1610.03017#10
1610.03017#12
1610.03017
[ "1602.00367" ]
1610.03017#12
Fully Character-Level Neural Machine Translation without Explicit Segmentation
at increased training time. We chose s = 5 in our experiments as it gives a reasonable balance between the two.
Highway network A sequence of segment embeddings from the max pooling layer is fed into a highway network (Srivastava et al., 2015). Highway networks are shown to significantly improve the quality of a character-level language model when used with convolutional layers (Kim et al., 2015). A highway network transforms input x with a gating mechanism that adaptively regulates information
1610.03017#11
1610.03017#13
1610.03017
[ "1602.00367" ]
1610.03017#13
Fully Character-Level Neural Machine Translation without Explicit Segmentation
flow:
Max pooling with stride The output from the convolutional layer is first split into segments of width s, and max-pooling over time is applied to each segment with no overlap. This procedure selects the most salient features to give a segment embedding. Each segment embedding is a summary of meaningful character n-grams occurring in a particular (overlapping) subsequence in the source sentence. Note that the rightmost segment (above "on") in Figure 1 may capture "son" (the filter in green) although "s"
1610.03017#12
1610.03017#14
1610.03017
[ "1602.00367" ]
1610.03017#14
Fully Character-Level Neural Machine Translation without Explicit Segmentation
occurs in the previous segment. In other words, our segments are overlapping as opposed to in word- or subword-level models with hard segmentation. Segments act as our internal linguistic unit from this layer and above: the attention mechanism, for instance, attends to each source segment instead of source character. This shortens the source representation s-fold: $Y' \in \mathbb{R}^{N \times (T_x / s)}$. Empirically, we found using smaller s leads to better performance. The highway transformation applied to each segment embedding is
$$y = g \odot \text{ReLU}(W_1 x + b_1) + (1 - g) \odot x, \quad \text{where } g = \sigma(W_2 x + b_2).$$
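A minimal sketch of one such highway layer is shown below; the paper stacks four of these over each segment embedding. Variable names are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """y = g * ReLU(W1 x + b1) + (1 - g) * x, with gate g = sigmoid(W2 x + b2)."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)   # W1, b1
        self.gate = nn.Linear(dim, dim)        # W2, b2

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))
        return g * torch.relu(self.transform(x)) + (1.0 - g) * x
```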
1610.03017#13
1610.03017#15
1610.03017
[ "1602.00367" ]
1610.03017#15
Fully Character-Level Neural Machine Translation without Explicit Segmentation
We apply this to each segment embedding individually.
Recurrent layer Finally, the output from the highway layer is given to a bidirectional GRU from §2, using each segment embedding as input.
Subword-level encoder Unlike a subword-level encoder, our model does not commit to a specific choice of segmentation; it is instead trained to consider every possible character pattern and extract only the most meaningful ones. Therefore, the definition of segmentation in our model is dynamic, unlike subword-level encoders. During training, the model finds the most salient character patterns in a sentence via max-pooling, and the character
1610.03017#14
1610.03017#16
1610.03017
[ "1602.00367" ]
1610.03017#16
Fully Character-Level Neural Machine Translation without Explicit Segmentation
| | bpe2char | char2char |
|---|---|---|
| Vocab size | 24,440 | 300 |
| Source emb. | 512 | 128 |
| Target emb. | 512 | 512 |
| Conv. filters | – | 200-200-250-250-300-300-300-300 |
| Pool stride | – | 5 |
| Highway | – | 4 layers |
| Encoder | 1-layer 512 GRUs | 1-layer 512 GRUs |
| Decoder | 2-layer 1024 GRUs | 2-layer 1024 GRUs |

Table 1: Bilingual model architectures. The char2char model uses 200 filters of width 1, 200 filters of width 2, · · · and 300 filters of width 8.
1610.03017#15
1610.03017#17
1610.03017
[ "1602.00367" ]
1610.03017#17
Fully Character-Level Neural Machine Translation without Explicit Segmentation
sequences extracted by the model change over the course of training. This is in contrast to how BPE segmentation rules are learned: the segmentation is learned and fixed before training begins.
# 4.2 Attention and Decoder
Similarly to the attention model in (Chung et al., 2016; Firat et al., 2016a), a single-layer feedforward network computes the attention score of the next target character to be generated with every source segment representation. A standard two-layer character-level decoder then takes the source context vector from the attention mechanism and predicts each target character. This decoder was described as base decoder by Chung et al. (2016).
1610.03017#16
1610.03017#18
1610.03017
[ "1602.00367" ]
1610.03017#18
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# 5 Experiment Settings
# 5.1 Task and Models
We evaluate the proposed character-to-character (char2char) translation model against subword-level baselines (bpe2bpe and bpe2char) on the WMT'15 DE→EN, CS→EN, FI→EN and RU→EN translation tasks.¹ We do not consider word-level models, as it has already been shown that subword-level models outperform them by mitigating issues inherent to closed-vocabulary translation (Sennrich et al., 2015; Sennrich et al., 2016). Indeed, subword-level NMT models have been the de-facto state-of-the-art and are now used in a very large-scale industry NMT system to serve millions of users per day (Wu et al., 2016).
¹http://www.statmt.org/wmt15/translation-task.html
We experiment in two different scenarios: 1) a bilingual setting where we train a model on data from a single language pair; and 2) a multilingual setting where the task is many-to-one translation: we train a single model on data from all four language pairs. Hence, our baselines and models are:
(a) bilingual bpe2bpe: from (Firat et al., 2016a).
(b) bilingual bpe2char: from (Chung et al., 2016).
(c) bilingual char2char
(d) multilingual bpe2char
(e) multilingual char2char
We train all the models ourselves other than (a), for which we report the results from (Firat et al., 2016a). We detail the configuration of our models in Table 1 and Table 2.
# 5.2 Datasets and Preprocessing
We use all available parallel data on the four language pairs from WMT'15: DE-EN, CS-EN, FI-EN and RU-EN. For the bpe2char baselines, we only use sentence pairs where the source is no longer than 50 subword symbols. For our char2char models, we only use pairs where the source sentence is no longer than 450 characters.
1610.03017#17
1610.03017#19
1610.03017
[ "1602.00367" ]
1610.03017#19
Fully Character-Level Neural Machine Translation without Explicit Segmentation
For all the language pairs apart from FI-EN, we use newstest-2013 as a development set and newstest-2014 and newstest-2015 as test sets. For FI-EN, we use newsdev-2015 and newstest-2015 as development and test sets respectively. We tokenize² each corpus using the script from Moses.³
When training bilingual bpe2char models, we extract 20,000 BPE operations from each of the source and target corpus using a script from (Sennrich et al., 2015). This gives a source BPE vocabulary of size 20k–24k for each language.
# 5.3 Training Details
Each model is trained using stochastic gradient descent and Adam (Kingma and Ba, 2014) with learning rate 0.0001 and minibatch size 64. Training continues until the BLEU score on the validation set
²This is unnecessary for char2char models, yet was carried out for comparison.
³https://github.com/moses-smt/mosesdecoder
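The paper extracts BPE rules with the script of Sennrich et al. (2015); as a rough illustration of the idea only, the toy sketch below learns merge operations by repeatedly merging the most frequent adjacent symbol pair. It is a simplification, not the actual script.

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    # word_freqs: dict mapping a word to its corpus frequency
    vocab = {tuple(word) + ('</w>',): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1]); i += 2
                else:
                    out.append(symbols[i]); i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

# e.g. learn_bpe({"lower": 5, "lowest": 2, "newer": 6}, num_merges=10)
```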
1610.03017#18
1610.03017#20
1610.03017
[ "1602.00367" ]
1610.03017#20
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# 5.3 Training Details Each model is trained using stochastic gradient de- scent and Adam (Kingma and Ba, 2014) with learn- ing rate 0.0001 and minibatch size 64. Training con- tinues until the BLEU score on the validation set 2This is unnecessary for char2char models, yet was carried out for comparison. 3https://github.com/moses-smt/mosesdecod er Vocab size Source emb. Target emb. Conv. ï¬
1610.03017#19
1610.03017#21
1610.03017
[ "1602.00367" ]
1610.03017#21
Fully Character-Level Neural Machine Translation without Explicit Segmentation
lters Pool stride Highway Encoder Decoder 54,544 512 512 400 128 512 200-250-300-300 -400-400-400-400 5 4 layers 1-layer 512 GRUs 2-layer 1024 GRUs Table 2: Multilingual model architectures. stops improving. The norm of the gradient is clipped with a threshold of 1 (Pascanu et al., 2013). All weights are initialized from a uniform distribution [â 0.01, 0.01]. Each model is trained on a single pre-2016 GTX Titan X GPU with 12GB RAM.
1610.03017#20
1610.03017#22
1610.03017
[ "1602.00367" ]
1610.03017#22
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# 5.4 Decoding Details As from (Chung et al., 2016), a two-layer unidirec- tional character-level decoder with 1024 GRU units is used for all our experiments. For decoding, we use beam search with length-normalization to penal- ize shorter hypotheses. The beam width is 20 for all models. # 5.5 Training Multilingual Models Task description We train a model on a many-to- one translation task to translate a sentence in any of the four languages (German, Czech, Finnish and Russian) to English. We do not provide a language identiï¬ er to the encoder, but merely the sentence itself, encouraging the model to perform language identiï¬ cation on the ï¬ y. In addition, by not providing the language identiï¬ er, we expect the model to handle intra-sentence code-switching seamlessly. Model architecture The multilingual char2char model uses slightly more convolutional ï¬ lters than the bilingual char2char model, namely (200-250- 300-300-400-400-400-400). Otherwise, the archi- tecture remains the same as shown in Table 1. By not changing the size of the encoder and the decoder, we ï¬ x the capacity of the core translation module, and only allow the multilingual model to detect more character patterns. Similarly, the multilingual bpe2char model has the same encoder and decoder as the bilingual bpe2char model, but a larger vocabulary. We learn 50,000 multilingual BPE operations on the multilingual corpus, resulting in 54,544 subwords.
1610.03017#21
1610.03017#23
1610.03017
[ "1602.00367" ]
1610.03017#23
Fully Character-Level Neural Machine Translation without Explicit Segmentation
See Table 2 for the exact conï¬ guration of our multilingual models. Data scheduling For the multilingual models, an appropriate scheduling of data from different lan- guages is crucial to avoid overï¬ tting to one language too soon. Following (Firat et al., 2016a; Firat et al., 2016b), each minibatch is balanced, in that the pro- portion of each language pair in a single minibatch corresponds to that of the full corpus. With this minibatch scheme, roughly the same number of up- dates is required to make one full pass over the entire training corpus of each language pair. Minibatches from all language pairs are combined and presented to the model as a single minibatch. See Table 3 for the minibatch size for each language pair. DE-EN CS-EN FI-EN RU-EN corpus size minibatch size 4.5m 14 12.1m 37 1.9m 6 2.3m 7 Table 3: The minibatch size of each language (second row) is proportionate to the number of sentence pairs in each corpus (ï¬ rst row). Treatment of Cyrillic To facilitate cross-lingual pa- rameter sharing, we convert every Cyrillic charac- ter in the Russian source corpus to Latin alphabet according to ISO-9. Table 4 shows an example of how this conversion may help the multilingual mod- els identify lexemes that are shared across multiple languages. school schools CS RU RU (ISO-9) Ë skoly Ñ ÐºÐ¾Ð»Ð° Ñ ÐºÐ¾Ð»Ñ Ë skoly Ë skola Ë skola
1610.03017#22
1610.03017#24
1610.03017
[ "1602.00367" ]
1610.03017#24
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Table 4: Czech and Russian words for school and schools, alongside the conversion of Russian characters into Latin. Multilingual BPE For the multilingual bpe2char model, multilingual BPE segmentation rules are extracted from a large dataset containing training source corpora of all the language pairs. To ensure the BPE rules are not biased towards one language, Setting Src Trg Dev Test1 Test2 DE-EN CS-EN FI-EN RU-EN (a)â (b) (c) (d) (e) (f)â (g) (h) (i) (j) (k)â (l) (m) (n) (o) (p)â (q) (r) (s) (t) bi bi bi multi multi bi bi bi multi multi bi bi bi multi multi bi bi bi multi multi bpe bpe char bpe char bpe bpe char bpe char bpe bpe char bpe char bpe bpe char bpe char bpe char char char char bpe char char char char bpe char char char char bpe char char char char 24.13 25.64 26.30 24.92 25.67 21.24 22.95 23.38 23.27 24.09 13.15 14.54 14.18 14.70 15.96 21.04 21.68 21.75 21.75 22.20 24.59 25.77 24.54 25.13 23.78 24.08 24.27 25.01 26.21 26.80 26.31 26.33 24.00 25.27 25.83 25.23 25.79 20.32 22.40 22.46 22.42 23.24 12.24 13.98 13.10 14.40 15.74 22.44 22.83 22.73 22.81 23.33 Table 5: BLEU scores of ï¬ ve different models on four language pairs. For each test or development set, the best performing model is shown in bold. (â ) results are taken from (Firat et al., 2016a).
1610.03017#23
1610.03017#25
1610.03017
[ "1602.00367" ]
1610.03017#25
Fully Character-Level Neural Machine Translation without Explicit Segmentation
larger datasets such as Czech and German corpora are trimmed such that every corpus contains an approximately equal number of characters. # 6 Quantitative Analysis # 6.1 Evaluation with BLEU Score In this section, we ï¬ rst establish our main hypothe- ses for introducing character-level and multilingual models, and investigate whether our observations support or disagree with our hypotheses. From our (1) if fully empirical results, we want to verify: character-level translation outperforms subword- level translation, (2) in which setting and to what extent is multilingual translation beneï¬ cial and (3) if multilingual, character-level translation achieves superior performance to other models. We outline our results with respect to each hypothesis below. subword-level In a bilin- (1) Character- vs. gual setting, the char2char model outperforms both subword-level baselines on DE-EN (Table 5 (a-c)) and CS-EN (Table 5 (f-h)). On the other two language pairs, it exceeds the bpe2bpe model and achieves similar performance with the bpe2char baseline (Table 5 (k-m) and (p-r)). We conclude that the proposed character-level model is comparable to or better than both subword-level baselines. the Meanwhile, character-level surpasses the subword-level encoder consistently in all the language pairs (Table 5 (d-e), (i-j), (n-o) and (s-t)). From this, we conclude that translating at the level of characters allows the model to discover shared constructs between languages more effectively. This also demonstrates that the character-level model is more ï¬ exible in assigning model capacity to different language pairs. (2) Multilingual vs. bilingual At the level of char- acters, we note that multilingual translation is indeed strongly beneï¬ cial. On the test sets, the multilin- gual character-level model outperforms the single- pair character-level model by 2.64 BLEU in FI-EN (Table 5 (m, o)) and 0.78 BLEU in CS-EN (Ta- ble 5 (h, j)), while achieving comparable results on DE-EN and RU-EN. At the level of subwords, on the other hand, we do not observe the same degree of performance beneï¬ t from multilingual translation.
1610.03017#24
1610.03017#26
1610.03017
[ "1602.00367" ]
1610.03017#26
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Also, the multilingual bpe2char model requires much more updates to reach the performance of the bilingual Adequacy Fluency Setting Src Trg Raw (%) Stnd. (Ï ) Raw (%) Stnd. (Ï ) DE-EN bi (a) (b) bi (c) multi bpe char char char char char 65.47 68.11 67.80 -0.0536 0.0509 0.0281 68.64 68.80 68.92 0.0052 0.0468 0.0282 CS-EN bi (d) (e) bi (f) multi bpe char char char char char 62.76 60.78 63.03 0.0361 -0.0154 0.0415 61.62 63.37 65.08 -0.0285 0.0410 0.1047 FI-EN (g) (h) (i) bi bi multi bpe char char char char char 47.03 50.17 50.95 -0.1326 -0.0650 -0.0110 59.33 59.97 63.26 -0.0329 -0.0216 0.0969 RU-EN (j) (k) (l) bi bi multi bpe char char char char char 61.26 64.06 64.77 -0.1062 0.0105 0.0116 57.74 59.85 63.32 -0.0592 0.0168 0.1748 Table 6: Human evaluation results for adequacy and ï¬ uency. We present both the averaged raw scores (Raw) and the averaged standardized scores (Stnd.). Standardized adequacy is used to rank the systems and standardized ï¬ uency is used to break ties. A positive standardized score should be interpreted as the number of standard deviations above this particular workerâ s mean score that this system scored on average. For each language pair, we boldface the best performing model with statistical signiï¬
1610.03017#25
1610.03017#27
1610.03017
[ "1602.00367" ]
1610.03017#27
Fully Character-Level Neural Machine Translation without Explicit Segmentation
cance. When there is a tie, we boldface both systems. This suggests bpe2char model (see Figure 2). that learning useful subword segmentation across languages is difï¬ cult. (3) Multilingual char2char vs. others The mul- tilingual char2char model is the best performer in CS-EN, FI-EN and RU-EN (Table 5 (j, o, t)), and is the runner-up in DE-EN (Table 5 (e)). The fact that the multilingual char2char model outperforms the single-pair models goes to show the parameter efï¬ ciency of character-level translation: instead of training N separate models for N language pairs, it is possible to get better performance with a single multilingual character-level model. Approximately 1k turkers assessed a single test set (3k sentences in newstest-2014) for each system and language pair. Each turker conducted a mini- mum of 100 assessments for quality control, and the set of scores generated by each turker was standard- ized to remove any bias in the individualâ s scoring strategy. We consider three models (bilingual bpe2char, bilingual char2char and multilingual char2char) for the human evaluation. We leave out the multilingual bpe2char model to minimize the number of similar systems to improve the interpretability of the evalu- ation overall. # 6.2 Human Evaluation It is well known that automatic evaluation met- rics such as BLEU encourage reference-like transla- tions and do not fully capture true translation qual- ity (Callison-Burch, 2009; Graham et al., 2015). Therefore, we also carry out a recently proposed evaluation from (Graham et al., 2016) where we have human assessors rate both (1) adequacy and (2) ï¬ uency of each system translation on a scale from 0 to 100 via Amazon Mechanical Turk. Adequacy is the degree to which assessors agree that the system translation expresses the meaning of the reference translation. Fluency is evaluated using system trans- lation alone without any reference translation. For DE-EN, we observe that the multilingual char2char and bilingual char2char models are tied with respect to both adequacy and ï¬
1610.03017#26
1610.03017#28
1610.03017
[ "1602.00367" ]
1610.03017#28
Fully Character-Level Neural Machine Translation without Explicit Segmentation
uency (Ta- ble 6 (b-c)). For CS-EN, the multilingual char2char and bilingual bpe2char models ared tied for ade- quacy. However, the multilingual char2char model yields signiï¬ cantly better ï¬ uency (Table 6 (d, f)). For FI-EN and RU-EN, the multilingual char2char model is tied with the bilingual char2char model with respect to adequacy, but signiï¬ cantly outper- forms all other models in ï¬ uency (Table 6 (g-i, j-l)). Overall, the improvement in translation quality yielded by the multilingual character-level model mainly comes from ï¬ uency. We conjecture that be- cause the English decoder of the multilingual model is tuned on all the training sentence pairs, it becomes (a) Spelling mistakes DE ori DE src EN ref bpe2char char2char Why should we not be friends ? Warum sollten wir nicht Freunde sei ? Warum solltne wir nich Freunde sei ? Why should not we be friends ? Why are we to be friends ?
1610.03017#27
1610.03017#29
1610.03017
[ "1602.00367" ]
1610.03017#29
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# (b) Rare words
DE src: Siebentausendzweihundertvierundfünfzig .
EN ref: Seven thousand two hundred fifty four .
bpe2char: Fifty-five Decline of the Seventy .
char2char: Seven thousand hundred thousand fifties .
# (c) Morphology
DE src: Die Zufahrtsstraßen wurden gesperrt , wodurch sich laut CNN lange Rückstaus bildeten .
EN ref: The access roads were blocked off , which , according to CNN , caused long tailbacks .
bpe2char: The access roads were locked , which , according to CNN , was long back .
char2char: The access roads were blocked , which looked long backwards , according to CNN .
1610.03017#28
1610.03017#30
1610.03017
[ "1602.00367" ]
1610.03017#30
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# (d) Nonce words
DE src: Der Test ist nun über , aber ich habe keine gute Note . Es ist wie eine Verschlimmbesserung .
EN ref: The test is now over , but i don't have any good grade . it is like a worsened improvement .
bpe2char: The test is now over , but i do not have a good note .
char2char: The test is now , but i have no good note , it is like a worsening improvement .
1610.03017#29
1610.03017#31
1610.03017
[ "1602.00367" ]
1610.03017#31
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# (e) Multilingual
src: Bei der Metropolitního výboru pro dopravu für das Gebiet der San Francisco Bay erklärten Beamte , der Kongress könne das Problem банкротство доверительного Фонда строительства шоссейных дорог einfach durch Erhöhung der Kraftstoffsteuer lösen .
1610.03017#30
1610.03017#32
1610.03017
[ "1602.00367" ]
1610.03017#32
Fully Character-Level Neural Machine Translation without Explicit Segmentation
ref: At the Metropolitan Transportation Commission in the San Francisco Bay Area , officials say Congress could very simply deal with the bankrupt Highway Trust Fund by raising gas taxes .
bpe2char: During the Metropolitan Committee on Transport for San Francisco Bay , officials declared that Congress could solve the problem of bankruptcy by increasing the fuel tax bankrupt .
char2char: At the Metropolitan Committee on Transport for the territory of San Francisco Bay , officials explained that the Congress could simply solve the problem of the bankruptcy of the Road Construction Fund by increasing the fuel tax .
1610.03017#31
1610.03017#33
1610.03017
[ "1602.00367" ]
1610.03017#33
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Table 7: Sample translations. For each example, we show the source sentence as src, the human translation as ref, and the translations from the subword-level baseline and our character-level model as bpe2char and char2char, respectively. For (a), the original, uncorrupted source sentence is also shown (ori). The source sentence in (e) contains words in German (in green), Czech (in yellow) and Russian (in blue). The translations in (a-d) are from the bilingual models, whereas those in (e) are from the multilingual models.
a better language model than a bilingual model's decoder. We leave it for future work to confirm if this is indeed the case.
1610.03017#32
1610.03017#34
1610.03017
[ "1602.00367" ]
1610.03017#34
Fully Character-Level Neural Machine Translation without Explicit Segmentation
sample translations from the character-level model with those from the subword-level model, which al- ready sidesteps some of the issues associated with word-level translation. # 7 Qualitative Analysis In Table 7, we demonstrate our character-level modelâ s robustness in four translation scenarios that conventional NMT systems are known to suffer in. We also showcase our modelâ s ability to seamlessly handle intra-sentence code-switching, or mixed ut- terances from two or more languages. We compare With real-world text containing typos and spelling mistakes, the quality of word-based translation would severely drop, as every non-canonical form of a word cannot be represented. On the other hand, a character-level model has a much better chance recovering the original word or sentence. Indeed, our char2char model is robust against a few spelling mistakes (Table 7 (a)). Given a long, rare word such as â Sieben- tausendzweihundertvierundf¨unfzigâ (seven thou- sand two hundred ï¬ fty four) in Table 7 (b), the subword-level model segments â Siebentausendâ as (Sieb, ent, aus, end), which results in an inaccurate translation.
1610.03017#33
1610.03017#35
1610.03017
[ "1602.00367" ]
1610.03017#35
Fully Character-Level Neural Machine Translation without Explicit Segmentation
The character-level model performs bet- ter on these long, concatenative words with ambigu- ous segmentation. Also, we expect a character-level model to han- dle novel and unseen morphological inï¬ ections well. We observe that this is indeed the case, as our char2char model correctly understands â gesperrtâ , a past participle form of â sperrenâ (to block) (Ta- ble 7 (c)). Nonce words are terms coined for a single use. They are not actual words but are constructed in a way that humans can intuitively guess what they mean, such as workoliday and friyay. We construct a few DE-EN sentence pairs that contain German nonce words (one example shown in Table 7 (d)), and observe that the character-level model can in- deed detect salient character patterns and arrive at a correct translation. Finally, we evaluate our multilingual modelsâ ca- pacity to perform intra-sentence code-switching, by giving them as input mixed sentences from multiple languages. The newstest-2013 development datasets for DE-EN, CS-EN and FI-EN contain intersecting examples with the same English sentences. We com- pile a list of these sentences in DE/CS/FI and their translation in EN, and choose a few samples uni- formly at random from the English side. Words or clauses from different languages are manually inter- mixed to create multilingual sentences. We discover that when given sentences with high degree of language intermixing, as in Table 7 (e), the multilingual bpe2char model fails to seamlessly handle alternation of languages. Overall, however, both multilingual models generate reasonable trans- lations. This is possible because we did not provide a language identiï¬ er when training our multilingual models; as a result, they learned to understand a multilingual sentence and translate it into a coherent English sentence. We show supplementary sample translations in each scenario on a webpage.4 4https://sites.google.com/site/dl4mtc2c
1610.03017#34
1610.03017#36
1610.03017
[ "1602.00367" ]
1610.03017#36
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Training and decoding speed On a single Titan X GPU, we observe that our char2char models are approximately 35% slower to train than our bpe2char baselines when the same batch size was used. Our bilingual character-level models can be trained in roughly two weeks. We further note that the bilingual bpe2char model can translate 3,000 sentences in 66.63 minutes while the bilingual char2char model requires 71.71 minutes (online, not in batch). See Table 8 for the exact details.

| Model | Time to execute 1k updates (s) | Batch size | Time to decode 3k sentences (m) |
|---|---|---|---|
| bpe2char | 2461.72 | 128 | 66.63 |
| char2char | 2371.93 | 64 | 71.71 |
| Multi bpe2char | 1646.37 | 64 | 68.99 |
| Multi char2char | 2514.23 | 64 | 72.33 |

Table 8: Speed comparison. The second column shows the time taken to execute 1,000 training updates. The model makes each update after having seen one minibatch.
1610.03017#35
1610.03017#37
1610.03017
[ "1602.00367" ]
1610.03017#37
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Further observations We also note that the multilingual models are less prone to overfitting than the bilingual models. This is particularly visible for low-resource language pairs such as FI-EN. Figure 2 shows the evolution of the FI-EN validation BLEU scores where the bilingual models overfit rapidly but the multilingual models seem to regularize learning by training simultaneously on other language pairs.
[Figure 2 plot: BLEU on FI-EN newstest-2013 versus number of updates (k), with curves for bi-bpe2char, bi-char2char, multi-bpe2char and multi-char2char.]
Figure 2: Multilingual models overfit less than bilingual models on low-resource language pairs.
1610.03017#36
1610.03017#38
1610.03017
[ "1602.00367" ]
1610.03017#38
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# 8 Conclusion
We propose a fully character-level NMT model that accepts a sequence of characters in the source language and outputs a sequence of characters in the target language. What is remarkable about this model is the absence of explicitly hard-coded knowledge of words and their boundaries, and that the model learns these concepts from a translation task alone.
The fully character-level model performs as well as, or better than, subword-level translation models. The performance gain is distinctly pronounced in the multilingual many-to-one translation task, where results show that the character-level model can assign model capacities to different languages more efficiently than the subword-level models. We observe a particularly large improvement in FI-EN translation when the model is trained to translate multiple languages, indicating positive cross-lingual transfer to a low-resource language pair.
We discover two main benefits of the multilingual character-level model: (1) it is much more parameter efficient than the bilingual models and (2) it can naturally handle intra-sentence code-switching as a result of the many-to-one translation task. Ultimately, we present a case for fully character-level translation: that translation at the level of characters is strongly beneficial and should be encouraged more.
1610.03017#37
1610.03017#39
1610.03017
[ "1602.00367" ]
1610.03017#39
Fully Character-Level Neural Machine Translation without Explicit Segmentation
The repository https://github.com/nyu-dl/dl4mt-c2c contains the source code and pre-trained models for reproducing the experimental results.
In the next stage of this research, we will investigate extending our multilingual many-to-one translation models to perform many-to-many translation, which will allow the decoder, similarly with the encoder, to learn from multiple target languages. Furthermore, a more thorough investigation into model architectures and hyperparameters is needed.
1610.03017#38
1610.03017#40
1610.03017
[ "1602.00367" ]
1610.03017#40
Fully Character-Level Neural Machine Translation without Explicit Segmentation
# Acknowledgements
KC thanks the support by eBay, Facebook, Google (Google Faculty Award 2016) and NVidia (NVIDIA AI Lab 2016-2019). This work was partly supported by Samsung Advanced Institute of Technology (Deep Learning). JL was supported by Qualcomm Innovation Fellowship, and thanks David Yenicelik and Kevin Wallimann for their contribution in designing the qualitative analysis. The authors would like to thank Prof. Zheng Zhang (NYU Shanghai) for fruitful discussion and comments, as well as Yvette Graham for her help with the human evaluation.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR).
1610.03017#39
1610.03017#41
1610.03017
[ "1602.00367" ]
1610.03017#41
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of the 8th Workshop on Syntax, Semantics, and Structure in Statistical Translation, page 103. Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empirical Methods in Natural Language Processing. Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
1610.03017#40
1610.03017#42
1610.03017
[ "1602.00367" ]
1610.03017#42
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Marta R. Costa-Jussà and José A. R. Fonollosa. 2016. Character-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, page 357. Ferdinand de Saussure. 1916. Course in General Linguistics. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics.
1610.03017#41
1610.03017#43
1610.03017
[ "1602.00367" ]
1610.03017#43
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics. Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies, Denver, Colorado.
1610.03017#42
1610.03017#44
1610.03017
[ "1602.00367" ]
1610.03017#44
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2016. Can machine translation systems be evaluated by the crowd alone. Natural Language Engineering, FirstView. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Ray S. Jackendoff. 1992. Semantic Structures, volume 18. MIT press. Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics.
1610.03017#43
1610.03017#45
1610.03017
[ "1602.00367" ]
1610.03017#45
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-aware neural language models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations (ICLR). Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W.
1610.03017#44
1610.03017#46
1610.03017
[ "1602.00367" ]
1610.03017#46
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Black. 2015. Character-based neural machine translation. arXiv preprint arXiv:1511.04586. Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013.
1610.03017#45
1610.03017#47
1610.03017
[ "1602.00367" ]
1610.03017#47
Fully Character-Level Neural Machine Translation without Explicit Segmentation
On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML). Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for WMT 16. Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2015. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28.
1610.03017#46
1610.03017#48
1610.03017
[ "1602.00367" ]
1610.03017#48
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W. Black, Lori Levin, and Chris Dyer. 2016. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016.
1610.03017#47
1610.03017#49
1610.03017
[ "1602.00367" ]
1610.03017#49
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28.
1610.03017#48
1610.03017
[ "1602.00367" ]
1610.02850#0
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
# Impatient DNNs – Deep Neural Networks with Dynamic Time Budgets # Manuel Amthor [email protected] # Erik Rodner [email protected] Computer Vision Group Friedrich Schiller University Jena Germany www.inf-cv.uni-jena.de # Joachim Denzler [email protected] # Abstract
1610.02850#1
1610.02850
[ "1502.03167" ]
1610.02850#1
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e. a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
1610.02850#0
1610.02850#2
1610.02850
[ "1502.03167" ]
1610.02850#2
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
# Introduction Deep and especially convolutional neural networks are the current base for the majority of state-of-the-art approaches in vision. Their ability to learn very effective representations of visual data has led to several breakthroughs in important applications, such as scene understanding for autonomous driving [1], object detection [6], and robotics [4]. The main obstacle for their application is still the computational cost during prediction for a new test image. Many previous works have focused on speeding up DNN inference in general, achieving constant speed-ups for a certain loss in prediction accuracy [10, 16]. In contrast, we focus on inference with dynamic time budgets. Our networks provide a series of predictions with increasing computational cost and accuracy. This allows for (1) dynamic interruption of the prediction in time-critical applications (anytime ability, Figure 1 left), or for (2) predictions with a dynamic time budget individually given for each test image a-priori (Figure 1 right). Dynamic budget approaches can for example deal with varying energy resources, a property especially useful for real-time visual inference in robotics [17]. Furthermore, early predictions allow for immediate action selection in reinforcement learning scenarios [21].
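To make the two usage modes concrete, the following Python sketch illustrates interruptible (anytime) inference and inference under an a-priori budget. It is only an illustration of the setting under our own assumptions: the `stages`, `compute_features`, `predict`, and `stage_costs` objects are hypothetical stand-ins for the per-stage computations and early prediction heads described later in Sect. 3, not code from the paper.

```python
import time

def anytime_predict(stages, x, deadline):
    """Run the network stage by stage and keep the latest early prediction.

    `stages` is a non-empty list of (compute_features, predict) callables, one
    per early prediction layer; `deadline` is an absolute wall-clock time at
    which the caller may interrupt inference.
    """
    estimate = None
    for compute_features, predict in stages:
        x = compute_features(x)      # shared computation reused by later stages
        estimate = predict(x)        # cheap early prediction head
        if time.monotonic() >= deadline:
            break                    # interrupted: return the latest estimate
    return estimate

def budget_predict(stages, x, budget_seconds, stage_costs):
    """A-priori budget: run up to the deepest stage whose cumulative cost fits."""
    k, total = 0, 0.0
    for cost in stage_costs:
        if total + cost > budget_seconds:
            break
        total += cost
        k += 1
    estimate = None
    for compute_features, predict in stages[:max(k, 1)]:
        x = compute_features(x)
        estimate = predict(x)
    return estimate
```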
1610.02850#1
1610.02850#3
1610.02850
[ "1502.03167" ]
1610.02850#3
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
© 2016. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. [Figure 1 diagram: left, an interruptable CNN (anytime ability); right, a CNN with a dynamic budget given a-priori.] Figure 1:
1610.02850#2
1610.02850#4
1610.02850
[ "1502.03167" ]
1610.02850#4
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Illustration of convolutional neural network prediction in dynamic budget scenarios: (left) prediction can be interrupted at any time or (right) the budget is given before each prediction. The main idea of our approach is to formulate the learning of dynamic budget predictors as a generalized risk minimization that involves the distribution of budgets provided for the application. The distribution of possible budgets has been either previously neglected or assumed to be uniform [12]. However, we show that such easily available prior information can significantly help to improve the expected accuracy. Our formulation leads to a straight-forward modification of convolutional neural network (CNN) architectures and their training. In particular, we add several early prediction and loss layers along the standard processing pipeline of a DNN (Figure 1 and Figure 2). According to our risk minimization framework for dynamic budget predictors, all of these layers need to be learned jointly with a weighted combination derived from a time-budget distribution. Whereas this strategy is directly related to DNN learning strategies, such as deep supervision [24] and inception architectures [23], we demonstrate its usefulness for adapting to varying resources during testing.
1610.02850#3
1610.02850#5
1610.02850
[ "1502.03167" ]
1610.02850#5
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
The paper is structured as follows. After discussing related work, we define dynamic budget predictors and derive a new learning framework based on risk minimization with budget distributions (Sect. 2). Our framework can be directly applied to deep and especially convolutional neural networks as described in Sect. 3. Experiments in Sect. 4 show the advantages of our approach for different architectures, datasets, and budget distributions. Related work on anytime prediction The work of Karayev et al. [12] presented an approach that iteratively and dynamically selects feature representations to maximize the area above an entropy vs. cost curve. Our approach however focuses on a static order of predictors and is able to incorporate budget distributions expected for the application. Fröhlich et al. [5] proposed a semantic segmentation approach with anytime classification capability. Their method is based on random decision forests learned in a layer-wise fashion. Xu et al. [26] consider anytime classification with unknown budgets by combining a cost-sensitive support vector machine with feature learning. Similar to [5], their predictors are learned in a greedy fashion and not learned jointly as in our case. Learning all of the predictors with shared parameters jointly allows us to share computations while directly optimizing with respect to expected accuracy during training. The paper of [25] presents an algorithm for
1610.02850#4
1610.02850#6
1610.02850
[ "1502.03167" ]
1610.02850#6
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
learning tree ensembles with a constrained time budget available during training. In our case, the whole distribution of budgets is given during training. Related work on deep supervision and DNNs with multiple losses There are multiple methods that use a similar architecture of deep neural networks to ours, characterized by multiple loss layers and joint training of them. For example, [24] refers to such a training strategy as "deep supervision" and shows that it allows for training deeper networks in a robust fashion. A very similar technique has been used in [7] for improved scene recognition. Furthermore, multiple loss layers are often used for multi-task learning, where the goal is to jointly predict various outputs [27]. In contrast to these works, our paper focuses on the impact of such an architecture on the ability of DNNs to deal with dynamic time budgets during inference. Furthermore, we show that such an architectural design can be directly derived from a very general risk minimization framework for predictors with dynamic budgets. Related work on speeding up convolutional neural networks There are multiple works that focus on speeding up DNNs and the special case of convolutional neural networks (CNNs). Applied and adapted techniques range from low-rank approximations [2, 6, 10] to FFT computations of the involved convolutions [19]. The Fast R-CNN method of [6] speeds up fully-connected layers by simple SVD approximation. Similar techniques have been presented by [2] and [10]. The paper of [8] provides an empirical study of the effects of CNN architectural design choices on the computation time and the achieved recognition performance. A straightforward technique to speed up convolutions with large filter sizes uses Fast Fourier Transforms as studied by [19]. Furthermore, efficient filtering techniques, such as the Winograd transformation [14], are applicable as well. Our approach also tries to speed up inference of deep neural networks, i.e. a forward pass. However, instead of approximating different operations performed in single layers, we achieve a significant speed-up by allowing the algorithm to deal with dynamic time budgets. Therefore, our research is orthogonal to the one briefly
1610.02850#5
1610.02850#7
1610.02850
[ "1502.03167" ]
1610.02850#7
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
described and combining them is straightforward. # 2 Learning Dynamic Budget Predictors In this section, we derive a simple yet powerful learning scheme for dynamic budget predictors. Without loss of generality, we focus on time budgets in the following. Specification of dynamic budgets An important challenge for dynamic budget approaches is that the budget available for inference during testing is not known during training and, for anytime scenarios, not even known during inference itself. For anytime tasks, we need to learn algorithms that can be interrupted at several time steps and balance the trade-off between calculating direct predictions of an output y for an example x or calculating intermediate outputs that help later on for further refinements of the predictions. This trade-off is, without any further specification, ill-posed. However, in many applications, we know something about the distribution p(t | x, y) of time budgets t available to the algorithm for a given input-output pair (x, y). In the following, we assume that this distribution is either given or can be modeled for an application.
1610.02850#6
1610.02850#8
1610.02850
[ "1502.03167" ]
1610.02850#8
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Risk minimization with budget distributions In the following, we develop a framework for learning dynamic budget predictors using risk minimization. We consider inference algorithms f that provide predictions y ∈ Y for input examples x ∈ X at different times t ∈ R, i.e. we have f : X × R → Y. Learning the parameters θ of f is done by minimizing the following regularized risk:

$$\operatorname*{argmin}_{\theta} \int_{t \in \mathbb{R}} \int_{y \in \mathcal{Y}} \int_{x \in \mathcal{X}} L\left(f(x,t;\theta), y\right) \cdot p(x,y,t)\, dx\, dy\, dt \; + \; R(\theta) \qquad (1)$$

with L being a suitable loss function, R(θ) being a regularization term, and p(x, y, t) being the joint distribution of an input-output pair (x, y) and the available time t. This formulation does not require any differentiation between a-priori given budget or anytime scenarios. We further assume that the time available is independent of the actual example and its label.
1610.02850#7
1610.02850#9
1610.02850
[ "1502.03167" ]
1610.02850#9
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
This is a reasonable assumption, since the available time is in most applications just based on a limitation of hardware or data transfer resources. Since we are given a training set D = {(x_i, y_i)}_{i=1}^{n}, the objective becomes:

$$\operatorname*{argmin}_{\theta} \int_{t \in \mathbb{R}} \sum_{i=1}^{n} L\left(f(x_i,t;\theta), y_i\right) \cdot p(t)\, dt \; + \; R(\theta) \qquad (2)$$

The predictor f is an algorithm performing a finite sequence of atomic operations. Therefore, the prediction output will only change at discrete time steps t_1, ..., t_K:

$$f(x,t;\theta) = f(x,t_k;\theta) \stackrel{\text{def}}{=} f_k(x;\theta_k) \quad \text{for } t_k \le t < t_{k+1}, \qquad (3)$$
$$f(x,t;\theta) = f_K(x;\theta_K) \quad \text{for } t \ge t_K. \qquad (4)$$

Furthermore, before t_1, no output estimate is available. Since this leads to a constant additive term independent of θ, we can ignore this aspect in the following. In total, Eq. (2) simplifies as follows:

$$\operatorname*{argmin}_{\theta} \sum_{k=1}^{K} w_k \left( \sum_{i=1}^{n} L\left(f_k(x_i;\theta_k), y_i\right) \right) + R(\theta), \qquad (5)$$

with weights $w_k = \int_{t_k}^{t_{k+1}} p(t)\,dt$ for $1 \le k < K$ and $w_K = \int_{t_K}^{\infty} p(t)\,dt$. As can be seen, we have a simple learning objective, which is a weighted combination of the learning objectives of each of the individual predictors f_k. If some of the parameters are shared between the predictors, which is the case for our approach presented in Sect. 3, each term in the objective cannot be optimized independently and joint optimization is necessary. Sharing parameters is essential for optimizing shared computations towards maximizing the expected accuracy of the complete model.
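As a concrete illustration of how the weights w_k in Eq. (5) follow from a budget distribution, the sketch below numerically integrates an assumed budget density between consecutive prediction times and forms the weighted objective. The density, the change points, and the numpy-based implementation are our own assumptions for illustration, not the authors' code.

```python
import numpy as np

def budget_weights(t_change, p_t, t_grid):
    """Weights w_k = P(t_k <= t < t_{k+1}) for prediction change points t_change.

    `p_t` is a budget density evaluated on `t_grid` (both 1-D arrays); the last
    weight collects all remaining mass beyond t_K, as in the text above.
    """
    dt = np.gradient(t_grid)
    cdf = np.cumsum(p_t * dt)          # crude numerical CDF on the grid
    cdf = cdf / cdf[-1]                # normalize the density over the grid
    def F(t):
        return np.interp(t, t_grid, cdf)
    K = len(t_change)
    w = np.empty(K)
    for k in range(K - 1):
        w[k] = F(t_change[k + 1]) - F(t_change[k])
    w[K - 1] = 1.0 - F(t_change[K - 1])
    return w

def weighted_objective(losses_per_layer, w):
    """Weighted combination of the K per-layer empirical losses, cf. Eq. (5)."""
    return float(np.dot(w, losses_per_layer))

# Example: budgets concentrated on short times put most weight on early layers.
t_grid = np.linspace(0.0, 2.0, 2001)
p_t = np.exp(-3.0 * t_grid)                              # assumed budget density
w = budget_weights([0.1, 0.3, 0.6, 1.0, 1.4, 1.8], p_t, t_grid)
```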
1610.02850#8
1610.02850#10
1610.02850
[ "1502.03167" ]
1610.02850#10
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
The information about the time-budget distribution defines the weights of the loss terms in an intuitive manner: if there is a high probability of the time budget being between t_k and t_{k+1}, the loss of f_k has a strong impact on the overall learning objective, and the parameters θ_k including the shared ones should be tuned towards reducing the loss of f_k rather than contributing significantly to other predictors. # 3 Learning Impatient DNNs with Early Prediction Layers In this section, we show how a single deep neural network with additional prediction layers is well suited for providing a series of prediction models.
1610.02850#9
1610.02850#11
1610.02850
[ "1502.03167" ]
1610.02850#11
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
[Figure 2 diagram: an AlexNet-style pipeline (inputs, labels, time-budget distribution; conv1–conv5 with batch normalization, ReLU, and pooling, followed by fully-connected layers) with early prediction layers and loss layers (loss1–loss6) branching off after each stage; the right panel shows architectures for the early prediction layers: FC only, combined early prediction with spatial average pooling (AVG), and early prediction with 4x4 average pooling (AVG 4x4).]
1610.02850#10
1610.02850#12
1610.02850
[ "1502.03167" ]
1610.02850#12
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Figure 2: (Left) Modification of the AlexNet architecture for dynamic budgets and early predictions. (Right) Possible architectures for early prediction. Early prediction layers To obtain a series of predictions, we add K additional layers to a common DNN architecture as illustrated in Figure 2. We refer to these layers as early prediction (EP) layers in the following. The output f_k(x) of these layers has as many dimensions as y. Already after the first layers, our approach is able to perform predictions with only a very small number of computational operations. The layered architecture of a DNN has an important advantage, since all f_k naturally share a large set of their parameters and also a large number of computations. Anytime approaches require a forward pass to go through all early prediction layers that can be processed until interruption. In case of non-parallel computation, the computational overhead of the early prediction layers should therefore be reduced as much as possible. The right part of Figure 2 shows different choices for EP layers we experimented with: (1) FC only, which is a simple single fully-connected (FC) layer followed by a softmax layer, (2) AVG, which performs average pooling across the spatial dimensions of the previous layer before a fully-connected layer, which leads to a significantly reduced number of parameters for the EP layers, and (3) AVG 4 × 4, which allows for preserving rough spatial information by performing average pooling in 4 × 4 = 16 uniformly-sized regions. Learning with weighted losses For learning, each of the EP layers is connected to a loss layer. The overall loss during training is exactly the weighted combination we derived in the previous section in Eq. (5). In theory, training our Impatient DNNs does not require any further modifications and learning can be done with standard back-propagation and gradient-descent. However, we observed in experiments that batch normalization [9] leads to a significantly more robust training and is even required to achieve convergence at all in most cases.
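A minimal PyTorch-style sketch of the three EP variants and the jointly weighted loss of Eq. (5) is given below. The use of PyTorch, the class and parameter names, and the lazy fully-connected layer are our own assumptions for illustration; the authors' networks follow the AlexNet/VGG19 layouts of Figure 2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyPredictionHead(nn.Module):
    """One early prediction layer: 'fc' (FC only), 'avg', or 'avg4x4'."""
    def __init__(self, in_channels, num_classes, variant="avg4x4"):
        super().__init__()
        self.variant = variant
        if variant == "fc":
            self.pool = None                      # flatten the full feature map
            self.fc = nn.LazyLinear(num_classes)  # many parameters, prone to overfitting
        elif variant == "avg":
            self.pool = nn.AdaptiveAvgPool2d(1)   # global spatial average pooling
            self.fc = nn.Linear(in_channels, num_classes)
        else:  # "avg4x4": keep a rough 4x4 spatial layout
            self.pool = nn.AdaptiveAvgPool2d(4)
            self.fc = nn.Linear(in_channels * 16, num_classes)

    def forward(self, features):
        x = features if self.pool is None else self.pool(features)
        return self.fc(torch.flatten(x, 1))

def impatient_loss(early_logits, target, weights):
    """Weighted sum of per-head cross-entropy losses, one term per EP layer."""
    return sum(w * F.cross_entropy(logits, target)
               for w, logits in zip(weights, early_logits))
```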
1610.02850#11
1610.02850#13
1610.02850
[ "1502.03167" ]
1610.02850#13
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
[Figure 3 plots: weight profiles over the EP layers for the EQ, LIN, POLY, and NORM schemes.] Figure 3: Types of time-budget distributions we consider in our paper. Weighting schemes In our experiments, we are interested in the effect of different time-budget distributions provided during learning. To simulate them, we consider the following schemes for early prediction layer weights w_1, ..., w_K: (STD) standard DNN training, i.e. only the last prediction matters: w_K = 1 and w_k = 0 otherwise, (EQ) uniform weights for uniform time-budget distributions: w_k = 1/K, (LIN) linearly increasing weights, i.e. small time budgets are unlikely: w_k ∝ k, (POLY) polynomially increasing weights: w_k ∝ k^γ with γ > 1, (ILIN, IPOLY) decreasing weights, i.e. small time budgets are likely: w_k = w'_{K+1-k} for weights w'_k of the former schemes, and (NORM) small and large time budgets are rare and layers in the middle of the architecture are given a high weight: w_k ∝ exp(−β · (k − K/2)²)
1610.02850#12
1610.02850#14
1610.02850
[ "1502.03167" ]
1610.02850#14
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
with β = 0.34. All of these schemes are simulating different budget specifications of an application. An illustration of several instances is given in Figure 3. # 4 Experiments In the following, we evaluate our approach with respect to different dynamic budget schemes and compare with standard DNN training and other relevant baselines. Experimental setup and datasets For evaluation, we conducted experiments on two object classification datasets. The 15-Scenes [15] dataset comprises a total of 4,485 images covering categories from kitchen and living room to suburban and industrial. Each category contains between 200 and 400 images, from which we took 100 images for training, as suggested by [15], and the remaining ones for testing. The training set is further divided into 90 images for actual training and 10 images for validation. The MIT-67 [20] indoor scenes database is comprised of 67 categories. We follow the procedure of [20] and take 80 images for training and 20 for testing. Again, the training set is split in order to obtain a validation set of 8 images per class. Since our datasets are too small for DNN training from scratch, we perform fine-tuning of different models pre-trained on ImageNet, e.g. AlexNet [13] and VGG19 [22]. The positions of EP layers for AlexNet are given in Figure 2. For VGG19, we add EP layers after each block of convolutional layers.
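The weighting schemes above can be generated, for example, as follows. This is our reading of the definitions: normalizing each scheme to sum to one, the uniform value 1/K for EQ, and centering the NORM scheme at K/2 are assumptions on our part, and the exponent γ is a free parameter.

```python
import numpy as np

def scheme_weights(K, scheme, gamma=2.0, beta=0.34):
    """Early prediction layer weights w_1..w_K for one of the budget schemes."""
    k = np.arange(1, K + 1, dtype=float)
    if scheme == "STD":
        w = np.zeros(K)
        w[-1] = 1.0                          # only the final prediction matters
    elif scheme == "EQ":
        w = np.ones(K)                       # uniform budget distribution
    elif scheme == "LIN":
        w = k                                # small budgets unlikely
    elif scheme == "POLY":
        w = k ** gamma                       # gamma > 1
    elif scheme in ("ILIN", "IPOLY"):
        w = scheme_weights(K, scheme[1:], gamma, beta)[::-1]   # w_k = w'_{K+1-k}
    elif scheme == "NORM":
        w = np.exp(-beta * (k - K / 2.0) ** 2)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return w / w.sum()                       # normalization is an assumption
```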
1610.02850#13
1610.02850#15
1610.02850
[ "1502.03167" ]
1610.02850#15
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Please note that the last 'early' prediction layer is always the output layer of the original CNN architecture. Analysis of learning Impatient DNNs In the following, we show that for learning Impatient DNNs care has to be taken to ensure convergence. For example, an adequate learning rate has to be determined to ensure convergence of the network while avoiding saturation at low accuracy. This becomes much more important when dealing with losses of multiple branches, since the gradients at shared layers accumulate, leading to the network training being more fragile.
1610.02850#14
1610.02850#16
1610.02850
[ "1502.03167" ]
1610.02850#16
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
[Figure 4 plots: accuracy on the validation set over training epochs for Prediction 1–6; the left panel spans 0–2000 epochs, the right panel 0–100 epochs.]
1610.02850#15
1610.02850#17
1610.02850
[ "1502.03167" ]
1610.02850#17
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Figure 4: Convergence during learning an Impatient AlexNet trained on MIT-67 with (right) and without (left) batch normalization: different colors indicate individual early prediction layers and it can be clearly seen that batch normalization significantly improves stability during training. Especially in the case of deeper network architectures, e.g. VGG, we observed that convergence cannot be achieved at all without proper normalization. Therefore, we made use of batch normalization [9], which rectifies the covariate shift in the input data distribution of each convolution layer. This technique allows for training with much higher learning rates, ensuring faster convergence and, in our case, convergence at all. In Figure 4 (left), an example of optimizing an Impatient AlexNet is shown where the validation accuracy for early prediction layers saturates very slowly at a low value caused by a highly decreased learning rate of 10^-4. Even no convergence is achieved for very early layers after running 2000 epochs of training. In contrast, adding batch normalization (right-hand side) allows for a 100x higher learning rate, resulting in very fast convergence at a high level of validation accuracy for all prediction layers. Evaluation of early prediction architectures As presented in Sect. 3, several architectures are possible for early prediction. The straightforward approach of connecting FC layers directly to each convolutional layer leads to a huge amount of additional parameters to be optimized. These layers are prone to overfitting.
1610.02850#16
1610.02850#18
1610.02850
[ "1502.03167" ]
1610.02850#18
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
This can be seen in the learning statistics for MIT-67 with a VGG19 base architecture shown in Figure 5. The training loss is near zero together with a moderate validation accuracy for early layers. We also experimented with multiple FC layers. However, learning of these architectures failed to converge in all cases, independently of the choice of hyperparameters. By applying spatial pooling layers, validation accuracy is substantially improved, which can be seen in Figure 5 (AVG and AVG4x4). Especially AVG4x4 provides rough spatial information which helps to improve performance even further. Therefore, we use this architecture in the following experiments. In the last two columns of Table 1, average computation times according to the particular weighting schemes and budget distributions are presented for a single image. If inference is performed up to a particular prediction layer known in advance, previous prediction layers do not have to be assessed and we achieve low prediction times t_B without additional overhead. Interruptable prediction in the anytime scenario (t_A) requires inference of all intermediate prediction layers because of the potential sudden interruption. In the worst case, i.e. when the forward pass includes all prediction layers, average computation time increases compared to the scenario with a-priori given budgets. All experiments were performed on an NVIDIA GeForce GTX 970 GPU.
1610.02850#17
1610.02850#19
1610.02850
[ "1502.03167" ]
1610.02850#19
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
1610.02850#18
1610.02850#20
1610.02850
[ "1502.03167" ]
1610.02850#20
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Does joint training of EP layers help? The most interesting question, however, is whether our joint training scheme motivated in Sect. 2 provides superior results compared to learning predictors independently. To answer this question, we compared our approach with different baselines that learn several SVM classiï¬ ers based on extracted CNN features [3] at each early prediction layer. We optimize SVM hyperparameters on the validation set to allow fair comparison. The underlying networks, on the contrary, differ in the sense that we made use of an original CNN pre-trained on ImageNet and a pre-trained CNN ï¬ ne-tuned on the current dataset. In Table 1, the evaluation for different time-budget distributions is presented where each result shows the expected accuracy according to the particular weighting scheme and budget distribution. It can be clearly seen that the original CNN (ORIG) without the adaptation to the current dataset performs worst.
1610.02850#19
1610.02850#21
1610.02850
[ "1502.03167" ]
1610.02850#21
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
By applying ï¬ ne-tuning (FT), however, accuracy can be noticeably increased for all early prediction SVMs. Our joint learning of the EP layers provides superior results in almost all scenarios. Es- pecially in the case of small time budgets our method beneï¬ ts from taking the budget distri- bution during learning into account resulting in an improvement of almost 10% on MIT-67 and 6% on 15-Scenes for an Impatient VGG19 compared to the best performing baseline. For extreme weighting schemes with high priority on later predictions (POLY ), ï¬ ne-tuning of the original networks provides slightly better results compared to our approach. This is not surprising since in this case training is very similar to that of standard DNNs with only one ï¬ nal loss layer. In Table 2, we compared our approach to state-of-the-art results for MIT-67 and 15- Scenes. Although the focus of this paper is rather on anytime capability while running It should be the risk of dropping accuracy at ï¬ nal layers, we achieved superior results. noted that only the last layer is used to obtain predictions, since we assume to have no budget restrictions. Especially for the jointly trained Impatient VGG19 on MIT-67, it was even possible to outperform the standard ï¬ ne-tuned CNN, which supports the idea of â deep supervisionâ [24]. Cascaded prediction Apart from both scenarios presented in Figure 1, efï¬ cient classiï¬ ca- tion constitutes another interesting application of our approach. The task here isâ for a given set of examplesâ to reach a desired accuracy within a minimal but not ï¬ xed amount of time.
1610.02850#20
1610.02850#22
1610.02850
[ "1502.03167" ]
1610.02850#22
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
MANUEL AMTHOR, ERIK RODNER, AND JOACHIM DENZLER: IMPATIENT DNNS VGG19 BUDGET SCHEME MIT-67 ORIG FT OURS 15-Scenes ORIG FT OURS â tB [ms] â tA [ms] EQ LIN POLY ILIN IPOLY NORM 46.65 54.19 62.82 37.25 25.63 47.53 48.07 56.52 67.07 37.71 25.65 47.90 53.93 60.55 69.66 45.62 35.11 55.38 83.37 85.87 88.71 77.56 70.14 84.46 84.28 87.47 91.71 77.73 69.85 84.74 85.63 88.02 90.88 80.87 75.93 86.67 1.11 1.37 1.72 0.82 0.50 1.07 1.19 1.47 1.84 0.86 0.51 1.15 ALEXNET BUDGET SCHEME MIT-67 ORIG FT OURS 15-Scenes ORIG FT OURS â tB [ms] â
1610.02850#21
1610.02850#23
1610.02850
[ "1502.03167" ]
1610.02850#23
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
tA [ms] EQ LIN POLY ILIN IPOLY NORM 41.75 45.19 48.50 36.64 28.69 43.97 46.19 50.96 56.29 39.59 30.17 47.80 48.40 52.13 55.76 42.91 36.14 49.93 82.56 83.73 85.56 78.10 72.38 83.25 84.28 86.19 88.98 79.03 72.48 84.82 85.11 85.94 87.38 81.87 77.85 84.89 0.68 0.79 0.96 0.54 0.40 0.65 0.75 0.89 1.09 0.59 0.42 0.72 Table 1:
1610.02850#22
1610.02850#24
1610.02850
[ "1502.03167" ]
1610.02850#24
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Comparison of Impatient AlexNet (top) and VGG19 (bottom) CNNs with sev- eral baselines. Performance is measured by expected accuracy in % based on the particular budget distribution. Dataset Orig FT Ours (eq) Ours (poly) PlacesCNN [28] [18]â MIT-67 15-Scenes 65.0% 71.04% 67.23% 88.30% 92.83% 92.13% 71.71% 91.45% 68.24% 90.19% 71.5% - Table 2: How good are our VGG19 Impatient Networks when there are no budget restrictions during testing? The table shows the accuracy of the last prediction layer also compared to state-of-the-art results. â The method of [18] requires more than 4s per image. In particular, interrupting the network at a certain depth might already provide the correct decision which renders further computation unnecessary.
1610.02850#23
1610.02850#25
1610.02850
[ "1502.03167" ]
1610.02850#25
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
To implement the idea of efï¬ cient inference, an adequate stopping criterion has to be deï¬ ned. Since each early prediction layer provides probabilistic outputs, we applied uncertainty-based decision making by calculating the ratio between the two highest class probabilities, which is known as 1-vs-2 strategy [11]. If the current prediction of class probabilities is characterized by a high ratio, inference can be interrupted. The analysis of the proposed criterion can be seen in Figure 6 showing time-accuracy plots. Thereby, one point on the red graph is obtained by a ï¬ xed ratio threshold which determines whether an early layer prediction already reaches sufï¬ cient certainty and thus provides the ï¬ nal decision. The blue graph, however, represents classiï¬ cation results of each early prediction layer itself, i.e., the ï¬ nal decision is made at always the same depth, inde- pendently of the underlying ratio. As can be seen, by using uncertainty-based predictions, accuracy can be increased substantially in a lot of cases with the same computational efforts. For example, by interrupting the AlexNet network at the ï¬ fth prediction layer consistently takes â ¼ 1 ms per image for MIT-67 (second-last plot in Figure 6). In contrast, using the proposed criterion, accuracy can be increased from 53% up to 57% while still requiring ex- actly the same computation time on average. An entropy-based criterion achieved inferior performance in our experiments. Qualitative results In Figure 7, qualitative results for the task of scene recognition (class â
1610.02850#24
1610.02850#26
1610.02850
[ "1502.03167" ]
1610.02850#26
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
bathroomâ from MIT-67) are shown. Different numbers in each image indicate the early 9 10 MANUEL AMTHOR, ERIK RODNER, AND JOACHIM DENZLER: IMPATIENT DNNS Boe & g accuracy on test set gg [= Ours (uncertainty) ours (uncertainty) â ours (anytime £9) â ours (anytime £9) ours (uncenainyy (anytime EQ) [= ours (uncertainty yytime EQ) average time per image in ms average time per image in ms average time per image in ms average time per image in ms
1610.02850#25
1610.02850#27
1610.02850
[ "1502.03167" ]
1610.02850#27
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Figure 6: Evaluation of uncertainty-based predictions compared to early layer predictions. From left to right: Impatient AlexNet on 15-Scenes, Impatient VGG19 on 15-Scenes, Impa- tient AlexNet on MIT-67, and Impatient VGG19 on MIT-67. Figure 7: Images of the MIT-67 ï¬ rst correctly classiï¬ ed as â bathroomâ at different early prediction layers of an Impatient VGG19 CNN.
1610.02850#26
1610.02850#28
1610.02850
[ "1502.03167" ]
1610.02850#28
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
The position of the layers is highlighted as a number and a uniquely colored border. prediction layer in which the particular example was ï¬ rst correctly classiï¬ ed. It can be clearly seen that the examples already decided at EP1 are white colored bathrooms with clearly visible toilet bowl, shower, and sink. With increasing complexity of the scene, layer depth increases as well to provide correct decisions. For example, the right most images in the second row of Figure 7 shows extraordinary bathrooms of unusual colored walls and furnishings increasing the likelihood of confusion with other classes, e.g. children room. # 5 Conclusions
1610.02850#27
1610.02850#29
1610.02850
[ "1502.03167" ]
1610.02850#29
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
In this paper, we presented impatient deep neural networks that tackle the problem of classi- ï¬ cation with dynamic time budgets during application. Compared to standard DNNs which suffer from a high computational demand during inference, we showed that our approach allows for anytime prediction, i.e. a possible interruption at multiple stages while still pro- viding output estimates which renders our method suitable even for real-time applications. We presented a novel general framework of learning dynamic budget predictors based on risk minimization, which we adapted directly to state-of-the-art convolutional neural network ar- chitectures by branching additional early prediction layers with weighted losses.
1610.02850#28
1610.02850#30
1610.02850
[ "1502.03167" ]
1610.02850#30
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Based on a set of object classiï¬ cation datasets and architectures, we showed that our approach pro- vides superior results for different time budget distributions. Furthermore, we developed an uncertainty-based prediction framework allowing for reducing computational costs while still providing the same accuracy. MANUEL AMTHOR, ERIK RODNER, AND JOACHIM DENZLER: IMPATIENT DNNS # References [1] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. arXiv preprint arXiv:1604.01685, 2016. [2] Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Ex- ploiting linear structure within convolutional networks for efï¬
1610.02850#29
1610.02850#31
1610.02850
[ "1502.03167" ]
1610.02850#31
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
cient evaluation. CoRR, abs/1404.0736, 2014. [3] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013. [4] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep spatial autoencoders for visuomotor learning. In ICRA, 2016. [5] Björn Fröhlich, Erik Rodner, and Joachim Denzler.
1610.02850#30
1610.02850#32
1610.02850
[ "1502.03167" ]
1610.02850#32
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
As time goes by: Anytime semantic segmentation with iterative context forests. In Symposium of the German Association for Pattern Recognition (DAGM), pages 1â 10, 2012. [6] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierar- chies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580â 587, 2014. [7] Sheng Guo, Weilin Huang, and Yu Qiao.
1610.02850#31
1610.02850#33
1610.02850
[ "1502.03167" ]
1610.02850#33
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Locally-supervised deep hybrid model for scene recognition. arXiv preprint arXiv:1601.07576, 2016. [8] Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. CoRR, abs/1412.1710, 2014. URL http://arxiv.org/abs/1412.1710. [9] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. [10] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014. [11] Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning In Computer Vision and Pattern Recognition, 2009. CVPR for image classiï¬ cation. 2009. IEEE Conference on, pages 2372â 2379. IEEE, 2009. [12] Sergey Karayev, Mario Fritz, and Trevor Darrell.
1610.02850#32
1610.02850#34
1610.02850
[ "1502.03167" ]
1610.02850#34
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Anytime recognition of objects and scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 572â 579, 2014. [13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬ cation with In Advances in neural information processing deep convolutional neural networks. systems, pages 1097â 1105, 2012. [14] Andrew Lavin. Fast algorithms for convolutional neural networks. abs/1509.09308, 2015. CoRR, MANUEL AMTHOR, ERIK RODNER, AND JOACHIM DENZLER: IMPATIENT DNNS [15] Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce.
1610.02850#33
1610.02850#35
1610.02850
[ "1502.03167" ]
1610.02850#35
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 2169â 2178. IEEE, 2006. [16] Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. arXiv preprint arXiv:1506.02515, 2015. [17] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015. [18] Lingqiao Liu, Chunhua Shen, and Anton van den Hengel. The treasure beneath con- volutional layers: Cross-convolutional-layer pooling for image classiï¬
1610.02850#34
1610.02850#36
1610.02850
[ "1502.03167" ]
1610.02850#36
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
cation. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4749â 4757, 2015. [19] Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through ffts. arXiv preprint arXiv:1312.5851, 2013. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 413â 420. IEEE, 2009. [21] David Silver, J Andrew Bagnell, and Anthony Stentz. Learning autonomous driving In Experimental Robotics, pages styles and maneuvers from expert demonstration. 371â 386. Springer, 2013. [22] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large- scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [23] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â
1610.02850#35
1610.02850#37
1610.02850
[ "1502.03167" ]
1610.02850#37
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
9, 2015. [24] Liwei Wang, Chen-Yu Lee, Zhuowen Tu, and Svetlana Lazebnik. Training deeper convolutional networks with deep supervision. arXiv preprint arXiv:1505.02496, 2015. [25] Zhixiang Xu, Kilian Weinberger, and Olivier Chapelle. The greedy miser: Learning under test-time budgets. arXiv preprint arXiv:1206.6451, 2012. [26] Zhixiang Xu, Matt Kusner, Gao Huang, and Kilian Q Weinberger. Anytime repre- sentation learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1076â 1084, 2013. [27] Z. Zhang, P. Luo, C. C. Loy, and X. Tang.
1610.02850#36
1610.02850#38
1610.02850
[ "1502.03167" ]
1610.02850#38
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
Learning deep representation for face align- ment with auxiliary attributes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(5):918â 930, May 2016. ISSN 0162-8828. doi: 10.1109/TPAMI.2015. 2469286. [28] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. In Advances in Learning deep features for scene recognition using places database. neural information processing systems, pages 487â 495, 2014.
1610.02850#37
1610.02850
[ "1502.03167" ]
1610.02357#0
Xception: Deep Learning with Depthwise Separable Convolutions
7 1 0 2 r p A 4 ] V C . s c [ 3 v 7 5 3 2 0 . 0 1 6 1 : v i X r a # Xception: Deep Learning with Depthwise Separable Convolutions # Franc¸ois Chollet Google, Inc. [email protected] # Abstract We present an interpretation of Inception modules in con- volutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and signiï¬ cantly outper- forms Inception V3 on a larger image classiï¬ cation dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of param- eters as Inception V3, the performance gains are not due to increased capacity but rather to a more efï¬ cient use of model parameters. as GoogLeNet (Inception V1), later reï¬ ned as Inception V2 [7], Inception V3 [21], and most recently Inception-ResNet [19]. Inception itself was inspired by the earlier Network- In-Network architecture [11].
1610.02357#1
1610.02357
[ "1608.04337" ]
1610.02357#1
Xception: Deep Learning with Depthwise Separable Convolutions
Since its ï¬ rst introduction, Inception has been one of the best performing family of models on the ImageNet dataset [14], as well as internal datasets in use at Google, in particular JFT [5]. The fundamental building block of Inception-style mod- els is the Inception module, of which several different ver- sions exist. In ï¬ gure 1 we show the canonical form of an Inception module, as found in the Inception V3 architec- ture. An Inception model can be understood as a stack of such modules. This is a departure from earlier VGG-style networks which were stacks of simple convolution layers. While Inception modules are conceptually similar to con- volutions (they are convolutional feature extractors), they empirically appear to be capable of learning richer repre- sentations with less parameters. How do they work, and how do they differ from regular convolutions? What design strategies come after Inception?
1610.02357#0
1610.02357#2
1610.02357
[ "1608.04337" ]