doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1704.00648 | 31 | update σ according to σ(t + 1) = σ(t) + K_G e_G(t), where σ(t) denotes σ at iteration t. Fig. 3 in Appendix A.4 shows the evolution of the gap, soft and hard loss as σ grows during training. We observed that both vector quantization and entropy loss lead to higher compression rates at a given reconstruction MSE compared to scalar quantization and training without entropy loss, respectively (see Appendix A.3 for details). | 1704.00648#31 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
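The update rule in the row above controls the annealing parameter σ through a feedback term driven by the gap between the soft and hard objectives. The sketch below only illustrates such a gap-driven controller; the function name, the gain K, and the placeholder losses are assumptions, not the paper's exact procedure.

```python
import numpy as np

def update_sigma(sigma, soft_loss, hard_loss, K=0.01):
    """One annealing step: grow sigma in proportion to the soft/hard gap (illustrative)."""
    gap = hard_loss - soft_loss      # e_G(t): discrepancy between hard and soft objectives
    return sigma + K * gap           # sigma(t+1) = sigma(t) + K * e_G(t)

# toy demonstration with a fake, shrinking gap
sigma = 0.4
for t in range(5):
    soft, hard = 1.0, 1.0 + 0.5 / (t + 1)   # placeholder loss values
    sigma = update_sigma(sigma, soft, hard)
    print(f"t={t}  sigma={sigma:.4f}")
```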
1704.00805 | 31 | ẋ = ∇²lse(z) u |_{x=σ(z)} = λ(diag(x) − xx⊤)u, (31)
where x = σ(z) ∈ Δ^{n−1} is a mixed strategy and u ∈ ℝⁿ is a payoff vector. We note that the matrix term was referred to as the replicator operator in [56]. To the best of our knowledge, the implications of this connection have not been discussed in the evolutionary game theory community.
Lemma 4. The log-sum-exp function is C², convex and not strictly convex on ℝⁿ.
The convexity of the log-sum-exp function is well-known [7] and follows from Proposition 2. To show that log-sum-exp is not strictly convex, take z and z + c1, where z ∈ ℝⁿ, c ∈ ℝ; then,
lse(z + c1) = lse(z) + c. (32)
Thus, lse is affine along the line given by z + c1, which implies that the log-sum-exp function is not strictly convex. This result is also noted in [24, p. 48].
Proposition 3. The softmax function is monotone, that is, | 1704.00805#31 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
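Equation (31) rests on the softmax function being the gradient of the log-sum-exp function, with Hessian λ(diag(x) − xx⊤). A small numerical check of both identities is sketched below, assuming the usual inverse-temperature form σ(z) ∝ exp(λz); the code is illustrative and not from the paper.

```python
import numpy as np

lam = 2.0                                       # inverse temperature (assumed definition)
def lse(z):
    return np.log(np.exp(lam * z).sum()) / lam
def softmax(z):
    e = np.exp(lam * (z - z.max()))             # numerically stabilised
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=5)

# finite-difference gradient of lse should match softmax(z)
eps = 1e-6
grad_fd = np.array([(lse(z + eps * np.eye(5)[i]) - lse(z - eps * np.eye(5)[i])) / (2 * eps)
                    for i in range(5)])
print(np.allclose(grad_fd, softmax(z), atol=1e-5))          # True

# Hessian of lse should equal lam * (diag(x) - x x^T), the "replicator operator" term
x = softmax(z)
H = lam * (np.diag(x) - np.outer(x, x))
print(np.allclose(H, H.T), np.all(np.linalg.eigvalsh(H) >= -1e-9))   # symmetric, PSD
```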
1704.00648 | 32 | Evaluation. To evaluate the image compression performance of our Soft-to-Hard Autoencoder (SHA) method we use four datasets, namely Kodak [1], B100 [31], Urban100 [14], ImageNET100 (100 randomly selected images from ImageNET [25]) and three standard quality measures, namely peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) [37], and multi-scale SSIM (MS-SSIM), see Appendix A.5 for details. We compare our SHA with the standard JPEG, JPEG 2000, and BPG [10], focusing on compression rates < 1 bits per pixel (bpp) (i.e., the regime where traditional integral transform-based compression algorithms are most challenged). As shown in Fig. 1, for high compression rates (< 0.4 bpp), our SHA outperforms JPEG and JPEG 2000 in terms of MS-SSIM and is competitive with BPG. A similar trend can be observed for SSIM (see Fig. 4 in Appendix A.6 for plots of SSIM and PSNR as a function of bpp). SHA performs best on ImageNET100 and is most challenged on Kodak when compared with JPEG 2000. Visually, SHA-compressed images have fewer artifacts than those compressed by JPEG 2000 (see Fig. 1, and Appendix A.7). | 1704.00648#32 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
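The evaluation above reports rates in bits per pixel (bpp) and quality in PSNR, SSIM and MS-SSIM. A minimal sketch of the two simplest quantities, PSNR and bpp, is given below; the image, the perturbation and the bit count are placeholders, and MS-SSIM is omitted.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def bits_per_pixel(num_code_bits, height, width):
    """Rate of a compressed image in bits per pixel (bpp)."""
    return num_code_bits / (height * width)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)                  # placeholder image
rec = np.clip(img + np.random.randint(-3, 4, img.shape), 0, 255).astype(np.uint8)
print(round(psnr(img, rec), 2), "dB at", bits_per_pixel(64 * 64, 64, 64), "bpp")
```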
1704.00805 | 32 | Proposition 3. The softmax function is monotone, that is,
(σ(z) − σ(z′))⊤(z − z′) ≥ 0, ∀z, z′ ∈ ℝⁿ, (33)
and not strictly monotone on ℝⁿ.
Proof. Monotonicity of σ follows directly from the convexity of the log-sum-exp function. Since the log-sum-exp function is not strictly convex on ℝⁿ, therefore by Lemma 2, σ fails to be strictly monotone. Alternatively, since every strictly monotone operator is injective, therefore σ is not strictly monotone on ℝⁿ.
The monotonicity of σ allows us to state a stronger result.
Corollary 1. The softmax function is a maximal monotone operator, that is, there exists no monotone operator such that its graph properly contains the graph of the softmax function.
Proof. This directly follows from σ being a continuous, monotone map, see Lemma 3.
Next, we show that under appropriate conditions, the softmax function is a contraction in ℓ₂. | 1704.00805#32 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
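Proposition 3 (monotonicity, inequality (33)) and its failure to be strict can be checked numerically. The sketch below samples random score vectors and verifies the inner-product inequality, then shows that shifts along the all-ones direction leave the softmax unchanged; it is illustrative only.

```python
import numpy as np

def softmax(z, lam=1.0):
    e = np.exp(lam * (z - z.max()))
    return e / e.sum()

rng = np.random.default_rng(1)
vals = []
for _ in range(10_000):
    z, zp = rng.normal(size=4), rng.normal(size=4)
    vals.append((softmax(z) - softmax(zp)) @ (z - zp))
print(min(vals) >= -1e-12)                          # inequality (33): never negative

# not *strictly* monotone: shifting z by c*1 leaves the softmax unchanged,
# so the inner product above is exactly zero along those directions
z = rng.normal(size=4)
print(np.allclose(softmax(z), softmax(z + 3.0)))    # True
```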
1704.00648 | 33 | Related methods and discussion. JPEG 2000 [29] uses wavelet-based transformations and adaptive EBCOT coding. BPG [10], based on a subset of the HEVC video compression standard, is the
| METHOD | ACC [%] | COMP. RATIO |
|---|---|---|
| ORIGINAL MODEL | 92.6 | 1.00 |
| PRUNING + FT. + INDEX CODING + H. CODING [12] | 92.6 | 4.52 |
| PRUNING + FT. + K-MEANS + FT. + I.C. + H.C. [11] | 92.6 | 18.25 |
| PRUNING + FT. + HESSIAN-WEIGHTED K-MEANS + FT. + I.C. + H.C. | 92.7 | 20.51 |
| PRUNING + FT. + UNIFORM QUANTIZATION + FT. + I.C. + H.C. | 92.7 | 22.17 |
| PRUNING + FT. + ITERATIVE ECSQ + FT. + I.C. + H.C. | 92.7 | 21.01 |
| SOFT-TO-HARD ANNEALING + FT. + H. CODING (OURS) | 92.1 | 19.15 |
| SOFT-TO-HARD ANNEALING + FT. + A. CODING (OURS) | 92.1 | 20.15 |
| 1704.00648#33 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
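Both Huffman and arithmetic coding in Table 1 are lower-bounded by the sample entropy of the quantization indices. The sketch below computes that entropy bound for a hypothetical index histogram; the counts are invented for illustration.

```python
import numpy as np

def entropy_bits_per_symbol(counts):
    """Shannon entropy H(p) of the empirical symbol distribution, in bits."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# hypothetical index histogram: most weights collapse onto a few centers
counts = np.array([70_000, 2_000, 1_500, 500, 100, 50], dtype=np.float64)
H = entropy_bits_per_symbol(counts)
# ignoring the small cost of storing the center table itself
print(f"entropy {H:.3f} bits/weight -> compression factor about {32.0 / H:.1f}x vs 32-bit floats")
```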
1704.00805 | 33 | Next, we show that under appropriate conditions, the softmax function is a contraction in ℓ₂.
Lemma 5. ([8, p. 58], Theorem 2.1.6) A C², convex function f : ℝⁿ → ℝ has a Lipschitz continuous gradient with Lipschitz constant L > 0 if for all z, v ∈ ℝⁿ,
0 ≤ v⊤∇²f(z)v ≤ L‖v‖₂². (34)
Proposition 4. The softmax function is L-Lipschitz with respect to ‖·‖₂ with L = λ, that is, for all z, z′ ∈ ℝⁿ,
‖σ(z) − σ(z′)‖₂ ≤ λ‖z − z′‖₂, (35)
where λ is the inverse temperature constant.
Proof. Given the Hessian of lse in Proposition 2, we have for all z, v ∈ ℝⁿ,
v⊤∇²lse(z)v = λ(∑_{i=1}^n v_i² σ_i(z) − (∑_{i=1}^n v_i σ_i(z))²). (36)
Since the second term on the right hand side of (36) is nonnegative, therefore, | 1704.00805#33 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
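Proposition 4 states that the softmax function is λ-Lipschitz in ℓ₂. A quick Monte-Carlo check of the bound (35) is sketched below; the dimension, sample count and λ values are arbitrary choices.

```python
import numpy as np

def softmax(z, lam):
    e = np.exp(lam * (z - z.max()))
    return e / e.sum()

rng = np.random.default_rng(2)
for lam in (0.5, 1.0, 4.0):
    ratios = []
    for _ in range(20_000):
        z, zp = rng.normal(size=6), rng.normal(size=6)
        num = np.linalg.norm(softmax(z, lam) - softmax(zp, lam))
        den = np.linalg.norm(z - zp)
        ratios.append(num / den)
    print(f"lambda={lam}: max observed ratio {max(ratios):.3f} <= {lam}")
```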
1704.00648 | 34 | Table 1: Accuracies and compression factors for different DNN compression techniques, using a 32-layer ResNet on CIFAR-10. FT. denotes fine-tuning, I.C. denotes index coding, and H.C. and A.C. denote Huffman and arithmetic coding, respectively. The pruning-based results are from [5].
current state of the art for image compression. It uses context-adaptive binary arithmetic coding (CABAC) [21].
|  | Theis et al. [30] | SHA (ours) |
|---|---|---|
| Quantization | rounding to integers | vector quantization |
| Backpropagation | grad. of identity mapping | grad. of soft relaxation |
| Entropy estimation | Gaussian scale mixtures | (soft) histogram |
| Training material | high quality Flickr images | ImageNET |
| Operating points | ensemble | single model |
| 1704.00648#34 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
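The "(soft) histogram" entry in the inset table refers to estimating the symbol distribution from soft assignments rather than hard counts. The sketch below illustrates one plausible form of this idea (softmax over scaled negative squared distances, averaged into a histogram); the exact assignment and annealing used in the paper may differ.

```python
import numpy as np

def soft_assignments(z, centers, sigma):
    """Soft nearest-center assignments: softmax over negative scaled distances (illustrative)."""
    d2 = (z[:, None] - centers[None, :]) ** 2                     # squared distances, shape (N, L)
    q = np.exp(-sigma * (d2 - d2.min(axis=1, keepdims=True)))     # stabilised
    return q / q.sum(axis=1, keepdims=True)

def soft_histogram_entropy(q):
    """Entropy (bits) of the soft symbol histogram obtained by averaging soft assignments."""
    p = q.mean(axis=0)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
z = rng.normal(size=10_000)
centers = np.linspace(-3, 3, 15)
for sigma in (0.5, 5.0, 50.0):                                    # annealing sharpens the assignments
    print(sigma, round(soft_histogram_entropy(soft_assignments(z, centers, sigma)), 3))
```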
1704.00805 | 34 | Since the second term on the right hand side of (36) is nonnegative, therefore,
n n vl VW? Ise(z)u <A) v?0;(z) < Asupf{ai(z)} Ov? i=l i=l = v0! V? Ise(z)u < Alful|3. (37)
< Alful|3. {1,...,n},Vz
â
â = 1,
Rn. By 2 lse(z) is positive semideï¬nite. Hence using where sup Lemma 4, Lemma 1 and (37), we have, Ïi(z) { â , } z â i â } â â {
0<v' V? Ise(z)u < Allull3. (38)
0<v' V? Ise(z)u < By Lemma 5, a is Lipschitz with L = 2.
â¤
â
â¤
6
|
We note that Proposition 4 can also be established by using Theorem 4.2.1. in [28, p. 240], which resorts to using duality between the negative entropy and the log-sum-exp function.
As a minor consequence of Proposition 4, by the Cauchy-Schwarz inequality, we have, | 1704.00805#34 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 35 | The recent works of [30, 4] also showed competitive performance with JPEG 2000. While we use the architecture of [30], there are stark differences between the works, summarized in the inset table. The work of [4] builds a deep model using multiple generalized divisive normalization (GDN) layers and their inverses (IGDN), which are specialized layers designed to capture local joint statistics of natural images. Furthermore, they model marginals for entropy estimation using linear splines and also use CABAC [21] coding. Concurrent to our work, the method of [16] builds on the architecture proposed in [33], and shows that impressive performance in terms of the MS-SSIM metric can be obtained by incorporating it into the optimization (instead of just minimizing the MSE).
In contrast to the domain-specific techniques adopted by these state-of-the-art methods, our framework for learning compressible representations can realize a competitive image compression system, only using a convolutional autoencoder and simple entropy coding.
# 5 DNN Compression | 1704.00648#35 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
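The soft-to-hard idea behind the framework can be illustrated with a scalar quantizer: a differentiable convex combination of centers that approaches nearest-neighbor assignment as the annealing parameter grows. The sketch below is a toy illustration with assumed center placement, not the paper's implementation.

```python
import numpy as np

def soft_quantize(z, centers, sigma):
    """Differentiable surrogate: convex combination of centers with softmax weights (illustrative)."""
    d2 = (z[:, None] - centers[None, :]) ** 2
    q = np.exp(-sigma * (d2 - d2.min(axis=1, keepdims=True)))
    q /= q.sum(axis=1, keepdims=True)
    return q @ centers

def hard_quantize(z, centers):
    """Non-differentiable target: nearest-center assignment."""
    return centers[np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)]

centers = np.linspace(-1, 1, 5)
z = np.array([-0.9, -0.2, 0.05, 0.4, 0.77])
for sigma in (1.0, 10.0, 1000.0):          # as sigma grows, the soft output approaches the hard one
    print(sigma, np.round(soft_quantize(z, centers, sigma), 3))
print("hard:", hard_quantize(z, centers))
```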
1704.00805 | 35 | (σ(z) − σ(z′))⊤(z − z′) ≤ λ‖z − z′‖₂². (39)
Corollary 2. The softmax function is 1/L-co-coercive with respect to ‖·‖₂ with L = λ, that is, for all z, z′ ∈ ℝⁿ,
(σ(z) − σ(z′))⊤(z − z′) ≥ (1/λ)‖σ(z) − σ(z′)‖₂², (40)
where λ is the inverse temperature constant.
Proof. Follows directly from the Baillon-Haddad Theorem, see Theorem 1.
Proposition 4 and Corollary 2 show that the inverse temperature constant λ is crucial in determining the Lipschitz and co-coercive properties of the softmax function. We summarize these properties with respect to ‖·‖₂ in the following corollary.
Corollary 3. The softmax function is λ-Lipschitz and 1/λ-co-coercive for any λ > 0, in particular,
• Nonexpansive and firmly nonexpansive for λ = 1,
• Contractive for λ ∈ (0, 1),
where λ is the inverse temperature constant. | 1704.00805#35 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
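Corollary 2 and Corollary 3 can also be probed numerically: for λ < 1 the softmax should be a strict contraction, and inequality (40) should hold for any λ. The sketch below checks both on random samples; it is illustrative only.

```python
import numpy as np

def softmax(z, lam):
    e = np.exp(lam * (z - z.max()))
    return e / e.sum()

rng = np.random.default_rng(4)
lam = 0.5                                    # lambda in (0,1): softmax should be a contraction
worst_cocoercive, worst_contract = np.inf, 0.0
for _ in range(20_000):
    z, zp = rng.normal(size=5), rng.normal(size=5)
    d = softmax(z, lam) - softmax(zp, lam)
    lhs = d @ (z - zp)
    worst_cocoercive = min(worst_cocoercive, lhs - (d @ d) / lam)   # inequality (40): should stay >= 0
    worst_contract = max(worst_contract, np.linalg.norm(d) / np.linalg.norm(z - zp))
print(worst_cocoercive >= -1e-12, worst_contract < 1.0)
```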
1704.00648 | 37 | We concatenate the parameters into a vector W ∈ ℝ^464,154 and employ scalar quantization (m = d), such that Z⊤ = z = W. We started from the pre-trained original model, which obtains a 92.6% accuracy on the test set. We implemented the entropy minimization by using L = 75 centers and chose β = 0.1 such that the converged entropy would give a compression factor 20, i.e., giving 32/20 = 1.6 bits per weight. The training was performed with the same learning parameters as the original model was trained with (SGD with momentum 0.9). The annealing schedule used was a simple exponential one, σ(t + 1) = 1.001 σ(t) with σ(0) = 0.4. After 4 epochs of training, when σ(t) has increased by a factor 20, we switched to hard assignments and continued fine-tuning at a lower learning rate.² Adhering to the benchmark of [5, 12, 11], we obtain the compression factor by dividing the bit cost of storing the uncompressed weights as floats (464,154 × 32 bits) with the total encoding cost of compressed weights (i.e., L × 32 bits for the centers plus the size of the compressed index stream). | 1704.00648#37 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
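The chunk above fixes an exponential annealing schedule and describes how the compression factor is computed. The sketch below reproduces that arithmetic under assumed numbers (the coded index size of roughly 1.6 bits per weight is hypothetical); it is not the training code.

```python
import numpy as np

# exponential annealing schedule from the chunk above: sigma(t+1) = 1.001 * sigma(t), sigma(0) = 0.4
sigma = 0.4 * 1.001 ** np.arange(0, 3001)
print(round(sigma[-1] / sigma[0], 1))        # roughly a factor-20 growth after ~3000 steps

def compression_factor(num_weights, num_centers, coded_index_bits):
    """Uncompressed float cost divided by (center table + entropy-coded index stream)."""
    uncompressed = num_weights * 32
    compressed = num_centers * 32 + coded_index_bits
    return uncompressed / compressed

# hypothetical coded size of ~1.6 bits/weight for the 464,154-weight ResNet
print(round(compression_factor(464_154, 75, int(1.6 * 464_154)), 2))
```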
1704.00805 | 37 | Fig. 3: Feedback representation of the exponentially-discounted reinforcement learning scheme (EXP-D-RL).
In this section we demonstrate an application of these new properties of the softmax function in the context of stateless continuous-time reinforcement learning in finite games. For clarity and ease of notation, we perform our analysis in a single-player setup. For extension to N-player games, higher-order extension, and additional simulations, refer to our related
paper [21]. For other related work in this direction, see [22], [23], [46].
Consider a game G with a single player. We note that this type of game is also known as "play against nature" and is identifiable with single-population matching in population games [2]. The player is equipped with an action set A = {1, . . . , n} and continuous payoff function U : Δ^{n−1} → ℝⁿ. A mixed strategy profile is given by x = [x₁, . . . , xₙ]⊤ ∈ Δ^{n−1}. The player's expected payoff of using x is given by,
U(x) = ∑_{i∈A} x_i U_i(x) = x⊤U(x),
where u = U(x) = [U₁(x), . . . , Uₙ(x)]⊤ ∈ ℝⁿ is referred to as the payoff vector at x. | 1704.00805#37 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 38 | Our compressible model achieves a comparable test accuracy of 92.1% while compressing the DNN by a factor 19.15 with Huffman and 20.15 using arithmetic coding. Table 1 compares our results with state-of-the-art approaches reported by [5]. We note that while the top methods from the literature also achieve accuracies above 92% and compression factors above 20, they employ a considerable amount of hand-designed steps, such as pruning, retraining, various types of weight clustering, special encoding of the sparse weight matrices into an index-difference based format and then finally use entropy coding. In contrast, we directly minimize the entropy of the weights in the training, obtaining a highly compressible representation using standard entropy coding.
² We switch to hard assignments since we can get large gradients for weights that are equally close to two centers as Q̃ converges to hard nearest neighbor assignments. One could also employ simple gradient clipping.
| 1704.00648#38 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 38 | where u = U(x) = [U₁(x), . . . , Uₙ(x)]⊤ ∈ ℝⁿ is referred to as the payoff vector at x.
Starting at t = 0, we assume that the player repeatedly interacts with the game and aggregates his raw payoff u = U(x) ∈ ℝⁿ via the learning rule,
z_i(t) = e^{−t} z_i(0) + ∫₀ᵗ e^{s−t} u_i(s) ds, ∀i ∈ A, (41)
where u_i = U_i(x) ∈ ℝ is the payoff to the ith strategy and z_i ∈ ℝ is the score variable associated with the ith strategy. This form of aggregation as given by (41) is known as the exponentially-discounted learning rule, under which the player allocates exponentially more weight to recent observations of the payoff [22], [23].
Taking the time derivative of (41) yields the score dynamics,
ż_i = u_i − z_i, i ∈ A, (42) | 1704.00805#38 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
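The score dynamics (42)-(43) can be simulated with a simple forward-Euler discretization, feeding the softmax of the scores back through the payoff. The sketch below uses a toy constant-payoff map as an assumption; it only illustrates the aggregation, not a specific game from the paper.

```python
import numpy as np

def softmax(z, lam=1.0):
    e = np.exp(lam * (z - z.max()))
    return e / e.sum()

def simulate_scores(payoff, z0, dt=0.01, steps=5000, lam=1.0):
    """Forward-Euler integration of the score dynamics z_dot = U(sigma(z)) - z (illustrative)."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        u = payoff(softmax(z, lam))
        z += dt * (u - z)
    return z, softmax(z, lam)

# a toy two-action game with a constant payoff advantage for action 0 (hypothetical payoff map)
payoff = lambda x: np.array([1.0, 0.5])
z_T, x_T = simulate_scores(payoff, z0=[0.0, 0.0])
print(np.round(z_T, 3), np.round(x_T, 3))   # scores approach the payoffs; strategy approaches softmax(u)
```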
1704.00648 | 39 | In Fig. 5 in Appendix A.8, we show how the sample entropy H(p) decays and the index histograms develop during training, as the network learns to condense most of the weights to a couple of centers when optimizing (6). In contrast, the methods of [12, 11, 5] manually impose 0 as the most frequent center by pruning 80% of the network weights. We note that the recent work by [34] also manages to tackle the problem in a single training procedure, using the minimum description length principle. In contrast to our framework, they take a Bayesian perspective and rely on a parametric assumption on the symbol distribution.
# 6 Conclusions
In this paper we proposed a unified framework for end-to-end learning of compressed representations for deep architectures. By training with a soft-to-hard annealing scheme, gradually transferring from a soft relaxation of the sample entropy and network discretization process to the actual non-differentiable quantization process, we manage to optimize the rate distortion trade-off between the original network loss and the entropy. Our framework can elegantly capture diverse compression tasks, obtaining results competitive with state-of-the-art for both image compression as well as DNN compression. The simplicity of our approach opens up various directions for future work, since our framework can be easily adapted for other tasks where a compressible representation is desired.
# References | 1704.00648#39 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 39 | Taking the time derivative of (41) yields the score dynamics,
ż_i = u_i − z_i, i ∈ A, (42)
We refer to (42) as the exponentially-discounted score dynamics, a set of differential equations whose solutions capture the evolution of the player's scores over time. This form of score dynamics was investigated in [20], [22], [54], [57]. Since U_i(x) is continuous over a compact domain, there exists some constant M > 0 such that max_i |U_i(x)| ≤ M, ∀x ∈ Δ^{n−1}. Then it can be shown using standard arguments that Ω = {z ∈ ℝⁿ : ‖z‖₂ ≤ √M} is a compact, positively invariant set (the solution remains in Ω for all time).
We can express (42) using stacked-vector notation as,
ż = u − z, (43)
where z = [z₁, . . . , zₙ]⊤. Suppose that the score variable z is mapped to the strategy x through the softmax selection rule, i.e., x = σ(z); then the payoff vector can be written as u = U(x) = U(σ(z)). Expressing the composition between the softmax selection rule and the payoff vector as (U ∘ σ)(z) := U(σ(z)), we can also write (43) as,
ż = (U ∘ σ)(z) − z. (44) | 1704.00805#39 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 40 | # References
[1] Kodak PhotoCD dataset. http://r0k.us/graphics/kodak/, 1999. [2] Eugene L Allgower and Kurt Georg. Numerical continuation methods: an introduction,
volume 13. Springer Science & Business Media, 2012.
[3] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimization of nonlinear transform codes for perceptual quality. arXiv preprint arXiv:1607.05006, 2016.
[4] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704, 2016.
[5] Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee. Towards the limit of network quantization. arXiv preprint arXiv:1612.01543, 2016.
[6] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pages 3123–3131, 2015.
[7] Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012. | 1704.00648#40 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 40 | The overall exponentially-discounted reinforcement learning scheme (EXP-D-RL) can be represented as a closed-loop feedback system in Figure 3, where Iₙ is the n × n identity matrix, 1/(s + 1) is the transfer function of (42) from u_i to z_i, s ∈ ℂ, and ⊗ is the Kronecker product. The closed-loop system is equivalently represented by,
ż = u − z, u = U(x), x = σ(z). (45)
From (44), we see that the equilibria of the overall closed-loop system (45) are the fixed points of the map z ↦ (U ∘ σ)(z). This fixed-point condition can be restated as,
ż = 0 ⟹ z̄* = u*, u* = U(x̄*), x̄* = σ(z̄*). (46)
The existence of the fixed point is guaranteed by Brouwer's Fixed Point Theorem provided that U ∘ σ is a continuous function with bounded range [57]. Since z̄* = u*, the fixed point z̄* is mapped through σ to a logit equilibrium [14], [22].
Proposition 5. x̄* = σ(z̄*) = σ(u*) is the logit equilibrium of the game G. | 1704.00805#40 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
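The fixed-point condition (46) suggests computing the logit equilibrium by iterating z ← U(σ(z)). The damped iteration sketched below is an assumed solver (step size and starting point are arbitrary), shown on the standard RPS payoff matrix used later in the paper.

```python
import numpy as np

def softmax(z, lam=1.0):
    e = np.exp(lam * (z - z.max()))
    return e / e.sum()

def logit_fixed_point(U, z0, lam=1.0, step=0.2, iters=5000, tol=1e-10):
    """Damped iteration z <- z + step*(U(sigma(z)) - z); a fixed point satisfies z* = U(sigma(z*)) as in (46)."""
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z_new = z + step * (U(softmax(z, lam)) - z)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z, softmax(z, lam)

A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])   # standard RPS payoff matrix (assumed sign convention)
z_star, x_star = logit_fixed_point(lambda x: A @ x, z0=[1.0, -0.5, 0.2])
print(np.round(x_star, 3))                                    # -> approximately [0.333 0.333 0.333]
```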
1704.00648 | 41 | [7] Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
[9] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115–118, 2017.
[10] Fabrice Bellard. BPG Image format. https://bellard.org/bpg/, 2014. [11] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[12] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015. | 1704.00648#41 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 41 | Proposition 5. x̄* = σ(z̄*) = σ(u*) is the logit equilibrium of the game G.
Hence, the convergence of the solution of the score dynamics z(t) towards the fixed point of U ∘ σ implies convergence of the induced strategy x(t) = σ(z(t)) towards a logit equilibrium point x̄* of the game. In the following, we provide different assumptions on the payoff function U or the composition between the payoff function and the softmax operator U ∘ σ under which the induced strategy converges. For background on dynamical systems and Lyapunov theory, see [55]. This analysis was inspired by [57].
We now use the co-coercive property of the softmax function to provide the convergence conditions of the exponentially-discounted score dynamics (43) in a general class of games. Consider the exponentially-discounted reinforcement learning scheme as depicted in Figure 3. We proceed by imposing the following assumption on the payoff of the game.
Assumption 1. The payoff U is anti-monotone, that is, for all x, x′ ∈ Δ^{n−1},
(x − x′)⊤(U(x) − U(x′)) ≤ 0. (47) | 1704.00805#41 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 42 | [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[14] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5197–5206, 2015.
[15] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
[16] Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, and George Toderici. Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks. arXiv preprint arXiv:1703.10114, 2017.
[17] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. | 1704.00648#42 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 42 | (x − x′)⊤(U(x) − U(x′)) ≤ 0. (47)
Theorem 2. Let G be a game with the player's learning scheme as given by EXP-D-RL, (45) (Figure 3). Assume there is a finite number of isolated fixed points z̄* of U ∘ σ; then under Assumption 1, the player's score z(t) converges to a rest point z̄*. Moreover, x(t) = σ(z(t)) converges to a logit equilibrium x̄* = σ(z̄*) of G.
Proof. First, recall that solutions z(t) of (44) remain bounded and Ω = {z ∈ ℝⁿ : ‖z‖₂ ≤ √M} is a compact, positively invariant set. Let z̄* be a rest point, z̄* = u* = U(σ(z̄*)), x̄* = σ(z̄*).
Next, consider the Lyapunov function given by the Bregman divergence generated by the log-sum-exp function (10),
V_{z̄*}(z) = lse(z) − lse(z̄*) − ∇lse(z̄*)⊤(z − z̄*), (48) | 1704.00805#42 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 43 | [17] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[18] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[19] Alex Krizhevsky and Geoffrey E Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.
[20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
[21] Detlev Marpe, Heiko Schwarz, and Thomas Wiegand. Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard. IEEE Transactions on Circuits and Systems for Video Technology, 13(7):620–636, 2003.
[22] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. Int'l Conf. Computer Vision, volume 2, pages 416–423, July 2001. | 1704.00648#43 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 43 | V_{z̄*}(z) = lse(z) − lse(z̄*) − ∇lse(z̄*)⊤(z − z̄*), (48)
Recall that by Lemma 4, lse is convex and by Proposition 1, ∇lse(z) = σ(z). By convexity of lse, V_{z̄*}(z) ≥ 0, ∀z ∈ ℝⁿ. Using ‖σ(z)‖₁ = σ(z)⊤1 = 1 and lse(z + 1c) = lse(z) + c, it can be shown that V_{z̄*}(z̄* + 1c) = 0, ∀c ∈ ℝ, so V_{z̄*}(·) is positive semidefinite, but not positive definite.
Taking the time derivative of V_{z̄*}(z) along the solution of (44) yields,
V̇_{z̄*}(z) = ∇V_{z̄*}(z)⊤ż = (σ(z) − σ(z̄*))⊤(−z + u) = (σ(z) − σ(z̄*))⊤(−z + z̄* − z̄* + u) = −(σ(z) − σ(z̄*))⊤(z − z̄*) + (σ(z) − σ(z̄*))⊤(u − u*) | 1704.00805#43 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 44 | [23] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016.
[24] Kenneth Rose, Eitan Gurewitz, and Geoffrey C Fox. Vector quantization by deterministic annealing. IEEE Transactions on Information Theory, 38(4):1249–1257, 1992.
[25] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
[26] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1874–1883, 2016. | 1704.00648#44 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 44 | By Corollary 2, σ is co-coercive, therefore,
V̇_{z̄*}(z) ≤ −(1/λ)‖σ(z) − σ(z̄*)‖₂² + (σ(z) − σ(z̄*))⊤(u − u*).
Since u = U(σ(z)), u* = U(σ(z̄*)), x = σ(z), and x̄* = σ(z̄*), (47) implies that V̇_{z̄*}(z) ≤ −(1/λ)‖σ(z) − σ(z̄*)‖₂², thus V̇_{z̄*}(z) ≤ 0, ∀z ∈ ℝⁿ, and V̇_{z̄*}(z) = 0 for all z ∈ ℰ = {z ∈ Ω : σ(z) = σ(z̄*)}. On ℰ the dynamics of (44) reduce to,
ż = U(σ(z̄*)) − z = z̄* − z. | 1704.00805#44 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 45 | [27] Wenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszar, Andrew Aitken, Christian Ledig, and Zehan Wang. Is the deconvolution layer the same as a convolutional layer? arXiv preprint arXiv:1609.07009, 2016.
[28] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[29] David S. Taubman and Michael W. Marcellin. JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer Academic Publishers, Norwell, MA, USA, 2001.
[30] Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszar. Lossy image compression with compressive autoencoders. In ICLR 2017, 2017.
[31] Radu Timofte, Vincent De Smet, and Luc Van Gool. A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution, pages 111–126. Springer International Publishing, Cham, 2015. | 1704.00648#45 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 45 | ż = U(σ(z̄*)) − z = z̄* − z.
Therefore z(t) → z̄* as t → ∞, for any z(0) ∈ ℰ. Thus, no other solution except z̄* can stay forever in ℰ, and the largest invariant subset M ⊆ ℰ consists only of equilibria. Since (44) has a finite number of isolated equilibria z̄*, by LaSalle's invariance principle [55], it follows that for any z(0) ∈ Ω, z(t) converges to one of them. By continuity of σ, x(t) converges to x̄* = σ(z̄*) as t → ∞. For an alternative proof using Barbalat's lemma, see [21].
Fig. 4: Convergence of the induced strategy x(t) towards the logit equilibrium of the standard RPS game. The red curve shows the evolution of the strategy in the interior of the simplex. | 1704.00805#45 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 46 | [32] George Toderici, Sean M OâMalley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks. arXiv preprint arXiv:1511.06085, 2015.
[33] George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. arXiv preprint arXiv:1608.05148, 2016.
[34] Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
[35] Gregory K Wallace. The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics, 38(1):xviii–xxxiv, 1992.
[36] Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems and Computers, 2003, volume 2, pages 1398–1402 Vol.2, Nov 2003. | 1704.00648#46 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 46 | Example 1. We note that Assumption 1 is equivalent to the game 𝒢 being a stable game [2, p. 79], [18]. The representative game from the class of stable games is the standard Rock-Paper-Scissors (RPS) game given by the payoff matrix,
A = [ 0 −1 1 ; 1 0 −1 ; −1 1 0 ],   (49)
â
which generates the payoff vector U(x) = Ax. We present a simulation of the standard RPS game under the exponentially-discounted score dynamics (43) with λ = 1. The resulting induced strategy x(t) is shown in Figure 4, which by Theorem 2 (which uses the co-coercivity property of the softmax function) is guaranteed to converge to the logit equilibrium of the RPS game, which is given by x̄* = [1/3 1/3 1/3]ᵀ. In this game, the logit equilibrium coincides with the Nash equilibrium.
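To make the simulation above concrete, here is a minimal numerical sketch (not the authors' code). It assumes the exponentially-discounted score dynamics take the form ż = U(σ(z)) − z with payoff U(x) = Ax and the RPS matrix above, integrates them with forward Euler, and checks that the induced strategy approaches [1/3, 1/3, 1/3].

```python
import numpy as np

# Standard RPS payoff matrix (assumed sign convention; zero-sum, circulant).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def softmax(z, lam=1.0):
    w = np.exp(lam * (z - z.max()))   # subtract max for numerical stability
    return w / w.sum()

# Forward-Euler integration of the (assumed) exponentially-discounted
# score dynamics  z' = U(sigma(z)) - z  with U(x) = A x and lambda = 1.
z = np.array([0.5, -0.3, 0.1])        # arbitrary initial scores
dt, T = 0.01, 60.0
for _ in range(int(T / dt)):
    x = softmax(z)
    z = z + dt * (A @ x - z)

print(softmax(z))   # approaches the logit equilibrium [1/3, 1/3, 1/3]
```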
Next, we rely on a slightly modiï¬ed result in [57] to show that the Lipschitzness of the softmax function can be directly used to conclude the convergence of the score dynamics (43) for certain classes of games. | 1704.00805#46 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 47 | [37] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600â612, April 2004.
[38] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074â2082, 2016.
[39] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229â256, 1992.
[40] Ian H. Witten, Radford M. Neal, and John G. Cleary. Arithmetic coding for data compression. Commun. ACM, 30(6):520â540, June 1987.
11
[41] Paul Wohlhart, Martin Kostinger, Michael Donoser, Peter M. Roth, and Horst Bischof. Optimiz- ing 1-nearest prototype classiï¬ers. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2013. | 1704.00648#47 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 47 | Assumption 2. U ∘ σ is ‖·‖∞-contractive, that is, there exists a constant L ∈ (0, 1) such that for all score variables z, z′ ∈ ℝⁿ,
‖(U ∘ σ)(z) − (U ∘ σ)(z′)‖∞ ≤ L ‖z − z′‖∞.   (50)
Proposition 6. (Theorem 4, [57]) Under Assumption 2, the unique fixed point z̄* of U ∘ σ is globally asymptotically stable for (44). Moreover, x(t) = σ(z(t)) converges to the logit equilibrium x̄* = σ(z̄*) of the game 𝒢.
The above proposition essentially states that the conver- gence of the exponentially-discounted score dynamics (44) relies on, individually, the Lipschitzness of the softmax func- tion Ï and the gameâs payoff vector U . We illustrate this dependency using the following example. | 1704.00805#47 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 48 | [42] Eyal Yair, Kenneth Zeger, and Allen Gersho. Competitive learning and soft competition for vector quantizer design. IEEE transactions on Signal Processing, 40(2):294â309, 1992. [43] Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quanti- zation: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044, 2017.
12
# A Image Compression Details
# A.1 Architecture
We rely on a variant of the compressive autoencoder proposed recently in [30], using convolutional neural networks for the image encoder and image decoder³. The first two convolutional layers in the image encoder each downsample the input image by a factor 2 and collectively increase the number of channels from 3 to 128. This is followed by three residual blocks, each with 128 filters. Another convolutional layer then downsamples again by a factor 2 and decreases the number of channels to c, where c is a hyperparameter ([30] use 64 and 96 channels). For a w × h-dimensional input image, the output of the image encoder is the w/8 × h/8 × c-dimensional bottleneck tensor.
Ã
à | 1704.00648#48 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 48 | Example 2. By equivalence of norms, σ is ‖·‖∞-contractive if nλ < 1. Then for any game where the payoff vector U is a ‖·‖∞-contraction, U ∘ σ is a ‖·‖∞-contraction. Proposition 6 implies that the induced strategy x(t) = σ(z(t)) converges to the logit equilibrium x̄* ∈ Δ^{n−1}.
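A quick numerical sanity check of this contraction condition is sketched below. It assumes U(x) = Ax with the RPS matrix of Example 1 and a softmax with inverse temperature λ = 0.2 (so nλ < 1), and simply estimates the ∞-norm Lipschitz constant of U ∘ σ from random sample pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
lam = 0.2   # inverse temperature; here n * lam = 0.6 < 1

def softmax(z, lam):
    w = np.exp(lam * (z - z.max()))
    return w / w.sum()

def F(z):
    return A @ softmax(z, lam)   # (U o sigma)(z) with U(x) = A x (assumed)

# Empirically estimate the infinity-norm Lipschitz constant of U o sigma.
ratios = []
for _ in range(10000):
    z, zp = rng.normal(size=3) * 5, rng.normal(size=3) * 5
    num = np.linalg.norm(F(z) - F(zp), np.inf)
    den = np.linalg.norm(z - zp, np.inf)
    ratios.append(num / den)

print(max(ratios))   # well below 1, consistent with an infinity-norm contraction
```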
â
# VII. CONCLUSION AND OPEN PROBLEMS
In this paper we have presented a thorough analysis of the softmax function using tools from convex analysis and monotone operator theory. We have shown that the softmax function is the monotone gradient map of the log-sum-exp function and that the inverse temperature parameter λ determines the Lipschitz and co-coercivity properties of the softmax function. These properties allow for convenient constructions of convergence guarantees for score dynamics in general classes of games (see [21]). We note that the structure of the reinforcement learning scheme is similar to those that arise in bandit and online learning (such as the Follow-the-Regularized-Leader (FTRL) and mirror descent algorithms [49]). We hope that researchers could adapt our results presented here and apply them to their domain-specific problems.
9 | 1704.00805#48 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 49 | Ã
Ã
The image decoder then mirrors the image encoder, using upsampling instead of downsampling, and deconvolutions instead of convolutions, mapping the bottleneck tensor into a w × h-dimensional output image. In contrast to the "subpixel" layers [26, 27] used in [30], we use standard deconvolutions for simplicity.
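A schematic PyTorch sketch of this encoder/decoder layout is given below. Kernel sizes, strides, padding, activation placement and the residual-block internals are assumptions for illustration; only the overall structure (two stride-2 convolutions, three 128-filter residual blocks, a final stride-2 convolution down to c channels, and a mirrored deconvolutional decoder) follows the description above.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # Simple residual block with 128 filters (internal details are assumptions).
    def __init__(self, ch=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class Encoder(nn.Module):
    # 3 -> 128 channels via two stride-2 convs, three residual blocks,
    # then one more stride-2 conv down to c channels (w/8 x h/8 x c output).
    def __init__(self, c=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 128, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            ResBlock(), ResBlock(), ResBlock(),
            nn.Conv2d(128, c, 5, stride=2, padding=2))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Mirror of the encoder using standard deconvolutions (transposed convs).
    def __init__(self, c=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(c, 128, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(inplace=True),
            ResBlock(), ResBlock(), ResBlock(),
            nn.ConvTranspose2d(128, 128, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 3, 5, stride=2, padding=2, output_padding=1))
    def forward(self, z):
        return self.net(z)

x = torch.randn(1, 3, 64, 64)
z = Encoder()(x)          # -> (1, c, 8, 8), i.e. w/8 x h/8 x c
print(z.shape, Decoder()(z).shape)
```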
# A.2 Hyperparameters
We do vector quantization to L = 1000 centers, using (pw, ph) = (2, 2), i.e., m = d/(2 2).We trained different combinations of β and c to explore different rate-distortion tradeoffs (measuring distortion in MSE). As β controls to which extent the network minimizes entropy, β directly controls bpp (see top left plot in Fig. 3). We evaluated all pairs (c, β) with c , } and selected 5 representative pairs (models) with average bpps roughly corresponding to uniformly spread points in the interval [0.1, 0.8] bpp. This deï¬nes a âquality indexâ for our model family, analogous to the JPEG quality factor. | 1704.00648#49 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 49 | 9
Finally, for many applications in reinforcement learning, it is desirable to use a generalized version of the softmax function given by,
σ_i(z) = exp(λ_i z_i) / Σ_{j=1}^{n} exp(λ_j z_j),   1 ≤ i ≤ n.   (51)
Here, each strategy i is associated with an inverse temperature constant λi > 0, which can be adjusted independently to improve an agentâs learning performance. The relationship between the individual parameters λi with the convergence properties of score dynamics under the choice rule given by (51) has been investigated in [57] but is not yet fully characterized at this point. It is of interest to extend the results presented in this paper for generalized versions of the softmax function [60] or adopt a monotone operator theoretic approach to analyze alternative forms of choice maps [61].
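For illustration, a minimal implementation of the generalized choice rule (51) with per-strategy inverse temperatures might look as follows; the max-shift is only for numerical stability and cancels in the ratio.

```python
import numpy as np

def generalized_softmax(z, lam):
    """Softmax with a separate inverse temperature lam[i] per strategy, as in (51)."""
    z = np.asarray(z, dtype=float)
    lam = np.asarray(lam, dtype=float)
    s = lam * z
    w = np.exp(s - s.max())   # shift by the max for numerical stability
    return w / w.sum()

print(generalized_softmax([1.0, 2.0, 0.5], [1.0, 0.1, 2.0]))
```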
# REFERENCES
[1] H. Young and S. Zamir, Handbook of Game Theory, Volume 4, 1st ed. Amsterdam: Elsevier, North-Holland, 2015.
[2] W. H. Sandholm, Population Games and Evolutionary Dynamics. Cam- bridge, MA, USA: MIT Press, 2010.
[3] J. Goeree, C. Holt and T. Palfrey, Quantal response equilibrium: A Stochastic Theory of Games. Princeton University Press, 2016. | 1704.00805#49 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 50 | We experimented with the other training parameters on a setup with c = 32, which we chose as follows. In the ï¬rst stage we train for 250k iterations using a learning rate of 1eâ4. In the second stage, we use an annealing schedule with T = 50k, KG = 100, over 800k iterations using a learning rate of 1eâ5. In both stages, we use a weak l2 regularizer over all learnable parameters, with λ = 1eâ12.
# A.3 Effect of Vector Quantization and Entropy Loss
[Plot: PSNR (dB) versus rate (bpp) for Vector (β > 0), Scalar (β > 0), Vector (β = 0), and JPEG.]
Figure 2: PSNR on ImageNET100 as a function of the rate for 2 1 JPEG is included for reference. 2-dimensional centers (Vector), for 2-dimensional centers without entropy loss (β = 0). à 1-dimensional centers (Scalar), and for 2 à à | 1704.00648#50 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 50 | [3] J. Goeree, C. Holt and T. Palfrey, Quantal response equilibrium: A Stochastic Theory of Games. Princeton University Press, 2016.
[4] R. Sutton and A. Barto, Reinforcement Learning: An Introduction. Cambridge, MA, USA: MIT Press, 1998.
[5] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016.
[6] C. M. Bishop, Pattern Recognition and Machine Learning. Secaucus, NJ, USA: Springer, 2006.
[7] S. Boyd and L. Vandenberghe, Convex optimization, 1st ed. Cambridge, UK: Cambridge University Press, 2004.
[8] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course. Norwell, MA: Kluwer, 2004.
[9] D. Bloembergen, K. Tuyls, D. Hennes, and M. Kaisers, âEvolutionary dynamics of multi-agent learning: A surveyâ, J. Artif. Intell. Res., vol. 53, no. 1, pp. 659-697, May 2015.
[10] G. Weiss, Multiagent Systems, 2nd ed. Cambridge, MA, USA: MIT Press, 2013. | 1704.00805#50 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 51 | To investigate the effect of vector quantization, we trained models as described in Section 4, but instead of using vector quantization, we set L = 6 and quantized to 1 1-dimensional (scalar) centers, à i.e., (ph, pw) = (1, 1), m = d. Again, we chose 5 representative pairs (c, β). We chose L = 6 to get 1000. approximately the same number of unique symbol assignments as for 2
Ã
2 centers for c To investigate the effect of the entropy loss, we trained models using 2 à (as described above), but used β = 0. â
8, 16, 32, 48 { Fig. 2 shows how both vector quantization and entropy loss lead to higher compression rates at a given reconstruction MSE compared to scalar quantization and training without entropy loss, respectively.
}
3We note that the image encoder (decoder) refers to the left (right) part of the autoencoder, which encodes (decodes) the data to (from) the bottleneck (not to be confused with the symbol encoder (decoder) in Section 3).
13
â
# A.4 Effect of Annealing | 1704.00648#51 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 51 | [10] G. Weiss, Multiagent Systems, 2nd ed. Cambridge, MA, USA: MIT Press, 2013.
[11] E. Alpaydin, Introduction to Machine Learning, 3rd ed. The MIT Press, 2014, p. 264.
[12] R. Luce, Individual Choice Behavior: A Theoretical Analysis. NY, Wiley, 1959.
[13] J. Bridle, âProbabilistic Interpretation of Feedforward Classiï¬cation Network Outputs, with Relationships to Statistical Pattern Recognitionâ, Neurocomputing: Algorithms, Architectures and Applications, F. Soulie and J. Herault, eds., pp. 227-236, 1990.
[14] R. McKelvey and T. Palfrey, Quantal response equilibria for normal form games, 1st ed. Pasadena, Calif.: Division of the Humanities and Social Sciences, California Institute of Technology, 1994.
[15] J. Smith and G. Price, âThe Logic of Animal Conï¬ictâ, Nature, vol. 246, no. 5427, pp. 15-18, 1973.
[16] J. Hofbauer, K. Sigmund, âEvolutionary Games and Population Dynam- icsâ, Cambridge University Press, 1998. | 1704.00805#51 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 52 | 13
â
# A.4 Effect of Annealing
[Plots: entropy loss for three β values, soft and hard PSNR (dB), and gap(t), σ versus training iteration (0–300k).]
Figure 3: Entropy loss for three β values, soft and hard PSNR, as well as gap(t) and Ï as a function of the iteration t.
# A.5 Data Sets and Quality Measure Details
Kodak [1] is the most frequently employed dataset for analyzing image compression performance in recent years. It contains 24 color 768 × 512 images covering a variety of subjects, locations and lighting conditions.
B100 [31] is a set of 100 content diverse color 481 × 321 test images from the Berkeley Segmentation Dataset [22].
Urban100 [14] has 100 color images selected from Flickr with labels such as urban, city, architecture, and structure. The images are larger than those from B100 or Kodak, in that the longer side of an image is always bigger than 992 pixels. Both B100 and Urban100 are commonly used to evaluate image super-resolution methods. | 1704.00648#52 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 52 | [16] J. Hofbauer, K. Sigmund, âEvolutionary Games and Population Dynam- icsâ, Cambridge University Press, 1998.
[17] J. Weibull, â Evolutionary Game Theoryâ. MIT Press, Cambridge, 1995. [18] J. Hofbauer and W. H. Sandholm, âStable games and their dynamicsâ, Journal of Economic Theory, vol. 144, no. 4, pp. 1665-1693, 2009. [19] J. Hofbauer and E. Hopkins, âLearning in perturbed asymmetric gamesâ,
Games Economic Behav., vol. 52, pp. 133-152, 2005.
[20] A. Kianercy and A. Galstyan, âDynamics of Boltzmann Q-learning in two-player two-action gamesâ, Phys. Rev. E, vol. 85, no. 4, pp. 1145- 1154, 2012.
[21] B. Gao and L. Pavel, âOn Passivity, Reinforcement Learning and Higher- Order Learning in Multi-Agent Finite Gamesâ, arXiv:1808.04464 [cs, math], Aug. 2018. | 1704.00805#52 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 53 | ImageNET100 contains 100 images randomly selected by us from ImageNET [25], also downsam- pled and cropped, see above.
Quality measures. PSNR (peak signal-to-noise ratio) is a standard measure in direct monotonous relation with the mean square error (MSE) computed between two signals. SSIM and MS-SSIM are the structural similarity index [37] and its multi-scale SSIM computed variant [36] proposed to measure the similarity of two images. They correlate better with human perception than PSNR.
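As a small illustration of the PSNR–MSE relation mentioned above, the following sketch computes PSNR between two images (SSIM and MS-SSIM are omitted; max_val = 255 assumes 8-bit images).

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """PSNR in dB between two images of the same shape."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a noisy 8-bit image versus the original.
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
noisy = np.clip(orig + rng.normal(0, 5, orig.shape), 0, 255)
print(round(psnr(orig, noisy), 2))
```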
We compute quantitative similarity scores between each compressed image and the corresponding uncompressed image and average them over whole datasets of images. For comparison with JPEG we used libjpeg4, for JPEG 2000 we used the Kakadu implementation5, subtracting in both cases the size of the header from the ï¬le size to compute the compression rate. For comparison with BPG we used the reference implementation6 and used the value reported in the picture_data_length header ï¬eld as ï¬le size.
4http://libjpeg.sourceforge.net/ 5http://kakadusoftware.com/ 6https://bellard.org/bpg/
14
# Image Compression Performance | 1704.00648#53 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 53 | [22] P. Coucheney, B. Gaujal and P. Mertikopoulos, âPenalty-Regulated Dynamics and Robust Learning Procedures in Gamesâ, Mathematics of Operations Research, vol. 40, no. 3, pp. 611-633, 2015.
[23] P. Mertikopoulos and W. Sandholm, âLearning in Games via Reinforce- ment and Regularizationâ, Mathematics of Operations Research, vol. 41, no. 4, pp. 1297-1324, 2016.
[24] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis. Berlin: Springer-Verlag, 1998.
[25] H. Bauschke and P. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, 1st ed. New York: Springer, 2011.
[26] F. Facchinei and J.-S. Pang, Finite-dimensional Variational Inequalities and Complementarity Problems. Vol. I, Springer Series in Operations Research, Springer-Verlag, New York, 2003.
[27] J. Peypouquet. Convex optimization in normed spaces: theory, methods and examples. Springer, 2015. | 1704.00805#53 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 54 | 1.00 pat 0.90F 4 32 reer 0.98 0.85 | 30 g 0.96 oso - He 0.94 _ oe ah a. a ZF o.gob BH 0.75 is > oa bal Z y : 4 22 B 26 . 4 £= 0.90 0.70 ia iad 4 7 , 5 7 0.88 Fa 0.65 24 of 0.86 1 77 a" 0.60 22 0.2 04 0.6 0.2 04 06 0.2 04 0.6 rate [bpp] rate [bpp] rate [bpp] 1.00 peer 0.90F 4 0.98 â 0.85 4 0.96 - = 0.80 od 0.94 _ 3 SB oo9 5 075 oz Be 0.92 EG 0.7 g = 0.90 0.70 = 0.88 0.65 0.86 , 0.60 0.2 04 0.6 0.2 04 06 02 04 0.6 rate [bpp] rate [bpp] rate [bpp] 1.00 0.90 0.98 0.85 0.96 Ss 0.80 Se 0.94 - ae a 0.75 BF 0.92 G 0.75 P= 0.90 0.70 0.88 0.65 0.86 beta 0.60 22 bait 0.2 04 0.6 | 1704.00648#54 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 54 | [27] J. Peypouquet. Convex optimization in normed spaces: theory, methods and examples. Springer, 2015.
[28] J. B. Hiriart-Urruty and C. Lemar´echal: Fundamentals of Convex Anal- ysis. SpringerVerlag, Berlin 2001.
[29] J. Baillon and G. Haddad, âQuelques propri´et´es des op´erateurs angle- born´es etn-cycliquement monotonesâ, Israel Journal of Mathematics, vol. 26, no. 2, pp. 137-150, 1977.
[30] N. Daw, J. OâDoherty, P. Dayan, B. Seymour and R. Dolan, âCortical substrates for exploratory decisions in humansâ, Nature, vol. 441, no. 7095, pp. 876-879, 2006.
[31] D. Lee, âNeuroeconomics: Best to go with what you know?â, Nature, vol. 441, no. 7095, pp. 822-823, 2006. | 1704.00805#54 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00805 | 55 | [32] J. D. Cohen, S. M. McClure, and A. J. Yu, âShould I stay or should I go? How the human brain manages the trade-off between exploitation and explorationâ, Philosph. Trans. Roy. Soc. B: Bio. Sci., vol. 362, no. 1481, pp. 933-942, 2007.
[33] P. Bossaerts and C. Murawski, âFrom behavioural economics to neuroe- conomics to decision neuroscience: the ascent of biology in research on human decision makingâ, Current Opinion in Behavioral Sciences, vol. 5, pp. 37-42, 2015.
[34] D. Koulouriotis and A. Xanthopoulos, âReinforcement learning and evolutionary algorithms for non-stationary multi-armed bandit problemsâ, Applied Mathematics and Computation, vol. 196, no. 2, pp. 913-922, 2008.
[35] R. Zunino, P. Gastaldo, âAnalog implementation of the softmax func- tionâ, In IEEE International Symposium on Circuits and Systems, vol 2, pp II-117, 2002. | 1704.00805#55 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 56 | Figure 4: Average MS-SSIM, SSIM, and PSNR as a function of the rate for the ImageNET100, Urban100, B100 and Kodak datasets.
# Image Compression Visual Examples
An online supplementary of visual examples is available at http://www.vision.ee.ethz.ch/ ~aeirikur/compression/visuals2.pdf, showing the output of compressing the ï¬rst four images of each of the four datasets with our method, BPG, JPEG, and JPEG 2000, at low bitrates.
15
A.8 DNN Compression: Entropy and Histogram Evolution
# Entropy
# Histogram H
= 4.07
x10°
[Histogram panels over symbol indices 0–70: sample entropy H = 2.90 and H = 1.58 bits per weight.]
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 56 | [36] A. L. Yuille and D. Geiger, âWinner-Take-All Mechanismsâ, In The Handbook of Brain Theory and Neural Networks, Ed. M. Arbib, MIT Press, 1995.
[37] I. M. Elfadel and J. L. Wyatt Jr., âThe softmax nonlinearity: Derivation using statistical mechanics and useful properties as a multiterminal analog circuit elementâ, In Advances in Neural Information Processing Systems 6, J. Cowan, G. Tesauro, and C. L. Giles, Eds. San Mateo, CA: Morgan Kaufmann, 1994, pp. 882-887.
[38] I. M. Elfadel, âConvex Potentials and their Conjugates in Analog Mean- Field Optimizationâ, Neural Computation, vol. 7, no. 5, pp. 1079-1104, 1995.
[39] T. Genewein and D. A. Braun, âBio-inspired feedback-circuit implemen- tation of discrete, free energy optimizing, winner-take-all computationsâ, Biological, vol. 110, no. 2, pp. 135-150, Jun. 2016. | 1704.00805#56 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00648 | 57 | Figure 5: We show how the sample entropy H(p) decays during training, due to the entropy loss term in (6), and corresponding index histograms at three time instants. Top left: Evolution of the sample entropy H(p). Top right: the histogram for the entropy H = 4.07 at t = 216. Bottom left and right: the corresponding sample histogram when H(p) reaches 2.90 bits per weight at t = 475 and the ï¬nal histogram for H(p) = 1.58 bits per weight at t = 520.
16 | 1704.00648#57 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
]
|
1704.00805 | 57 | [40] T. Michalis, âOne-vs-each approximation to softmax for scalable esti- mation of probabilitiesâ, In Advances in Neural Information Processing Systems 29, pp. 4161-4169. 2016.
[41] P. Reverdy and N. Leonard, âParameter Estimation in Softmax Decision- Making Models With Linear Objective Functionsâ, IEEE Transactions on Automation Science and Engineering, vol. 13, no. 1, pp. 54-67, 2016. [42] M. Kaisers, K. Tuyls, F. Thuijsman, S. Parsons, âAn evolutionary model of multi-agent learning with a varying exploration rate (Short Paper)â, Proc. of 8th Int. Conf. on Autonomous Agents and Multiagent Systems (AA-MAS 2009), Decker, Sichman, Sierra and Castelfranchi (eds.), Budapest, Hungary, pp. 1255-1256., 2009.
[43] A. Martins and R. F. Astudillo. âFrom softmax to sparsemax: A sparse model of attention and multi-label classiï¬cationâ, arXiv:1602.02068 [cs.CL], Feb. 2016. | 1704.00805#57 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00805 | 58 | [44] D. Leslie and E. Collins, âIndividual Q-Learning in Normal Form Gamesâ, SIAM Journal on Control and Optimization, vol. 44, no. 2, pp. 495-514, 2005.
[45] W. Sandholm, E. Dokumacı and R. Lahkar, âThe projection dynamic and the replicator dynamicâ, Games and Economic Behavior, vol. 64, no. 2, pp. 666-683, 2008.
10
[46] R. Laraki and P. Mertikopoulos, âHigher order game dynamicsâ, Journal of Economic Theory, vol. 148, no. 6, pp. 2666-2695, 2013.
[47] F. Alvarez, J. Bolte and O. Brahic, âHessian Riemannian Gradient Flows in Convex Programmingâ, SIAM Journal on Control and Optimization, vol. 43, no. 2, pp. 477-501, 2004.
[48] A. Beck and M. Teboulle, âMirror descent and nonlinear projected sub- gradient methods for convex optimizationâ, Operations Research Letters, vol. 31, no. 3, pp. 167-175, 2003. | 1704.00805#58 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00805 | 59 | [49] S. Shalev-Shwartz, âOnline Learning and Online Convex Optimizationâ, Foundations and Trends in Machine Learning, vol. 4, no. 2, pp. 107-194, 2011.
[50] E. Hazan, âIntroduction to Online Convex Optimizationâ, Foundations and Trends in Optimization, vol. 2, no. 3-4, pp. 157-325, 2016.
[51] A. Rangarajan, âSelf-annealing and self-annihilation: unifying determin- istic annealing and relaxation labelingâ, Pattern Recognition, vol. 33, no. 4, pp. 635-649, 2000.
formulation of boosting al- gorithmsâ, IEEE Trans. Pattern Anal. Mach. Intell. Feb. 25, 2010, 10.1109/TPAMI.2010.47.
[53] M. Harper, âThe replicator equation as an inference dynamicâ, arXiv:0911.1763 [math.DS], May. 2010.
[54] Y. Sato and J. Crutchï¬eld, âCoupled Replicator Equations for the dynamics of learning in multiagent systemsâ, Physical Review E, vol. 67, no. 1, 2003. | 1704.00805#59 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00805 | 60 | [55] H. K. Khalil, Nonlinear Systems, 3rd ed., Upper Siddle River, NJ: Prentice-Hall, 2002.
[56] E. Hopkins, âTwo Competing Models of How People Learn in Gamesâ, Econometrica, vol. 70, no. 6, pp. 2141-2166, 2002.
[57] R. Cominetti, E. Melo and S. Sorin, âA payoff-based learning procedure and its application to trafï¬c gamesâ, Games and Economic Behavior, vol. 70, no. 1, pp. 71-83, 2010.
[58] K. Tuyls, K. Verbeeck and T. Lenaerts, âA selection-mutation model for Q-learning in multi-agent systemsâ, in Proc. of the 2nd Int. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS), pp. 693-700, 2003.
59 M. Tokic and G. Palm, âValue-difference based exploration: Adaptive control between â¬-greedy and softmaxâ, in KI 2011: Advances in Arti- ficial Intelligence, vol. 7006. Heidelberg, Germany: Springer, 2011, pp. 335-346. | 1704.00805#60 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
]
|
1704.00109 | 0 | arXiv:1704.00109v1 [cs.LG] 1 Apr 2017
Published as a conference paper at ICLR 2017
# SNAPSHOT ENSEMBLES: TRAIN 1, GET M FOR FREE
Gao Huangâ, Yixuan Liâ, Geoff Pleiss Cornell University {gh349, yl2363}@cornell.edu, [email protected]
# Zhuang Liu Tsinghua University [email protected]
# John E. Hopcroft, Kilian Q. Weinberger Cornell University [email protected], [email protected]
# ABSTRACT | 1704.00109#0 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 1 | # John E. Hopcroft, Kilian Q. Weinberger Cornell University [email protected], [email protected]
# ABSTRACT
Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural net- work, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with tradi- tional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively.
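A minimal sketch of the training-loop structure implied by the abstract is given below; the cosine-shaped cyclic schedule and the cycle length T/M are assumptions in the spirit of the cyclic learning-rate work the paper builds on, not the authors' exact recipe.

```python
import math

def cyclic_cosine_lr(alpha0, t, T, M):
    """Cosine-annealed learning rate with M restarts over T total iterations
    (a common choice for snapshot ensembling; the exact schedule is assumed)."""
    t_cycle = T // M                      # iterations per cycle
    t_cur = t % t_cycle
    return alpha0 / 2.0 * (math.cos(math.pi * t_cur / t_cycle) + 1.0)

T, M, alpha0 = 300, 3, 0.1
snapshots = []
for t in range(T):
    lr = cyclic_cosine_lr(alpha0, t, T, M)
    # ... one SGD step with learning rate `lr` would go here ...
    if (t + 1) % (T // M) == 0:           # end of a cycle: LR is near zero
        snapshots.append(f"model_snapshot_{len(snapshots)}")  # save weights here

print(len(snapshots), "snapshots available for test-time ensembling")
```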
# INTRODUCTION | 1704.00109#1 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 2 | # INTRODUCTION
Stochastic Gradient Descent (SGD) (Bottou, 2010) and its accelerated variants (Kingma & Ba, 2014; Duchi et al., 2011) have become the de-facto approaches for optimizing deep neural networks. The popularity of SGD can be attributed to its ability to avoid and even escape spurious saddle-points and local minima (Dauphin et al., 2014). Although avoiding these spurious solutions is generally considered positive, in this paper we argue that these local minima contain useful information that may in fact improve model performance. | 1704.00109#2 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 3 | Although deep networks typically never converge to a global minimum, there is a notion of âgoodâ and âbadâ local minima with respect to generalization. Keskar et al. (2016) argue that local minima with ï¬at basins tend to generalize better. SGD tends to avoid sharper local minima because gradients are computed from small mini-batches and are therefore inexact (Keskar et al., 2016). If the learning- rate is sufï¬ciently large, the intrinsic random motion across gradient steps prevents the optimizer from reaching any of the sharp basins along its optimization path. However, if the learning rate is small, the model tends to converge into the closest local minimum. These two very different behaviors of SGD are typically exploited in different phases of optimization (He et al., 2016a). Initially the learning rate is kept high to move into the general vicinity of a ï¬at local minimum. Once this search has reached a stage in which no further progress is made, the learning rate is dropped (once or twice), triggering a descent, and ultimately convergence, to the ï¬nal local minimum. | 1704.00109#3 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
It is well established (Kawaguchi, 2016) that the number of possible local minima grows exponentially with the number of parameters, of which modern neural networks can have millions. It is therefore not surprising that two identical architectures optimized with different initializations or minibatch orderings will converge to different solutions. Although different local minima often have very similar error rates, the corresponding neural networks tend to make different mistakes.
*Authors contribute equally.
[Figure 1 panels: "Single Model, Standard LR Schedule" (left) and "Snapshot Ensemble, Cyclic LR Schedule" (right).]
Figure 1: Left: Illustration of SGD optimization with a typical learning rate schedule. The model converges to a minimum at the end of training. Right: Illustration of Snapshot Ensembling. The model undergoes several learning rate annealing cycles, converging to and escaping from multiple local minima. We take a snapshot at each minimum for test-time ensembling.
This diversity can be exploited through ensembling, in which multiple neural networks are trained from different initializations and then combined with majority voting or averaging (Caruana et al., 2004). Ensembling often leads to drastic reductions in error rates. In fact, most high-profile competitions, e.g. Imagenet (Deng et al., 2009) or Kaggle¹, are won by ensembles of deep learning architectures.
Despite its obvious advantages, the use of ensembling for deep networks is not nearly as widespread as it is for other algorithms. One likely reason for this lack of adoption may be the cost of learning multiple neural networks. Training deep networks can last for weeks, even on high performance hardware with GPU acceleration. As the training cost for ensembles increases linearly, ensembles can quickly become uneconomical for most researchers without access to industrial scale computational resources.
In this paper we focus on the seemingly-contradictory goal of learning an ensemble of multiple neural networks without incurring any additional training costs. We achieve this goal with a training method that is simple and straightforward to implement. Our approach leverages the non-convex nature of neural networks and the ability of SGD to converge to and escape from local minima on demand. Instead of training M neural networks independently from scratch, we let SGD converge M times to local minima along its optimization path. Each time the model converges, we save the weights and add the corresponding network to our ensemble. We then restart the optimization with a large learning rate to escape the current local minimum. More specifically, we adopt the cycling procedure suggested by Loshchilov & Hutter (2016), in which the learning rate is abruptly raised and then quickly lowered to follow a cosine function. Because our final ensemble consists of snapshots of the optimization path, we refer to our approach as Snapshot Ensembling. Figure 1 presents a high-level overview of this method.
In contrast to traditional ensembles, the training time for the entire ensemble is identical to the time required to train a single traditional model. During testing time, one can evaluate and average the last (and therefore most accurate) m out of M models. Our approach is naturally compatible with other methods to improve the accuracy, such as data augmentation, stochastic depth (Huang et al., 2016b), or batch normalization (Ioffe & Szegedy, 2015). In fact, Snapshot Ensembles can even be ensembled, if for example parallel resources are available during training. In this case, an ensemble of K Snapshot Ensembles yields K × M models at K times the training cost.
We evaluate the efficacy of Snapshot Ensembles on three state-of-the-art deep learning architectures for object recognition: ResNet (He et al., 2016b), Wide-ResNet (Zagoruyko & Komodakis, 2016), and DenseNet (Huang et al., 2016a). We show across four different data sets that Snapshot Ensembles almost always reduce error without increasing training costs. For example, on CIFAR-10 and CIFAR-100, Snapshot Ensembles obtain error rates of 3.44% and 17.41% respectively.
1 www.kaggle.com
# 2 RELATED WORK

As an alternative to traditional ensembles, so-called "implicit" ensembles have high efficiency during both training and testing (Srivastava et al., 2014; Wan et al., 2013; Huang et al., 2016b; Singh et al., 2016; Krueger et al., 2016). The Dropout (Srivastava et al., 2014) technique creates an ensemble out of a single model by "dropping", or zeroing, random sets of hidden nodes during each mini-batch. At test time, no nodes are dropped, and each node is scaled by the probability of surviving during training. Srivastava et al. claim that Dropout reduces overfitting by preventing the co-adaptation of nodes. An alternative explanation is that this mechanism creates an exponential number of networks with shared weights during training, which are then implicitly ensembled at test time. DropConnect (Wan et al., 2013) uses a similar trick to create ensembles at test time by dropping connections (weights) during training instead of nodes.
The recently proposed Stochastic Depth technique (Huang et al., 2016b) randomly drops layers during training to create an implicit ensemble of networks with varying depth at test time. Finally, Swapout (Singh et al., 2016) is a stochastic training method that generalizes Dropout and Stochastic Depth. From the perspective of model ensembling, Swapout creates diversified network structures for model averaging. Our proposed method similarly trains only a single model; however, the resulting ensemble is "explicit" in that the models do not share weights. Furthermore, our method can be used in conjunction with any of these implicit ensembling techniques.
Several recent publications focus on reducing the test-time cost of ensembles by transferring the "knowledge" of cumbersome ensembles into a single model (Bucilu et al., 2006; Hinton et al., 2015). Hinton et al. (2015) propose to use an ensemble of multiple networks as the target of a single (smaller) network. Our proposed method is complementary to these works as we aim to reduce the training cost of ensembles rather than the test-time cost.
Perhaps most similar to our work is that of Swann & Allinson (1998) and Xie et al. (2013), who explore creating ensembles from slices of the learning trajectory. Xie et al. introduce the horizontal and vertical ensembling method, which combines the output of networks within a range of training epochs. More recently, Jean et al. (2014) and Sennrich et al. (2016) show improvement by ensembling the intermediate stages of model training. Laine & Aila (2016) propose a temporal ensembling method for semi-supervised learning, which achieves consensus among models trained with different regularization and augmentation conditions for better generalization performance. Finally, Moghimi et al. (2016) show that boosting can be applied to convolutional neural networks to create strong ensembles. Our work differs from these prior works in that we force the model to visit multiple local minima, and we take snapshots only when the model reaches a minimum. We believe this key insight allows us to leverage more power from our ensembles.
Our work is inspired by the recent findings of Loshchilov & Hutter (2016) and Smith (2016), who show that cyclic learning rates can be effective for training convolutional neural networks. The authors show that each cycle produces models which are (almost) competitive to those learned with traditional learning rate schedules while requiring a fraction of training iterations. Although model performance temporarily suffers when the learning rate cycle is restarted, the performance eventually surpasses the previous cycle after annealing the learning rate. The authors suggest that cycling perturbs the parameters of a converged model, which allows the model to find a better local minimum. We build upon these recent findings by (1) showing that there is significant diversity in the local minima visited during each cycle and (2) exploiting this diversity using ensembles. We are not concerned with speeding up or improving the training of a single model; rather, our goal is to extract an ensemble of classifiers while following the optimization path of the final model.
# 3 SNAPSHOT ENSEMBLING
Snapshot Ensembling produces an ensemble of accurate and diverse models from a single training process. At the heart of Snapshot Ensembling is an optimization process which visits several local minima before converging to a final solution. We take model snapshots at these various minima, and average their predictions at test time.
Ensembles work best if the individual models (1) have low test error and (2) do not overlap in the set of examples they misclassify. Along most of the optimization path, the weight assignments of a neural network tend not to correspond to low test error. In fact, it is commonly observed that the validation error drops significantly only after the learning rate has been reduced, which is typically done after several hundred epochs. Our approach is inspired by the observation that training neural networks for fewer epochs and dropping the learning rate earlier has minor impact on the final test error (Loshchilov & Hutter, 2016). This seems to suggest that local minima along the optimization path become promising (in terms of generalization error) after only a few epochs.

Cyclic Cosine Annealing. To converge to multiple local minima, we follow a cyclic annealing schedule as proposed by Loshchilov & Hutter (2016). We lower the learning rate at a very fast pace, encouraging the model to converge towards its first local minimum after as few as 50 epochs. The optimization is then continued at a larger learning rate, which perturbs the model and dislodges it from the minimum. We repeat this process several times to obtain multiple convergences. Formally, the learning rate α has the form:
α(t) = f(mod(t − 1, ⌈T/M⌉)),   (1)
where t is the iteration number, T is the total number of training iterations, and f is a monotonically decreasing function. In other words, we split the training process into M cycles, each of which starts with a large learning rate, which is annealed to a smaller learning rate. The large learning rate α = f(0) provides the model enough energy to escape from a critical point, while the small learning rate α = f(⌈T/M⌉) drives the model to a well-behaved local minimum. In our experiments, we set f to be the shifted cosine function proposed by Loshchilov & Hutter (2016):
α(t) = (α0 / 2) (cos(π · mod(t − 1, ⌈T/M⌉) / ⌈T/M⌉) + 1),   (2)
where α0 is the initial learning rate. Intuitively, this function anneals the learning rate from its initial value α0 to f(⌈T/M⌉) ≈ 0 over the course of a cycle. Following this schedule, we update the learning rate at each iteration rather than at every epoch. This improves the convergence of short cycles, even when a large initial learning rate is used.

Figure 2: Training loss of 100-layer DenseNet on CIFAR-10 using a standard learning rate schedule (blue) and M = 6 cosine annealing cycles (red). The intermediate models, denoted by the dotted lines, form an ensemble at the end of training.

Snapshot Ensembling. Figure 2 depicts the training process using cyclic and traditional learning rate schedules. At the end of each training cycle, it is apparent that the model reaches a local minimum with respect to the training loss. Thus, before raising the learning rate, we take a "snapshot" of the model weights (indicated as vertical dashed black lines). After training M cycles, we have M model snapshots, each of which will be used in the final ensemble. It is important to highlight that the total training time of the M snapshots is the same as training a model with a standard schedule (indicated in blue). In some cases, the standard learning rate schedule achieves lower training loss than the cyclic schedule; however, as we will show in the next section, the benefits of ensembling outweigh this difference.
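For concreteness, the schedule and the snapshot points can be written down in a few lines of Python. This is a minimal sketch under our own conventions (a 1-based iteration index and illustrative function names), not the authors' released code:

```python
import math

def cyclic_cosine_lr(t, T, M, alpha0=0.1):
    """Learning rate of Eq. (2) at iteration t (1-based), with T total
    training iterations split into M cosine annealing cycles."""
    cycle_len = math.ceil(T / M)
    pos = (t - 1) % cycle_len                      # mod(t - 1, ceil(T/M))
    return (alpha0 / 2.0) * (math.cos(math.pi * pos / cycle_len) + 1.0)

def snapshot_iterations(T, M):
    """Iterations at which a cycle ends and the model weights are saved."""
    cycle_len = math.ceil(T / M)
    return [min(c * cycle_len, T) for c in range(1, M + 1)]
```

A training loop would query cyclic_cosine_lr at every iteration (not every epoch) and serialize the current weights whenever the iteration counter appears in snapshot_iterations, yielding the M snapshots that form the ensemble.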
Ensembling at Test Time. The ensemble prediction at test time is the average of the last m (m ≤ M) models' softmax outputs. Let x be a test sample and let h_i(x) be the softmax score of snapshot i. The output of the ensemble is a simple average of the last m models: h_Ensemble = (1/m) Σ_{i=0}^{m−1} h_{M−i}(x). We always ensemble the last m models, as these models tend to have the lowest test error.
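Since the combination rule is just an average of probability vectors, it can be sketched directly in NumPy; the function below is our own illustrative helper and assumes each snapshot is available as a callable that maps a batch of inputs to raw logits:

```python
import numpy as np

def snapshot_ensemble_predict(snapshots, x, m):
    """Average the softmax outputs of the last m of M snapshot models."""
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)         # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    probs = [softmax(f(x)) for f in snapshots[-m:]]  # last m snapshots only
    return np.mean(probs, axis=0)                    # h_Ensemble
```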
| Method | C10 | C100 | SVHN | Tiny ImageNet |
|---|---|---|---|---|
| ResNet-110 | | | | |
| Single model | 5.52 | 28.02 | 1.96 | 46.50 |
| NoCycle Snapshot Ensemble | 5.49 | 26.97 | 1.78 | 43.69 |
| SingleCycle Ensembles | 6.66 | 24.54 | 1.74 | 42.60 |
| Snapshot Ensemble (α0 = 0.1) | 5.73 | 25.55 | 1.63 | 40.54 |
| Snapshot Ensemble (α0 = 0.2) | 5.32 | 24.19 | 1.66 | 39.40 |
| Wide-ResNet-32 | | | | |
| Single model | 5.43 | 23.55 | 1.90 | 39.63 |
| Dropout | 4.68 | 22.82 | 1.81 | 36.58 |
| NoCycle Snapshot Ensemble | 5.18 | 22.81 | 1.81 | 38.64 |
| SingleCycle Ensembles | 5.95 | 21.38 | 1.65 | 35.53 |
| Snapshot Ensemble (α0 = 0.1) | 4.41 | 21.26 | 1.64 | 35.45 |
| Snapshot Ensemble (α0 = 0.2) | 4.73 | 21.56 | 1.51 | 32.90 |
| DenseNet-40 | | | | |
| Single model | 5.24† | 24.42† | 1.77 | 39.09 |
| Dropout | 6.08 | 25.79 | 1.79† | 39.68 |
| NoCycle Snapshot Ensemble | 5.20 | 24.63 | 1.80 | 38.51 |
| SingleCycle Ensembles | 5.43 | 22.51 | 1.87 | 38.00 |
| Snapshot Ensemble (α0 = 0.1) | 4.99 | 23.34 | 1.64 | 37.25 |
| Snapshot Ensemble (α0 = 0.2) | 4.84 | 21.93 | 1.73 | 36.61 |
| DenseNet-100 | | | | |
| Single model | 3.74† | 19.25† | - | - |
| Dropout | 3.65 | 18.77 | - | - |
| NoCycle Snapshot Ensemble | 3.80 | 19.30 | - | - |
| SingleCycle Ensembles | 4.52 | 18.38 | - | - |
| Snapshot Ensemble (α0 = 0.1) | 3.57 | 18.12 | - | - |
| Snapshot Ensemble (α0 = 0.2) | 3.44 | 17.41 | - | - |
Table 1: Error rates (%) on CIFAR-10 and CIFAR-100 datasets. All methods in the same group are trained for the same number of iterations. Results of our method are colored in blue, and the best result for each network/dataset pair is bolded. † indicates numbers which we take directly from Huang et al. (2016a).
CIFAR. The two CIFAR datasets (Krizhevsky & Hinton, 2009) consist of colored natural images sized at 32×32 pixels. CIFAR-10 (C10) and CIFAR-100 (C100) images are drawn from 10 and 100 classes, respectively. For each dataset, there are 50,000 training images and 10,000 images reserved for testing. We use a standard data augmentation scheme (Lin et al., 2013; Romero et al., 2014; Lee et al., 2015; Springenberg et al., 2014; Srivastava et al., 2015; Huang et al., 2016b; Larsson et al., 2016), in which the images are zero-padded with 4 pixels on each side, randomly cropped to produce 32×32 images, and horizontally mirrored with probability 0.5.
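One common way to express this augmentation scheme is with torchvision-style transforms. The snippet below is our own sketch of the description above (padding, random crop, horizontal flip); it is not the authors' released pipeline, and any normalization step is omitted:

```python
import torchvision.transforms as T

# Zero-pad 4 pixels per side, take a random 32x32 crop, flip with p = 0.5.
cifar_train_transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
])
```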
SVHN. The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) contains 32×32 colored digit images from Google Street View, with one class for each digit. There are 73,257 images in the training set and 26,032 images in the test set. Following common practice (Sermanet et al., 2012; Goodfellow et al., 2013; Huang et al., 2016a), we withhold 6,000 training images for validation, and train on the remaining images without data augmentation.

Tiny ImageNet. The Tiny ImageNet dataset³ consists of a subset of ImageNet images (Deng et al., 2009). There are 200 classes, each of which has 500 training images and 50 validation images. Each image is resized to 64×64 and augmented with random crops, horizontal mirroring, and RGB intensity scaling (Krizhevsky et al., 2012).

ImageNet. The ILSVRC 2012 classification dataset (Deng et al., 2009) consists of 1000 image classes, with a total of 1.2 million training images and 50,000 validation images.
2 Code to reproduce results is available at https://github.com/gaohuang/SnapshotEnsemble
3 https://tiny-imagenet.herokuapp.com
[Figure 3 panels: CIFAR-10 (α0 = 0.1), CIFAR-100 (α0 = 0.1), CIFAR-10 (α0 = 0.2), CIFAR-100 (α0 = 0.2); x-axis: # of snapshots; dashed line: DenseNet baseline (Huang et al., 2016).]
Figure 3: DenseNet-100 Snapshot Ensemble performance on CIFAR-10 and CIFAR-100 with restart learning rate α0 = 0.1 (left two) and α0 = 0.2 (right two). Each ensemble is trained with M = 6 annealing cycles (50 epochs each).
We adopt the same data augmentation scheme as in (He et al., 2016a; Huang et al., 2016a) and apply a 224×224 center crop to images at test time.
Architectures. We test several state-of-the-art architectures, including residual networks (ResNet) (He et al., 2016a), Wide ResNet (Zagoruyko & Komodakis, 2016) and DenseNet (Huang et al., 2016a). For ResNet, we use the original 110-layer network introduced by He et al. (2016a). Wide-ResNet is a 32-layer ResNet with 4 times as many convolutional features per layer as a standard ResNet. For DenseNet, our large model follows the same setup as (Huang et al., 2016a), with depth L = 100 and growth rate k = 24. In addition, we also evaluate our method on a small DenseNet, with depth L = 40 and k = 12. To adapt all these networks to Tiny ImageNet, we add a stride of 2 to the first layer of the models, which downsamples the images to 32×32. For ImageNet, we test the 50-layer ResNet proposed in (He et al., 2016a). We use a mini-batch size of 64.⁴
Baselines. Snapshot Ensembles incur the training cost of a single model; therefore, we compare with baselines that require the same amount of training. First, we compare against a Single Model trained with a standard learning rate schedule, dropping the learning rate from 0.1 to 0.01 halfway through training, and then to 0.001 when training is at 75%. Additionally, to compare against implicit ensembling methods, we test against a single model trained with Dropout. This baseline uses the same learning rate as above, and drops nodes during training with a probability of 0.2.
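For reference, the step schedule used by the Single Model and Dropout baselines can be sketched as a small helper (our own illustrative code, using an epoch-based interface):

```python
def baseline_lr(epoch, total_epochs, base_lr=0.1):
    """Step schedule: base_lr for the first half of training,
    base_lr/10 until 75% of training, and base_lr/100 afterwards."""
    if epoch < 0.5 * total_epochs:
        return base_lr
    if epoch < 0.75 * total_epochs:
        return base_lr * 0.1
    return base_lr * 0.01
```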
We then test the Snapshot Ensemble algorithm trained with the cyclic cosine learning rate as described in (2). We test models with the max learning rate α0 set to 0.1 and 0.2. In both cases, we divide the training process into learning rate cycles. Model snapshots are taken after each learning rate cycle. Additionally, we train a Snapshot Ensemble with a non-cyclic learning rate schedule. This NoCycle Snapshot Ensemble, which uses the same schedule as the Single Model and Dropout baselines, is meant to highlight the impact of cyclic learning rates for our method. To accurately compare with the cyclic Snapshot Ensembles, we take the same number of snapshots equally spaced throughout the training process. Finally, we compare against SingleCycle Ensembles, a Snapshot Ensemble variant in which the network is re-initialized at the beginning of every cosine learning rate cycle, rather than using the parameters from the previous optimization cycle. This baseline essentially creates a traditional ensemble, yet each network only has 1/M of the typical training time. This variant is meant to highlight the tradeoff between model diversity and model convergence.
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
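The chunk above saves a snapshot at the end of every cycle of the cyclic cosine learning rate it refers to as "(2)". A minimal sketch of that schedule, assuming the standard shifted-cosine form of Loshchilov & Hutter (2016) that Snapshot Ensembling builds on; the function name and per-epoch granularity are illustrative, not the authors' code:

```python
import math

def cyclic_cosine_lr(alpha0, t, total_epochs, num_cycles):
    """Restart at alpha0 at the start of each cycle and anneal towards zero
    over ceil(total_epochs / num_cycles) epochs, then repeat."""
    cycle_len = math.ceil(total_epochs / num_cycles)
    x = ((t - 1) % cycle_len) / cycle_len   # position within the current cycle
    return alpha0 / 2.0 * (math.cos(math.pi * x) + 1.0)

# Example: alpha0 = 0.1, B = 300 epochs, M = 6 cycles -> restart every 50 epochs.
if __name__ == "__main__":
    for epoch in (1, 25, 50, 51, 300):
        print(epoch, round(cyclic_cosine_lr(0.1, epoch, 300, 6), 4))
```

Under this schedule a snapshot of the weights is taken at the end of every cycle (epochs 50, 100, ..., 300 in the example), when the learning rate is near zero and the model has settled into a local minimum.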
1704.00109 | 31 | meant to highlight the tradeoff between model diversity and model convergence. Though SingleCycle Ensembles should in theory explore more of the parameter space, the models do not beneï¬t from the optimization of previous cycles. Training Budget. On CIFAR datasets, the training budget is B = 300 epochs for DenseNet-40 and DenseNet-100, and B = 200 for ResNet and Wide ResNet models. Snapshot variants are trained with M = 6 cycles of B/M = 50 epochs for DenseNets, and M = 5 cycles of B/M = 40 epochs for ResNets/Wide ResNets. SVHN models are trained with a budget of B = 40 epochs (5 cycles of 8 epochs). For Tiny ImageNet, we use a training budget of B = 150 (6 cycles of 25 epochs). Finally, ImageNet is trained with a budget of B = 90 epochs, and we trained 2 Snapshot variants: one with M = 2 cycles and one with M = 3. | 1704.00109#31 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
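The budget arithmetic described above (a budget of B epochs split into M cycles, one snapshot per cycle) can be made explicit with a small helper. The numbers in the comments are the ones quoted in the chunk; the helper itself is an illustrative sketch, not part of the original implementation:

```python
def snapshot_epochs(total_epochs, num_cycles):
    """Epochs at which a snapshot is saved: the end of each of the M cycles."""
    cycle_len = total_epochs // num_cycles
    return [cycle_len * (m + 1) for m in range(num_cycles)]

# DenseNets on CIFAR:   B = 300, M = 6 -> [50, 100, 150, 200, 250, 300]
# ResNet / Wide ResNet: B = 200, M = 5 -> [40, 80, 120, 160, 200]
# SVHN:                 B = 40,  M = 5 -> [8, 16, 24, 32, 40]
# Tiny ImageNet:        B = 150, M = 6 -> [25, 50, 75, 100, 125, 150]
# ImageNet:             B = 90,  M = 2 -> [45, 90]
if __name__ == "__main__":
    print(snapshot_epochs(300, 6))
    print(snapshot_epochs(90, 2))
```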
1704.00109 | 32 | 4Exceptions: ResNet-110 and Wide-ResNet are trained with batch size 128 on Tiny ImageNet. The Ima- geNet model is trained with batch size 256.
# 4.3 SNAPSHOT ENSEMBLE RESULTS
Accuracy. The main results are summarized in Table 1. In most cases, Snapshot Ensembles achieve lower error than any of the baseline methods. Most notably, Snapshot Ensembles yield an error rate of 17.41% on CIFAR-100 using large DenseNets, far outperforming the record of 19.25% under the same training cost and architecture (Huang et al., 2016a). Our method has the most success on CIFAR-100 and Tiny ImageNet, which is likely due to the complexity of these datasets: the softmax outputs are high dimensional because of the large number of classes, making it unlikely that any two models make the same predictions. Snapshot Ensembling also improves the competitive baselines on CIFAR-10 and SVHN, reducing error by 1% and 0.4% respectively with the Wide ResNet architecture.
Table 2: Validation error (%) on ImageNet. Single model: 24.01; Snapshot Ensemble (M = 2): 23.33; Snapshot Ensemble (M = 3): 23.96. | 1704.00109#32 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 33 | Table 2: Validation error (%) on ImageNet. Single model: 24.01; Snapshot Ensemble (M = 2): 23.33; Snapshot Ensemble (M = 3): 23.96.
The NoCycle Snapshot Ensemble generally has little effect on performance, and in some instances even increases the test error. This highlights the need for a cyclic learning rate for useful ensembling. The SingleCycle Ensemble has similarly mixed performance. In some cases, e.g., DenseNet-40 on CIFAR-100, the SingleCycle Ensemble is competitive with Snapshot Ensembles. However, as the model size increases to 100 layers, it does not perform as well. This is because it is difficult to train a large model from scratch in only a few epochs. These results demonstrate that Snapshot Ensembles tend to work best when utilizing information from previous cycles. Effectively, Snapshot Ensembles strike a balance between model diversity and optimization. | 1704.00109#33 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
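The contrast drawn above between Snapshot Ensembles and the SingleCycle baseline comes down to one step in the training loop: whether the weights are re-initialized at each cycle boundary or warm-started from the previous snapshot. A framework-agnostic sketch under that reading; `train_one_cycle` and `reinitialize` are assumed placeholders supplied by the caller, and the toy "model" below is just a counter of epochs seen:

```python
import copy

def build_ensemble(model, train_one_cycle, reinitialize, budget_epochs,
                   num_cycles, single_cycle=False):
    """Collect one snapshot per cosine cycle.

    single_cycle=False -> Snapshot Ensemble: each cycle warm-starts from the
    previous snapshot. single_cycle=True -> SingleCycle baseline: the model is
    re-initialized before every cycle, so each member trains from scratch but
    only for budget_epochs / num_cycles epochs.
    """
    snapshots = []
    cycle_len = budget_epochs // num_cycles
    for _ in range(num_cycles):
        if single_cycle:
            model = reinitialize(model)
        model = train_one_cycle(model, cycle_len)
        snapshots.append(copy.deepcopy(model))
    return snapshots

if __name__ == "__main__":
    snaps = build_ensemble(model=0,
                           train_one_cycle=lambda m, epochs: m + epochs,
                           reinitialize=lambda m: 0,
                           budget_epochs=300, num_cycles=6)
    print(snaps)  # [50, 100, 150, 200, 250, 300] epochs of accumulated training
```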
1704.00109 | 34 | Table 2 shows Snapshot Ensemble results on ImageNet. The Snapshot Ensemble with M = 2 achieves 23.33% validation error, outperforming the single model baseline with 24.01% validation error. It appears that 2 cycles is the optimal choice for the ImageNet dataset. Provided with the limited total training budget B = 90 epochs, we hypothesize that allocating fewer than B/2 = 45 epochs per training cycle is insufï¬cient for the model to converge on such a large dataset. Ensemble Size. In some applications, it may be beneï¬cial to vary the size of the ensemble dynamically at test time depending on available resources. Figure 3 displays the performance of DenseNet-40 on the CIFAR-100 dataset as the effective ensemble size, m, is varied. Each en- semble consists of snapshots from later cycles, as these snapshots have received the most training and therefore have likely converged to bet- ter minima. Although ensembling more models generally gives better performance, we observe signiï¬cant drops in error when the second and third models are added to the ensemble. In most cases, an ensemble of two models outperforms the baseline model. Restart Learning Rate. The | 1704.00109#34 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
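The "effective ensemble size m" discussed above corresponds to averaging the softmax outputs of the last m snapshots at test time, since those snapshots have received the most training. A minimal NumPy sketch of that averaging rule; the array names and shapes are illustrative assumptions:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def snapshot_ensemble_predict(per_snapshot_logits, m):
    """Average the softmax outputs of the last m snapshots.

    per_snapshot_logits: list of (num_examples, num_classes) arrays, ordered
    from the earliest snapshot to the final one.
    """
    probs = [softmax(l) for l in per_snapshot_logits[-m:]]
    return np.mean(probs, axis=0)          # (num_examples, num_classes)

# Toy check: six snapshots, three examples, ten classes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = [rng.normal(size=(3, 10)) for _ in range(6)]
    print(snapshot_ensemble_predict(logits, m=2).argmax(axis=1))
```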
1704.00109 | 35 | drops in error when the second and third models are added to the ensemble. In most cases, an ensemble of two models outperforms the baseline model. Restart Learning Rate. The effect of the restart learning rate can be observed in Figure 3. The left two plots show performance when using a restart learning rate of α0 = 0.1 at the beginning of each cycle, and the right two plots show α0 = 0.2. In most cases, ensembles with the larger restart learning rate perform better, presumably because the strong perturbation in between cycles increases the diversity of local minima. Varying Number of Cycles. Given a ï¬xed training budget, there is a trade-off between the number of learning rate cycles and their length. Therefore, we investigate how the number of cycles M affects the ensemble performance, given a ï¬xed training budget. We train a 40-layer DenseNet on the CIFAR-100 dataset with an initial learning rate of α0 = 0.2. We ï¬x the total training budget B = 300 epochs, and vary the value of M â {2, 4, 6, 8, 10}. As shown in Table 3, our method is relatively robust with respect to different values of M . At the extremes, M = | 1704.00109#35 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 36 | {2, 4, 6, 8, 10}. As shown in Table 3, our method is relatively robust with respect to different values of M . At the extremes, M = 2 and M = 10, we ï¬nd a slight degradation in performance, as the cycles are either too few or too short. In practice, we ï¬nd that setting M to be 4 â¼ 8 works reasonably well. Varying Training Budget. The left and middle panels of Figure 4 show the performance of Snap- shot Ensembles and SingleCycle Ensembles as a function of training budget (where the number of cycles is ï¬xed at M = 6). We train a 40-layer DenseNet on CIFAR-10 and CIFAR-100, with an ini- tial learning rate of α0 = 0.1, varying the total number of training epochs from 60 to 300. We observe | 1704.00109#36 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 37 | Table 3: Test error (%) of DenseNet-40 on CIFAR-100 with a fixed budget B = 300 epochs and a varying number of cycles M. M = 2: 22.92; M = 4: 22.07; M = 6: 21.93; M = 8: 21.89; M = 10: 22.16.
[Figure 4 plot area: panels "CIFAR-10, DenseNet-40" and "CIFAR-100, DenseNet-40" with ensemble test error (%) versus training budget B (epochs), plus a panel of ensemble test error versus number of models; curves: Snapshot Ensemble, SingleCycle Ensemble, Single Model, Snapshot ensemble (60 epochs per model cost), and true ensemble of fully trained models (300 epochs per model cost).]
Figure 4: Snapshot Ensembles under different training budgets on (Left) CIFAR-10 and (Middle) CIFAR-100. Right: Comparison of Snapshot Ensembles with true ensembles. | 1704.00109#37 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 38 | [Figure 5 plot area: four panels, CIFAR-10 (cosine annealing), CIFAR-100 (cosine annealing), CIFAR-10 (standard lr scheduling), CIFAR-100 (standard lr scheduling); each shows interpolation curves between the final model and the 1st through 5th snapshots.]
Figure 5: Interpolations in parameter space between the ï¬nal model (sixth snapshot) and all intermediate snapshots. λ = 0 represents an intermediate snapshot model, while λ = 1 represents the ï¬nal model. Left: A Snapshot Ensemble, with cosine annealing cycles (α0 = 0.2 every B/M = 50 epochs). Right: A NoCycle Snapshot Ensemble, (two learning rate drops, snapshots every 50 epochs). | 1704.00109#38 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 39 | that both Snapshot Ensembles and SingleCycle Ensembles become more accurate as training bud- get increases. However, we note that as training budget decreases, Snapshot Ensembles still yield competitive results, while the performance of the SingleCycle Ensembles degrades rapidly. These results highlight the improvements that Snapshot Ensembles obtain when the budget is low. If the budget is high, then the SingleCycle baseline approaches true ensembles and outperforms Snapshot ensembles eventually. Comparison with True Ensembles. We compare Snapshot Ensembles with the traditional ensem- bling method. The right panel of Figure 4 shows the test error rates of DenseNet-40 on CIFAR-100. The true ensemble method averages models that are trained with 300 full epochs, each with differ- ent weight initializations. Given the same number of models at test time, the error rate of the true ensemble can be seen as a lower bound of our method. Our method achieves performance that is comparable with ensembling of 2 independent models, but with the training cost of one model.
# 4.4 DIVERSITY OF MODEL ENSEMBLES | 1704.00109#39 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 40 | # 4.4 DIVERSITY OF MODEL ENSEMBLES
Parameter Space. We hypothesize that the cyclic learning rate schedule creates snapshots which are not only accurate but also diverse with respect to model predictions. We qualitatively measure this diversity by visualizing the local minima that models converge to. To do so, we linearly interpolate snapshot models, as described by Goodfellow et al. (2014). Let J (θ) be the test error of a model using parameters θ. Given θ1 and θ2 â the parameters from models 1 and 2 respectively â we can compute the loss for a convex combination of model parameters: J (λ (θ1) + (1 â λ) (θ2)), where λ is a mixing coefï¬cient. Setting λ to 1 results in a parameters that are entirely θ1 while setting λ to 0 gives the parameters θ2. By sweeping the values of λ, we can examine a linear slice of the parameter space. Two models that converge to a similar minimum will have smooth parameter interpolations, whereas models that converge to different minima will likely have a non-convex interpolation, with a spike in error when λ is between 0 and 1. | 1704.00109#40 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
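The linear interpolation J(λ θ1 + (1 − λ) θ2) described above can be sketched framework-agnostically by treating a model as a dict of parameters. The evaluation function is passed in by the caller; the toy quadratic "loss" with two minima below is only a stand-in for a real test-error evaluation:

```python
import numpy as np

def interpolate_params(theta1, theta2, lam):
    """Convex combination lam * theta1 + (1 - lam) * theta2 of two parameter
    dicts: lam = 1 recovers theta1, lam = 0 recovers theta2."""
    return {k: lam * theta1[k] + (1.0 - lam) * theta2[k] for k in theta1}

def interpolation_curve(theta1, theta2, evaluate_error, num_points=21):
    """Error along the line segment between two snapshots, swept over lambda."""
    lams = np.linspace(0.0, 1.0, num_points)
    return [(float(l), evaluate_error(interpolate_params(theta1, theta2, l)))
            for l in lams]

# Toy usage: two "minima" at w = 0 and w = 4; the error spikes in between,
# mimicking two snapshots that sit in different local minima.
if __name__ == "__main__":
    t1, t2 = {"w": 0.0}, {"w": 4.0}
    err = lambda p: min(p["w"] ** 2, (p["w"] - 4.0) ** 2)
    for lam, e in interpolation_curve(t1, t2, err, num_points=5):
        print(lam, round(e, 3))
```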
1704.00109 | 41 | Figure 5 displays interpolations between the ï¬nal model of DenseNet-40 (sixth snapshot) and all intermediate snapshots. The left two plots show Snapshot Ensemble models trained with a cyclic learning rate, while the right two plots show NoCycle Snapshot models. λ = 0 represents a model which is entirely snapshot parameters, while λ = 1 represents a model which is entirely the param- eters of the ï¬nal model. From this ï¬gure, it is clear that there are differences between cyclic and
Published as a conference paper at ICLR 2017 | 1704.00109#41 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 42 | non-cyclic learning rate schedules. Firstly, all of the cyclic snapshots achieve roughly the same error as the ï¬nal cyclical model, as the error is similar for λ = 0 and λ = 1. Additionally, it appears that most snapshots do not lie in the same minimum as the ï¬nal model. Thus the snapshots are likely to misclassify different samples. Conversely, the ï¬rst three snapshots achieve much higher error than the ï¬nal model. This can be observed by the sharp minima around λ = 1, which suggests that mixing in any amount of the snapshot parameters will worsen performance. While the ï¬nal two snapshots achieve low error, the ï¬gures suggests that they lie in the same minimum as the ï¬nal model, and therefore likely add limited diversity to the ensemble. Activation space. To further explore the diver- sity of models, we compute the pairwise corre- lation of softmax outputs for every pair of snap- shots. Figure 6 displays the average correla- tion for both cyclic snapshots and non-cyclical snapshots. Firstly, there are large correlations between the last 3 snapshots of the non-cyclic training schedule (right). These snapshots are | 1704.00109#42 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
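The pairwise correlation of softmax outputs discussed above can be computed as follows. The chunk does not specify the exact correlation measure, so this sketch assumes Pearson correlation over flattened softmax output matrices, with toy random outputs standing in for real snapshot predictions:

```python
import numpy as np

def pairwise_softmax_correlation(softmax_outputs):
    """Pearson correlation between the flattened softmax outputs of every pair
    of snapshots; returns an (M, M) matrix like the one summarized in Figure 6."""
    flat = np.stack([p.reshape(-1) for p in softmax_outputs])  # (M, N * C)
    return np.corrcoef(flat)

# Toy usage: 6 snapshots, 100 test examples, 10 classes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    outputs = [rng.dirichlet(np.ones(10), size=100) for _ in range(6)]
    print(np.round(pairwise_softmax_correlation(outputs), 2))
```

Lower off-diagonal values indicate snapshots that disagree more often, which is the diversity the cyclic schedule is meant to produce.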
1704.00109 | 43 | snapshots and non-cyclical snapshots. Firstly, there are large correlations between the last 3 snapshots of the non-cyclic training schedule (right). These snapshots are taken after dropping the learning rate, suggest- ing that each snapshot has converged to the same minimum. Though there is more diversity amongst the earlier snapshots, these snapshots have much higher error rates and are therefore not ideal for ensembling. Conversely, there is less correlation between all cyclic snapshots (left). Because all snapshots have similar accu- racy (as can be seen in Figure 5), these differ- ences in predictions can be exploited to create effective ensembles. | 1704.00109#43 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 44 | # 5 DISCUSSION
We introduce Snapshot Ensembling, a simple method to obtain ensembles of neural networks without any additional training cost. Our method exploits the ability of SGD to converge to and escape from local minima as the learning rate is lowered, which allows the model to visit several weight assignments that lead to increasingly accurate predictions over the course of training. We harness this power with the cyclical learning rate schedule proposed by Loshchilov & Hutter (2016), saving model snapshots at each point of convergence. We show in several experiments that all snapshots are accurate, yet produce different predictions from one another, and therefore are well suited for test-time ensembles. Ensembles of these snapshots significantly improve the state-of-the-art on CIFAR-10, CIFAR-100 and SVHN. Future work will explore combining Snapshot Ensembles with traditional ensembles. In particular, we will investigate how to balance growing an ensemble with new models (with random initializations) and refining existing models with further training cycles under a fixed training budget.
# ACKNOWLEDGEMENTS | 1704.00109#44 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 45 | # ACKNOWLEDGEMENTS
We thank Ilya Loshchilov and Frank Hutter for their insightful comments on the cyclic cosine-shaped learning rate. The authors are supported in part by the III-1618134, III-1526012, and IIS-1149882 grants from the National Science Foundation, US Army Research Office grant W911NF-14-1-0477, and the Bill and Melinda Gates Foundation.
# REFERENCES
Léon Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT, 2010.
Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD, 2006.
Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. Ensemble selection from libraries of models. In ICML, 2004.
Ronan Collobert, Koray Kavukcuoglu, and Cl´ement Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011. | 1704.00109#45 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 46 | Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, 2014.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML, 2013.
Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014.
Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE transactions on pattern analysis and machine intelligence, 12:993â1001, 1990. | 1704.00109#46 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 47 | Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE transactions on pattern analysis and machine intelligence, 12:993â1001, 1990.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In CVPR, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016b.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. In ECCV, 2016b.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICCV, 2015. | 1704.00109#47 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 48 | Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICCV, 2015.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007, 2014.
Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110, 2016.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Pe- ter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convo- lutional neural networks. In NIPS, 2012. | 1704.00109#48 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 49 | Anders Krogh, Jesper Vedelsby, et al. Neural network ensembles, cross validation, and active learning. In NIPS, volume 7, 1995.
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net- works without residuals. arXiv preprint arXiv:1605.07648, 2016.
Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, 2015.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013. | 1704.00109#49 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|
1704.00109 | 50 | Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with restarts. arXiv preprint arXiv:1608.03983, 2016.
Mohammad Moghimi, Mohammad Saberian, Jian Yang, Li-Jia Li, Nuno Vasconcelos, and Serge Belongie. Boosted convolutional neural networks. 2016.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning, 2011. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Edinburgh neural machine translation systems for wmt 16. arXiv preprint arXiv:1606.02891, 2016. | 1704.00109#50 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
]
|