Dataset schema (column: type, length or value range):

doi: string (lengths 10–10)
chunk-id: int64 (0–936)
chunk: string (lengths 401–2.02k)
id: string (lengths 12–14)
title: string (lengths 8–162)
summary: string (lengths 228–1.92k)
source: string (lengths 31–31)
authors: string (lengths 7–6.97k)
categories: string (lengths 5–107)
comment: string (lengths 4–398)
journal_ref: string (lengths 8–194)
primary_category: string (lengths 5–17)
published: string (lengths 8–8)
updated: string (lengths 8–8)
references: list
1711.08536
14
[3] I. Krasin, T. Duerig, N. Alldrin, V. Ferrari, S. Abu-El-Haija, A. Kuznetsova, H. Rom, J. Uijlings, S. Popov, A. Veit, S. Belongie, V. Gomes, A. Gupta, C. Sun, G. Chechik, D. Cai, Z. Feng, D. Narayanan, and K. Murphy. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2017. [4] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. [5] D. Smilkov, N. Thorat, B. Kim, F. B. Viégas, and M. Wattenberg. Smoothgrad: removing noise by adding noise. CoRR, abs/1706.03825, 2017.
1711.08536#14
No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World
Modern machine learning systems such as image classifiers rely heavily on large scale data sets for training. Such data sets are costly to create, thus in practice a small number of freely available, open source data sets are widely used. We suggest that examining the geo-diversity of open data sets is critical before adopting a data set for use cases in the developing world. We analyze two large, publicly available image data sets to assess geo-diversity and find that these data sets appear to exhibit an observable amerocentric and eurocentric representation bias. Further, we analyze classifiers trained on these data sets to assess the impact of these training distributions and find strong differences in the relative performance on images from different locales. These results emphasize the need to ensure geo-representation when constructing data sets for use in the developing world.
http://arxiv.org/pdf/1711.08536
Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, D. Sculley
stat.ML
Presented at NIPS 2017 Workshop on Machine Learning for the Developing World
null
stat.ML
20171122
20171122
[]
1711.08393
15
y_{i+1} = y_i) does not lead to a significant accuracy drop. This behavior is due to the fact that ResNets can be viewed as an ensemble of many paths (as opposed to single-path models like AlexNet [28] and VGGNet [42]), and so information can be preserved even with the deletion of paths. The results in [50] suggest that different blocks do not share strong dependencies. However, the study also shows classification errors do increase when more blocks are removed from the model during inference. We contend this is the result of their adopting a global dropping strategy for all images. We posit the best dropping schemes, which lead to correct predictions with the minimal number of blocks, must be instance-specific.

# 3.2. Policy Network for Dynamic Inference Paths
1711.08393#15
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
16
# 3.2. Policy Network for Dynamic Inference Paths

The configurations in the context of ResNets represent decisions to keep/drop each block, where each decision to drop a block corresponds to removing a subset of paths from the network. We refer to these decisions as our dropping strategy. To derive the optimal dropping strategy given an input instance, we develop a policy network to output a binary policy vector, representing the actions to keep or drop a block in a pretrained ResNet. During training, a reward is given considering both block usage and prediction accuracy, which is generated by running the ResNet with only active blocks in the policy vector. See Figure 2 for an overview. Unlike standard reinforcement learning, we train the policy to predict all actions at once. This is essentially a single-step Markov Decision Process (MDP) given the input state and can also be viewed as contextual bandit [29] or associative reinforcement learning [46]. We examine the positive impact of this design choice on scalability in Sec. 4.2. Formally, given an image x and a pretrained ResNet with K residual blocks, we define a policy of block-dropping behavior as a K-dimensional Bernoulli distribution:

$$\pi_W(u|x) = \prod_{k=1}^{K} s_k^{u_k} (1 - s_k)^{1 - u_k} \quad (1)$$
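To make the policy concrete, here is a minimal PyTorch sketch (not the authors' released code): a small stand-in policy network maps an image to K sigmoid outputs s, and a binary action vector u is sampled from the induced Bernoulli distribution of Eqn. (1). The TinyPolicyNet architecture is purely illustrative.

```python
import torch
import torch.nn as nn

class TinyPolicyNet(nn.Module):
    """Illustrative stand-in for the lightweight policy network f_pn.
    It maps an image to K per-block keep/drop likelihoods s in [0, 1]."""
    def __init__(self, num_blocks: int, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))   # s: (batch, K), cf. Eqn. (2)

K = 15                                       # e.g., residual blocks in ResNet-32
policy_net = TinyPolicyNet(num_blocks=K)
images = torch.randn(4, 3, 32, 32)           # dummy CIFAR-sized batch
s = policy_net(images)                       # per-block likelihoods
u = torch.bernoulli(s)                       # binary policy vector, Eqn. (1)
print(u.shape)  # torch.Size([4, 15]); u[k]=1 keeps block k, u[k]=0 drops it
```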
1711.08393#16
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
17
$$\pi_W(u|x) = \prod_{k=1}^{K} s_k^{u_k} (1 - s_k)^{1 - u_k}, \quad (1)$$
$$s = f_{pn}(x; W), \quad (2)$$

where f_pn denotes the policy network parameterized by weights W and s is the output of the network after the $\sigma(x) = \frac{1}{1 + e^{-x}}$ function. We choose the architecture of f_pn (details below in Sec. 4) such that the cost of running it is negligible compared to ResNet, i.e., so that policy execution overhead remains low. The k-th entry of the vector, s_k ∈ [0, 1], represents the likelihood of its corresponding residual block in the original ResNet being dropped. An action u ∈ {0, 1}^K is selected based on s. Here, u_k = 0 and u_k = 1 indicate dropping and keeping the k-th residual block, respectively. Only the blocks that are not dropped according to u will be evaluated in the forward pass. To encourage both correct predictions as well as minimal block usage, we associate the actions taken with the following reward function:

$$R(u) = \begin{cases} 1 - \left(\frac{|u|_0}{K}\right)^2 & \text{if correct} \\ -\gamma & \text{otherwise.} \end{cases} \quad (3)$$
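As a sketch of Eqn. (3) (a hypothetical helper, not the reference implementation), the reward can be computed per sample from the sampled policy u, a per-sample correctness flag, and the penalty γ:

```python
import torch

def block_drop_reward(u: torch.Tensor, correct: torch.Tensor, gamma: float) -> torch.Tensor:
    """Reward of Eqn. (3): 1 - (|u|_0 / K)^2 if correct, -gamma otherwise.

    u:       (batch, K) binary keep/drop vector
    correct: (batch,) boolean, True if the ResNet prediction was correct
    """
    K = u.size(1)
    block_usage = u.sum(dim=1) / K               # fraction of blocks kept, |u|_0 / K
    reward_correct = 1.0 - block_usage ** 2      # fewer blocks -> larger reward
    return torch.where(correct, reward_correct,
                       torch.full_like(reward_correct, -gamma))

# Example: two samples of a 15-block model
u = torch.tensor([[1., 0., 1., 1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0.],
                  [1.] * 15])
correct = torch.tensor([True, False])
print(block_drop_reward(u, correct, gamma=5.0))  # tensor([ 0.8400, -5.0000])
```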
1711.08393#17
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
18
$$R(u) = \begin{cases} 1 - \left(\frac{|u|_0}{K}\right)^2 & \text{if correct} \\ -\gamma & \text{otherwise.} \end{cases} \quad (3)$$

Here, $\left(\frac{|u|_0}{K}\right)^2$ measures the percentage of blocks utilized; when a correct prediction is produced, we incentivize block dropping by giving a larger reward to a policy that uses fewer blocks. We penalize incorrect predictions with γ, which controls the trade-off between efficiency (block usage) and accuracy (i.e., a larger value leads to more correct, but less efficient policies). We use this parameter to vary the operating point of our model, allowing different models to be trained depending on the target budget constraint. Finally, to learn the optimal parameters of the policy network, we maximize the following expected reward:

$$J = \mathbb{E}_{u \sim \pi_W}[R(u)]. \quad (4)$$

In summary, our model works as follows: f_pn is used to decide which blocks of the ResNet to keep conditioned on the input image, a prediction is generated by running a forward pass with the ResNet using only these blocks, and a reward is observed based on correctness and efficiency.

# 3.3. Training the BlockDrop Policy
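The inference flow summarized above might look like the following sketch, under the assumptions that the pretrained ResNet is exposed as a stem, a flat list of residual blocks that all preserve their input shape, and a classifier head, and that a single test-time policy is shared by the whole batch; the helper names are hypothetical:

```python
import torch
import torch.nn as nn

def blockdrop_forward(stem: nn.Module, blocks: nn.ModuleList, head: nn.Module,
                      policy_net: nn.Module, x: torch.Tensor):
    """Decide which blocks to keep with the policy network, then run only those."""
    s = policy_net(x)                  # keep/drop likelihoods, Eqn. (2)
    u = (s.mean(dim=0) > 0.5).float()  # one shared test-time policy for the batch (simplification)
    h = stem(x)
    for k, block in enumerate(blocks):
        if u[k] > 0.5:
            h = block(h)               # kept block: evaluate the residual unit
        # dropped block: identity mapping, h passes through unchanged
    return head(h), u
```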
1711.08393#18
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
19
# 3.3. Training the BlockDrop Policy

Expected gradient. To maximize Eqn. 4, we utilize policy gradient [46], one of the seminal policy search methods [9], to compute the gradients of J. In contrast to typical reinforcement learning methods where policies are sampled from a multinomial distribution [46], our policies are generated from a K-dimensional Bernoulli distribution. With u_k ∈ {0, 1}, the gradients can be derived similarly as:

$$\nabla_W J = \mathbb{E}\big[R(u)\,\nabla_W \log \pi_W(u|x)\big] = \mathbb{E}\Big[R(u)\,\nabla_W \log \prod_{k=1}^{K} s_k^{u_k}(1-s_k)^{1-u_k}\Big] = \mathbb{E}\Big[R(u)\,\nabla_W \sum_{k=1}^{K} \log\big(s_k u_k + (1-s_k)(1-u_k)\big)\Big], \quad (5)$$

where again W denotes the parameters of the policy network. We approximate the expected gradient in Eqn. 5 with Monte-Carlo sampling using all samples in a mini-batch. These gradient estimates are unbiased, but exhibit high variance [46]. To reduce variance, we utilize a self-critical baseline R(ũ) as in [39], and rewrite Eqn. 5 as:
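A minimal surrogate-loss sketch of this estimator (assuming s, u, and per-sample rewards shaped as in the earlier sketches; not the reference code): backpropagating the reward-weighted negative log-probability reproduces the gradient of Eqn. (5).

```python
import torch

def reinforce_loss(s: torch.Tensor, u: torch.Tensor, reward: torch.Tensor) -> torch.Tensor:
    """Surrogate loss whose gradient matches Eqn. (5).

    s:      (batch, K) sigmoid outputs of the policy network (requires grad)
    u:      (batch, K) sampled binary actions (treated as constants)
    reward: (batch,) rewards R(u) (treated as constants)
    """
    eps = 1e-8                                            # numerical safety
    log_prob = torch.log(s * u + (1 - s) * (1 - u) + eps).sum(dim=1)
    # Minimizing -E[R(u) * log pi] ascends the expected reward J.
    return -(reward.detach() * log_prob).mean()
```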
1711.08393#19
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
20
$$\nabla_W J = \mathbb{E}\Big[A\,\nabla_W \sum_{k=1}^{K} \log\big(s_k u_k + (1-s_k)(1-u_k)\big)\Big], \quad (6)$$

where A = R(u) − R(ũ) and ũ is defined as the maximally probable configuration under the current policy s: i.e., ũ_i = 1 if s_i > 0.5, and ũ_i = 0 otherwise [39]. We further encourage exploration by introducing a parameter α to bound the distribution s and prevent it from saturating, by creating a modified distribution s′:

$$s' = \alpha \cdot s + (1 - \alpha) \cdot (1 - s).$$

This bounds the distribution in the range 1 − α ≤ s′ ≤ α, from which we then sample the policy vector.
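The two ingredients added here could be sketched as follows (the reward_fn callable, which would run the pretrained ResNet under a given block configuration and apply Eqn. (3), is a hypothetical placeholder):

```python
import torch

def bounded_probs(s: torch.Tensor, alpha: float = 0.8) -> torch.Tensor:
    """Bound the keep/drop probabilities to [1 - alpha, alpha] to sustain exploration."""
    return alpha * s + (1 - alpha) * (1 - s)

def self_critical_advantage(reward_sampled: torch.Tensor, s: torch.Tensor, reward_fn):
    """A = R(u) - R(u_tilde), with u_tilde the most probable configuration under s."""
    u_tilde = (s > 0.5).float()
    return reward_sampled - reward_fn(u_tilde)
```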
1711.08393#20
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
21
This bounds the distribution in the range 1 − α ≤ s′ ≤ α, from which we then sample the policy vector.

Curriculum learning. Policy gradient methods are typically extremely sensitive to their initialization. Indeed, we found that starting from a randomly initialized policy and optimizing for both accuracy and block usage is not effective, due to the extremely large dimension of the search space, which scales exponentially with the total number of blocks (there are 2^K possible on/off configurations of the blocks). Note that in contrast with applications such as image captioning where ground-truth action sequences (captions) can be used to train an initial policy [39], here no such "expert examples" are available, other than the standard single execution path that executes all blocks.
1711.08393#21
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
22
to efficiently search for good action sequences, we take inspiration from the idea of curriculum learning [3]. During epoch t, for 1 ≤ t < K, we keep the first K − t blocks on, and learn a policy only for the last t blocks. As t increases, the activity of more blocks is optimized, until finally all blocks are included (i.e., when t ≥ K). Using this approach, the activation of each block is first optimized according to unmodified input features in order to assess the utility of the block, and then is gradually exposed to increasingly different feature inputs as t increases and the policy for the last t blocks is jointly trained. This procedure is efficient, and it is effective at identifying and removing blocks that are redundant for the input data instance being considered. It is similar in spirit to [37, 39], which gradually expose sequences when training with REINFORCE for text generation.
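One way to realize this schedule (a sketch under the stated assumptions, not the released code) is to overwrite the sampled policy so that the first K − t blocks are always kept during epoch t:

```python
import torch

def apply_curriculum(u: torch.Tensor, epoch_t: int) -> torch.Tensor:
    """During epoch t (1 <= t < K), force the first K - t blocks to stay on,
    so that only the last t blocks are effectively optimized by the policy."""
    K = u.size(1)
    if epoch_t < K:
        u = u.clone()
        u[:, : K - epoch_t] = 1.0   # first K - t blocks always kept
    return u
```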
1711.08393#22
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
23
Joint finetuning. After curriculum learning, our policy network is able to identify which residual blocks in the original ResNet to drop for a given input image. Though the policy network is trained to preserve accuracy as much as possible, removing blocks from the pre-trained ResNet will inevitably result in a mismatch between training and testing conditions. We therefore jointly finetune the ResNet with the policy network, so that it can adapt itself to the learned block dropping behavior. The principle of our joint training procedure is similar to that of stochastic depth [22], with the exception that the drop rates are not fixed, but are instead controlled by the policy network. Alg. 1 presents the complete training procedure for our framework.

Algorithm 1: The pseudo-code for training our network.
Input: An input image x and its label
1: Initialize the weights of policy network W randomly
2: Set epochs for curriculum learning and joint finetuning to M^cl and M^ft, respectively; and set α
1711.08393#23
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
24
and M^ft, respectively; and set α
3: for t ← 1 to M^cl do
4:   s ← f_pn(x; W)
5:   s ← α · s + (1 − α) · (1 − s)
6:   if t < K then
7:     Keep the first K − t blocks on    ▷ curriculum training
8:   end if
9:   u ∼ Bernoulli(s)
10:  Execute the ResNet according to u
11:  Evaluate reward R(u) with Eqn. 3
12:  Back-propagate gradients computed with Eqn. 6    ▷ policy network
13: end for
14: for t ← 1 to M^ft do
15:  Jointly finetune the ResNet with the policy network
16: end for

# 4. Experiment
# 4.1. Experimental Setup
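Putting the pieces together, a curriculum-phase training loop in the spirit of Algorithm 1 might look like the sketch below. It reuses the helpers sketched earlier (bounded_probs, apply_curriculum, block_drop_reward, reinforce_loss) and assumes a hypothetical run_resnet(x, u) that executes only the kept blocks and returns logits; the joint-finetuning phase and the self-critical baseline of Eqn. (6) are omitted for brevity.

```python
import torch

def train_curriculum(policy_net, run_resnet, loader, K, M_cl,
                     alpha=0.8, gamma=5.0, lr=1e-4):
    """Curriculum phase of Algorithm 1 (illustrative sketch only)."""
    optimizer = torch.optim.Adam(policy_net.parameters(), lr=lr)
    for t in range(1, M_cl + 1):
        for x, y in loader:
            s = bounded_probs(policy_net(x), alpha)      # lines 4-5
            u = apply_curriculum(torch.bernoulli(s), t)  # lines 6-9 (simplified)
            logits = run_resnet(x, u)                    # line 10
            correct = logits.argmax(dim=1).eq(y)
            reward = block_drop_reward(u, correct, gamma)  # line 11, Eqn. (3)
            # Eqn. (6) would subtract the self-critical baseline R(u_tilde) here.
            loss = reinforce_loss(s, u, reward)
            optimizer.zero_grad()
            loss.backward()                              # line 12
            optimizer.step()
    return policy_net
```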
1711.08393#24
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
25
# 4. Experiment
# 4.1. Experimental Setup

Datasets and evaluation metrics. We evaluate our method on three benchmarks: CIFAR-10, CIFAR-100 [27], and ImageNet (ILSVRC2012) [10]. The CIFAR datasets consist of 60,000 32×32 colored images, with 50,000 images for training and 10,000 for testing. They are labeled for 10 and 100 classes for CIFAR-10 and CIFAR-100, respectively. Performance is measured by classification accuracy. ImageNet contains 1.2M training images labeled for 1,000 categories. We test on the validation set of 50,000 images and report top-1 accuracy.
1711.08393#25
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
26
Pretrained ResNet. For CIFAR-10 and CIFAR-100, we experiment with two ResNet models that achieve promising results. In particular, ResNet-32 and ResNet-110 start with a convolutional layer followed by 15 and 54 residual blocks, respectively. These residual blocks, each of which contains two convolutional layers, are evenly distributed into 3 segments with down-sampling layers in between. Finally, a fully-connected layer with 10/100 neurons is applied. See [18] for details. For ImageNet, we adopt ResNet-101 with a total of 33 residual blocks, organized into four segments (i.e., [3, 4, 23, 3]). Here, each residual block contains three convolutional layers based on the bottleneck design [18] for computational efficiency. These models are pretrained to match state-of-the-art performance on the corresponding datasets when run without our policy network.
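As an illustration (not the paper's code), a standard torchvision ResNet-101 can be flattened into these 33 residual blocks, which is the granularity at which a keep/drop policy would operate:

```python
import torch.nn as nn
from torchvision.models import resnet101

# Collect the bottleneck residual blocks of ResNet-101: 3 + 4 + 23 + 3 = 33.
model = resnet101(weights=None)  # a pretrained checkpoint would be loaded in practice
blocks = nn.ModuleList(
    block
    for layer in (model.layer1, model.layer2, model.layer3, model.layer4)
    for block in layer
)
print(len(blocks))  # 33
```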
1711.08393#26
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
27
Policy network architecture. For our policy network, we use ResNets with a fraction of the depth of the base model. For CIFAR, we use a ResNet with 3 blocks (equivalently ResNet-8), while for ImageNet, we use a ResNet with 4 blocks (equivalently ResNet-10). In addition, we downsam-

| Model | Policy | CIFAR-10 Acc | K | Acc (ft) | K (ft) | CIFAR-100 Acc | K | Acc (ft) | K (ft) |
|---|---|---|---|---|---|---|---|---|---|
| ResNet-32 | FirstK | 16.6 | 10 | 84.3 | 7 | 23.3 | 13 | 66.5 | 14 |
| ResNet-32 | RandomK | 20.5 | 10 | 88.9 | 7 | 38.3 | 13 | 67.6 | 14 |
| ResNet-32 | DistributeK | 23.4 | 10 | 90.2 | 7 | 31.9 | 13 | 66.7 | 14 |
| ResNet-32 | Ours | 88.6 | 9.4 | 91.3 | 6.9 | 58.3 | 12.4 | 68.7 | 13.1 |
| ResNet-32 | Full ResNet | 92.3 | 15 | 92.3 | 15 | 69.3 | 15 | 69.3 | 15 |
| ResNet-110 | FirstK | 13.3 | 21 | 71.3 | 17 | 63.5 | 50 | 57.9 | |
| ResNet-110 | RandomK | 14.5 | 21 | 90.1 | 17 | 66.3 | 50 | 68.4 | |
| ResNet-110 | DistributeK | 13.0 | 21 | 92.7 | 17 | 49.6 | 50 | 69.9 | |
| ResNet-110 | Ours | 75.4 | 20.1 | 93.6 | 16.9 | 72.1 | 49.1 | 73.7 | |
| ResNet-110 | Full ResNet | 93.2 | 54 | 93.2 | 54 | 72.2 | 54 | | |
1711.08393#27
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
29
Table 1: Accuracy and block usage with our policies vs. heuristic baselines, with and without jointly finetuning (ft) for all methods. For fair comparisons, K is selected based on the average block usage of our method, and this can be different before and after finetuning. Note that the average value of K for our method is reported here for brevity. It is determined dynamically per image, and can be as low as 3 (out of 54) in ResNet-110 on CIFAR-10.

ple images to 112×112 as the input of the policy network for ImageNet experiments. The computation required for the policy network is 4.8% and 3.0% of the total ResNet computation for the CIFAR (ResNet-110) and ImageNet (ResNet-101) models respectively, making policy computations negligible (it takes about 0.5 ms per image on average for ImageNet). While a recurrent model (e.g., LSTM) could also serve as the policy network, we found a CNN to be more efficient with similar performance.
1711.08393#29
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
30
Implementation details. We adopt PyTorch for implementation and utilize ADAM as the optimizer. We set α to 0.8, the learning rate to 1e−4, and use a batch size of 2048 during curriculum learning. For joint finetuning, we adjust the batch size to 256 and 320 on CIFAR and ImageNet, respectively, and adjust the learning rate to 1e−5 for ImageNet. Our code is available at https://goo.gl/NqyNeN.
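For reference, the reported hyperparameters can be collected into a simple configuration dictionary (the key names below are illustrative, not taken from the released code):

```python
# Hyperparameters as reported above, gathered for quick reference.
BLOCKDROP_CONFIG = {
    "optimizer": "Adam",
    "alpha": 0.8,                                   # exploration bound for s'
    "curriculum": {"lr": 1e-4, "batch_size": 2048},
    "joint_finetune": {
        "batch_size": {"cifar": 256, "imagenet": 320},
        "lr": {"imagenet": 1e-5},
    },
}
```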
1711.08393#30
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
31
and ResNet-110 respectively, outperforming the baselines by a large margin. Furthermore, the instance-specific nature of our method allows us to capture the inherent variance in the computational requirements of our dataset. We notice a wide distribution in block usage depending on the image. With ResNet-110, nearly 15% of the images use fewer than 10 blocks, with some images using as few as 3 blocks. This variance cannot be captured by any static policies. Similar trends are observed on CIFAR-100. This confirms that dropping residual blocks with policies computed in a learned manner is indeed significantly better than heuristic dropping behaviors. The fact that RandomK performs better than FirstK is interesting, suggesting the value of having residual blocks at different segments to learn feature representations at different scales.

# 4.2. Quantitative Results

Impact of joint finetuning. Next we analyze the impact of joint finetuning (cf. Sec. 3.3) for both our approach and the baselines, denoted ft in Table 1.
1711.08393#31
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
32
Learned policies vs. heuristics. We compare our block dropping strategy to the following alternative methods: (1) FirstK, which keeps only the first K residual blocks active; (2) RandomK, which keeps K randomly selected residual blocks active; (3) DistributeK, which evenly distributes K blocks across all segments. For all baselines, we choose K to match the average number of blocks used by BlockDrop, rounding up as needed. DistributeK allows us to see if feature combinations of different blocks learned by BlockDrop are better than features learned from the restricted set of early blocks of each segment. This setting resembles the allowable feature combinations from early stopping models applied to ResNets. The results in Table 1 highlight the advantage of our instance-specific policy. On CIFAR-10, the learned policies give an accuracy of 88.6% and 75.4% using an average of 9.4 and 20.1 blocks from the original ResNet-32
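Sketches of the three heuristic baselines are shown below (hypothetical helpers; FirstK and RandomK follow directly from the text, while the per-segment allocation in DistributeK is one plausible reading of "evenly distributes"):

```python
import torch

def firstk_policy(K: int, k: int) -> torch.Tensor:
    """Keep only the first k residual blocks."""
    u = torch.zeros(K)
    u[:k] = 1.0
    return u

def randomk_policy(K: int, k: int) -> torch.Tensor:
    """Keep k randomly selected residual blocks."""
    u = torch.zeros(K)
    u[torch.randperm(K)[:k]] = 1.0
    return u

def distributek_policy(K: int, k: int, segments) -> torch.Tensor:
    """Spread roughly k kept blocks across segments, keeping the earliest
    blocks of each segment (segments: list of per-segment block counts;
    the exact allocation scheme is an assumption)."""
    u = torch.zeros(K)
    start = 0
    for n in segments:
        q = round(k * n / K)        # proportional quota per segment
        u[start : start + q] = 1.0
        start += n
    return u
```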
1711.08393#32
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
33
Joint finetuning further significantly improves classification accuracy using fewer (or almost the same number of) blocks. In particular, on CIFAR-10, it offers absolute performance gains of 2.7% and 18.2% using 2.5 and 3.2 fewer blocks with ResNet-32 and ResNet-110 respectively, compared with curriculum training alone. Similarly, on CIFAR-100, joint finetuning improves accuracies and brings down block usage with ResNet-110. For ResNet-32, we observe that 0.7 more blocks on average are used after finetuning, which might be due to the challenging nature of CIFAR-100 requiring more blocks to make correct predictions. Comparing ResNet-110 with ResNet-32, we observe that the computational speed-ups are more dramatic for deeper ResNets owing to the fact that there are more blocks with potentially diverse features to select from. When built upon ResNet-110, our method outperforms the pretrained model by 0.4% and 1.5% (absolute) using 31% and 55.9% of the original
1711.08393#33
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
34
blocks on CIFAR-10 and CIFAR-100, respectively. Additionally, we observe that some images use as few as 5 blocks for inference. These results confirm that joint finetuning can indeed assist the ResNet to adapt to the removal of blocks by refining its feature representations while maintaining its capacity for instance-specific variation.

BlockDrop vs. state-of-the-art methods. We next compare BlockDrop to several techniques from the literature. We vary γ, which controls our algorithm's trade-off between block usage and accuracy, to get a range of models with varying computational requirements. We compute the average FLOPs utilized to classify each image in the test set; FLOPs are a hardware-independent metric, allowing for fair comparisons across models.³
1711.08393#34
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
35
We compare to the following state-of-the-art methods⁴: (1) ACT and (2) SACT [14], (3) PFEC [32], (4) LCCL [12]. ACT and SACT learn a halting score at the end of each block, and exit the model when a high confidence is obtained. PFEC and LCCL reduce the parameters of convolutional layers by either pruning or sparsity constraints, which is complementary to our method. Other model compression methods cited earlier do not report results on larger ResNet models, and hence are not available to compare here.
1711.08393#35
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
36
Figure 3 (a) presents the results on CIFAR. We observe that our best model offers a 0.4% performance gain in accuracy (93.6% vs. 93.2%) using 65% fewer FLOPs on average (1.73 × 10^8 vs. 5.08 × 10^8) over the original ResNet-110 model. The performance gains might result from the regularization effect of dropping blocks when finetuning the network, as in [22]. Compared to ACT and SACT, our method only requires 50% of the FLOPs to achieve the same level of precision (>93.0%). BlockDrop also exhibits a much higher variance in its FLOPs than other methods. Compared to SACT, this variance is 3 times larger, allowing some samples to achieve a speedup as high as 85% with correct predictions. Further, BlockDrop also outperforms PFEC [32] and LCCL [12], which are complementary compression techniques and can be utilized together with our framework to speed up convolution operations.
1711.08393#36
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
37
Figure 3 (b) presents the results for ImageNet. Compared with the original ResNet-101 model, BlockDrop again achieves slightly better results (76.8% vs. 76.4%) with a 6% speed-up (1.47×10^10 vs. 1.56×10^10 FLOPs). BlockDrop performs on par with the full ResNet with a 20% speed-up (1.25×10^10 vs. 1.56×10^10 FLOPs) when we relax γ slightly. This 20% acceleration without degradation in accuracy is quite promising. For example, in a high-precision

³ Note that we consider the multiply-accumulate operation as a two-step process yielding two floating point operations, and we only compute FLOPs for convolutional layers and linear layers as they account for most of the computation for inference.
⁴ For ACT and SACT on CIFAR, we train models with the authors' code. For the rest, we compare to numbers in the respective papers.
1711.08393#37
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
38
For ACT and SACT on CIFAR, we train models with the authors' code. For the rest, we compare to numbers in the respective papers. [Figure 3 plots accuracy vs. FLOPs curves for BlockDrop, SACT [14], ACT [14], LCCL [12], PFEC [32], and ResNet [18]; panels (a) CIFAR-10 and (b) ImageNet.] Figure 3: FLOPs vs. accuracy on CIFAR-10 and ImageNet. Results compared to several state-of-the-art methods. Error bars denote the standard deviation across images.
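A short worked check of the speed-up figures quoted above, computed as the relative FLOP reduction against the full ResNet-101; the FLOP counts are taken directly from the text, and the helper name is an assumption.

```python
def flop_speedup(flops_blockdrop, flops_full_resnet):
    # Speed-up reported as the relative reduction in FLOPs vs. the full model.
    return 1.0 - flops_blockdrop / flops_full_resnet

print(f"{flop_speedup(1.47e10, 1.56e10):.1%}")  # ~5.8%, quoted as a 6% speed-up
print(f"{flop_speedup(1.25e10, 1.56e10):.1%}")  # ~19.9%, quoted as a 20% speed-up
```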
1711.08393#38
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
39
Table 2: Impact of our single-step policy inference on efficiency for CIFAR-10. See text for details.

ResNet-32        Time (ms)   Speed-up
  Full ResNet    7.71        –
  Ours-single    6.56        14.9%
  Ours-seq       9.92        -28.7%

ResNet-110       Time (ms)   Speed-up
  Full ResNet    24.1        –
  Ours-single    10.9        52.3%
  Ours-seq       29.1        -20.7%

image recognition service accepting 1 billion daily API calls, such a speedup would save around 1000 hours of computation on a single P6000 GPU (0.024 seconds/image).
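A back-of-envelope check of the compute-savings claim above, under the stated assumptions of 1 billion daily calls, 0.024 seconds per image on a P6000, and roughly a 20% reduction in inference cost; how exactly the FLOP reduction translates to wall-clock time is an assumption here, so the result should only be read as an order-of-magnitude estimate.

```python
calls_per_day = 1e9           # daily API calls
seconds_per_image = 0.024     # full ResNet-101 inference on a P6000 GPU
relative_saving = 0.20        # ~20% reduction in inference cost

gpu_hours_saved = calls_per_day * seconds_per_image * relative_saving / 3600
print(f"~{gpu_hours_saved:,.0f} GPU-hours saved per day")  # ~1,300, i.e. on the order of 1000 hours
```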
1711.08393#39
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
40
image recognition service accepting 1 billion daily API calls, such a speedup would save around 1000 hours of computation on a single P6000 GPU (0.024 seconds/image). Efficiency advantage of single-step policy. The single-step design of our policy network (where the full dynamic inference path is computed without revisiting intermediate outputs of the network) has important efficiency advantages. In short, it permits lower policy execution overhead. To examine the impact empirically, we devised a variant of BlockDrop that uses traditional RL policy learning to instead make sequential decisions (see Supp. for details). We select models of both variants that attain equivalent accuracy, with the same number of blocks. To ensure fair comparison, we run all three models on the same single NVIDIA P6000 GPU while disabling other processes.
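A minimal sketch of the single-step design discussed above: the policy network makes one small forward pass over the input image and emits all keep/drop decisions before the ResNet runs, so the only policy overhead is that single pass (the sequential variant, sketched in the supplemental section further below, instead interleaves one gating decision with every block it keeps). The architecture, the 0.5 threshold, and batch size 1 are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class SingleStepPolicy(nn.Module):
    """One forward pass over the (downsampled) image -> all K keep/drop decisions at once."""
    def __init__(self, num_blocks, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(feat_dim, num_blocks)

    def forward(self, x):
        probs = torch.sigmoid(self.head(self.features(x).flatten(1)))
        return (probs > 0.5).float()          # binary keep/drop vector per image

def single_step_inference(blocks, policy, x):
    actions = policy(x)                        # policy overhead: one small forward pass
    y = x
    for i, block in enumerate(blocks):
        if actions[0, i] > 0:                  # batch size 1 assumed for clarity
            y = block(y) + y                   # only kept blocks are evaluated
    return y
```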
1711.08393#40
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
41
Table 2 shows the results for CIFAR-10. We report the time per test image and the speed-up over the original ResNet run in its entirety with no block dropping. This result confirms the efficiency advantage of our single-step design: to reach the same accuracy, we need much less overhead (e.g., less than 60% of the time required by the sequential variant). In fact, the sequential variant takes even longer to run than the original full ResNet models, yielding a negative speed-up. These results reaffirm our choice to compute all actions in one shot rather than compute them sequentially. They also stress the importance of accounting for any overhead a deep net speed-up scheme incurs to make its speed-up decisions. Figure 4: Policies learned for four ImageNet classes, volcano, orange, hamster and castle. These policies correspond to a set of active paths in the ResNet, which seem to cater to different “states” of images of the particular class. For volcano, these include features like smoke, lava, etc., while for orange they include whether it is sliced/whole, quantity. # 4.3. Qualitative Results
1711.08393#41
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
42
# 4.3. Qualitative Results Finally, we provide qualitative results based on our learned policies. We investigate the visual patterns encoded in these learned policies and then analyze the relation between block usage and instance difficulty. Visual patterns in policies. Intuitively, related images can be recognized by their similar characteristics (e.g., low-level clues like texture and color). Here, we analyze similarity in terms of the policies they utilize by sampling dominant policies for each class and visualizing samples from them. Figure 4 shows samples utilizing three different policies for four classes. It can be clearly seen that images under the same policy are similar, and different policies encode different styles, although they all correspond to the same semantic concept. For example, the first inference path for the “orange” class caters to images containing a pile of oranges, and close-up views of oranges activate the second inference path, while images containing slices of oranges are routed through the third inference path. These results indicate that different paths encode meaningful semantic visual patterns, based on the input images. While this happens in standard ResNets as well, all images necessarily utilize all the paths, and disentangling this information is not possible.
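A small sketch of the grouping used in this qualitative analysis: images are bucketed by the binary policy vector they were assigned, the most frequent ("dominant") policies within a class are kept, and a few images are sampled from each bucket for visualization. The variable names and the policy format (a tuple of 0/1 keep decisions per block) are assumptions for illustration.

```python
from collections import Counter, defaultdict
import random

def dominant_policy_samples(image_ids, policies, top_k=3, samples_per_policy=4):
    # policies[i] is the 0/1 keep vector the policy network assigned to image_ids[i]
    buckets = defaultdict(list)
    for img, pol in zip(image_ids, policies):
        buckets[tuple(pol)].append(img)
    counts = Counter({pol: len(imgs) for pol, imgs in buckets.items()})
    return {
        pol: random.sample(buckets[pol], min(samples_per_policy, len(buckets[pol])))
        for pol, _ in counts.most_common(top_k)
    }
```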
1711.08393#42
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
43
Instance difficulty. Instance difficulty is well understood in the context of prediction confidence, where easy and difficult examples are classified with high and low probabilities, respectively. Inspired by the above analysis that revealed interesting correlations between the inference policies and the visual patterns in the images, we try to characterize instance difficulty in terms of block usage. We hypothesize that simple examples (e.g. images with clear objects, without occlusions) require fewer computations to be correctly recognized. To qualitatively analyze the correlations between instance difficulty and block usage, we utilize learned policies that lead to high-confidence predictions for each class. Figure 5 illustrates samples from ImageNet. The top row contains images that are correctly classified with the least number of blocks, while samples in the bottom row utilize the most blocks. We see that samples using fewer blocks are indeed easier to identify since they contain single frontal-view objects positioned in the center, while several objects,
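A minimal sketch of the ranking implied here: among correctly classified, high-confidence images, sort by how many residual blocks the learned policy kept and take the extremes as "easy" and "hard" examples. The record format is an illustrative assumption.

```python
def easiest_and_hardest(records):
    """records: (image_id, policy) pairs for correctly classified, high-confidence
    predictions, where policy is the 0/1 keep vector chosen for that image."""
    ranked = sorted(records, key=lambda r: sum(r[1]))   # fewer active blocks first
    return ranked[0], ranked[-1]                        # "easy" extreme, "hard" extreme
```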
1711.08393#43
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
44
[Figure 5 panel titles: Goldfish – easy (23 blocks) vs. hard (29 blocks); Artichoke – easy (18 blocks) vs. hard (28 blocks); Bridge – easy (24 blocks) vs. hard (29 blocks); Spacecraft – easy (23 blocks) vs. hard (29 blocks).] Figure 5: Samples from ImageNet classes. Easy and hard samples from goldfish, artichoke, spacecraft and bridge to illustrate how block usage translates to instance difficulty. occlusion, or cluttered background occur in samples that require more blocks. This confirms our hypothesis that block usage is a function of instance difficulty. We stress that this “sorting” into easy or hard cases falls out automatically; it is learned by BlockDrop. # 5. Conclusion
1711.08393#44
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
45
# 5. Conclusion We presented BlockDrop, an approach for faster inference in ResNets by selectively choosing residual blocks to evaluate in a learned and optimized manner conditioned on inputs. In particular, we trained a policy network to predict blocks to drop in a pretrained ResNet while trying to retain the prediction accuracy. The ResNet is further jointly finetuned to produce smooth feature representations tailored for block dropping behavior. We conducted extensive experiments on CIFAR and ImageNet, observing considerable gains over existing methods in terms of the efficiency-accuracy trade-off. Further, we also observe that the policies learned encode semantic information in the images.
1711.08393#45
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
46
Acknowledgments: Rogerio Feris is supported by IARPA via DOI/IBC contract number D17PC00341. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government. Kristen Grauman is supported in part by an IBM Faculty Award and IBM Open Collaboration Award. Larry S. Davis is partially supported by the Office of Naval Research under Grant N000141612713. # References [1] M. Abdi and S. Nahavandi. Multi-residual networks. arXiv preprint arXiv:1609.05672, 2016. 2 [2] E. Bengio, P.-L. Bacon, J. Pineau, and D. Precup. Conditional computation in neural networks for faster models. In ICML Workshop on Abstraction in Reinforcement Learning, 2016. 2 [3] Y. Bengio. Deep learning of representations: Looking forward. In SLCP, 2013. 2, 5
1711.08393#46
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
47
[3] Y. Bengio. Deep learning of representations: Looking forward. In SLCP, 2013. 2, 5 [4] G. Chen, M. Chandraker, T. Han, W. Choi, and X. Yu. Learning efficient object detection models with knowledge distillation. In NIPS, 2017. 1, 2 [5] W. Chen, J. Wilson, S. Tyree, K. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In ICML, 2015. 2 [6] Y. Cheng, F. Yu, R. Feris, S. Kumar, A. Choudhary, and S. F. Chang. An exploration of parameter redundancy in deep networks with circulant projections. In ICCV, 2015. 2 [7] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016. 2
1711.08393#47
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
48
[8] J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object detection via region-based fully convolutional networks. In NIPS, 2016. 1 [9] M. P. Deisenroth, G. Neumann, J. Peters, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2013. 4 [10] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 2, 5 [11] L. Denoyer and P. Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014. 2 [12] X. Dong, J. Huang, Y. Yang, and S. Yan. More is less: A more complicated network with less inference complexity. In CVPR, 2017. 2, 7 [13] P. Felzenszwalb, R. Girshick, and D. McAllester. Cascade object detection with deformable part models. In CVPR, 2010. 3
1711.08393#48
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
49
[13] P. Felzenszwalb, R. Girshick, and D. McAllester. Cascade object detection with deformable part models. In CVPR, 2010. 3 [14] M. Figurnov, M. D. Collins, Y. Zhu, L. Zhang, J. Huang, D. Vetrov, and R. Salakhutdinov. Spatially adaptive computation time for residual networks. In CVPR, 2017. 2, 3, 7, 11 [15] A. Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016. 2, 3 [16] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. In ICLR, 2016. 1, 2 [17] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017. 1 [18] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 1, 5
1711.08393#49
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
50
[18] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 1, 5 [19] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 1, 2 [20] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. In CVPR, 2017. 2 [21] G. Huang, D. Chen, T. Li, F. Wu, L. van der Maaten, and K. Q. Weinberger. Multi-scale dense convolutional networks for efficient prediction. arXiv preprint arXiv:1703.09844, 2017. 3 [22] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016. 2, 5, 7
1711.08393#50
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
51
[23] Z. Huang and N. Wang. Data-driven sparse structure selection for deep neural networks. arXiv preprint arXiv:1707.01213, 2017. 2 [24] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360, 2016. 2 [25] Y. Ioannou, D. Robertson, J. Shotton, R. Cipolla, and A. Criminisi. Training CNNs with low-rank filters for efficient image classification. In ICLR, 2016. 1, 2 [26] S. Karayev, M. Fritz, and T. Darrell. Anytime recognition of objects and scenes. In CVPR, 2014. 2 [27] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009. 2, 5
1711.08393#51
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
52
[27] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009. 2, 5 [28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. 4 [29] J. Langford and T. Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In NIPS, 2008. 2, 4 [30] Y. LeCun, J. Denker, and S. Solla. Optimal brain damage. In NIPS, 1989. 2 [31] H. Li, S. De, Z. Xu, C. Studer, H. Samet, and T. Goldstein. Training quantized nets: A deeper understanding. In NIPS, 2017. 1, 2
1711.08393#52
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
53
[32] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient convnets. In ICLR, 2017. 1, 2, 7 [33] Z. Li, X. Wang, X. Lv, and T. Yang. SEP-Nets: Small and effective pattern networks. arXiv preprint arXiv:1706.03912, 2017. 2 [34] L. Liu and J. Deng. Dynamic deep neural networks: Optimizing accuracy-efficiency trade-offs by selective execution. arXiv preprint arXiv:1701.00299, 2017. 2 [35] M. McGill and P. Perona. Deciding how to decide: Dynamic routing in artificial neural networks. In ICML, 2017. 3 [36] A. Polyak and L. Wolf. Channel-level acceleration of deep face representations. IEEE Access, 3:2163–2175, 2015. 1, 2 [37] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks. In ICLR, 2016. 5
1711.08393#53
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
54
[38] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016. 2 [39] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel. Self-critical sequence training for image captioning. In CVPR, 2017. 4, 5 [40] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014. 2 [41] T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy, and B. Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In ICASSP, 2013. 1, 2 [42] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 4
1711.08393#54
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
55
[43] V. Sindhwani, T. Sainath, and S. Kumar. Structured transforms for small-footprint deep learning. In NIPS, 2015. 2 [44] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014. 2 [45] Y.-C. Su and K. Grauman. Leaving some stones unturned: dynamic feature prioritization for activity detection in streaming video. In ECCV, 2016. 2 [46] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT Press, Cambridge, 1998. 2, 4 [47] C. Tai, T. Xiao, X. Wang, et al. Convolutional neural networks with low-rank regularization. In ICLR, 2016. 2 [48] S. Teerapittayanon, B. McDanel, and H. Kung. BranchyNet: Fast inference via early exiting from deep neural networks. In ICPR, 2016. 3 [49] A. Ude. Integrating visual perception and manipulation for autonomous learning of object representations. Can developmental robotics yield human-like cognitive abilities?, 2012. 1
1711.08393#55
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
56
[49] A. Ude. Integrating visual perception and manipulation for autonomous learning of object representations. Can developmental robotics yield human-like cognitive abilities?, 2012. 1 [50] A. Veit, M. J. Wilber, and S. Belongie. Residual networks behave like ensembles of relatively shallow networks. In NIPS, 2016. 1, 2, 3, 4 [51] P. Viola and M. J. Jones. Robust real-time face detection. IJCV, 2004. 3 [52] D. B. Walther, B. Chai, E. Caddigan, D. M. Beck, and L. Fei-Fei. Simple line drawings suffice for functional MRI decoding of natural scene categories. PNAS, 2011. 1 [53] L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus. Regularization of neural networks using DropConnect. In ICML, 2013. 2 [54] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized convolutional neural networks for mobile devices. In CVPR, 2016. 1, 2
1711.08393#56
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
57
[55] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, 2017. 2 [56] S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In CVPR, 2016. 2 [57] R. Yu, A. Li, C.-F. Chen, J.-H. Lai, V. I. Morariu, X. Han, M. Gao, C.-Y. Lin, and L. S. Davis. NISP: Pruning networks using neuron importance score propagation. In CVPR, 2018. 2 # Supplemental Materials # Details of BlockDrop-seq (Ours-seq) We construct a sequential version of BlockDrop for dropping blocks, where the decision a_i ∈ {0, 1} to drop or keep the i-th block is conditioned on the activations of its previous block, y_{i−1}. Unlike BlockDrop, where all the actions are predicted in one shot, this model predicts one action at a time, which is a typical reinforcement learning setting. We follow the procedure to generate the halting scores in [14], and arrive at an equivalent per-block skipping score according to:
1711.08393#57
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
58
p = softmax(W pool(y_{i−1}) + b), where pool is a global average pooling operation. For fair comparisons, Ours-seq is compared to a BlockDrop model, which attains equivalent accuracy, with the same number of blocks. # Implementation Details • On CIFAR, we train the model for 5000 epochs during curriculum learning with a batch size of 2048 and a learning rate of 1e-4. We further jointly finetune the model for 1600 epochs with a batch size of 256 and a learning rate of 1e-4, which is annealed to 1e-5 for 400 epochs. • On ImageNet, the policy network is trained for 45 epochs for curriculum learning with a batch size of 2048 and a learning rate of 1e-4. We then use a batch size of 320 during joint finetuning for 10 epochs. # Detailed Results on CIFAR-10 and ImageNet We present detailed results of our method on CIFAR-10 (Table 3) and ImageNet (Table 4). We highlight the accuracy, block usage and speed-up for variants of our model compared to full ResNets.
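A minimal sketch of this per-block skipping score and the sequential inference loop it implies, assuming a PyTorch-style implementation; the two-logit head, the 0.5 keep threshold, and batch size 1 are illustrative assumptions rather than the authors' exact code.

```python
import torch.nn as nn
import torch.nn.functional as F

class BlockGate(nn.Module):
    """Per-block skipping score: p = softmax(W pool(y_{i-1}) + b)."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, 2)  # two logits: [skip, keep]

    def forward(self, y_prev):
        pooled = F.adaptive_avg_pool2d(y_prev, 1).flatten(1)  # global average pooling
        return F.softmax(self.fc(pooled), dim=1)

def sequential_inference(blocks, gates, x):
    # Decide block by block: the i-th decision is conditioned on y_{i-1},
    # so a gate must run every time a block might be executed.
    y = x
    for block, gate in zip(blocks, gates):
        p_keep = gate(y)[:, 1]
        if p_keep.item() > 0.5:   # batch size 1 assumed for clarity
            y = block(y) + y      # execute the residual block
        # otherwise skip: y passes through the identity shortcut unchanged
    return y
```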
1711.08393#58
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
59
Network                  FLOPs                    Block Usage   Accuracy   Speed-up
ResNet-32                1.38E+08 ± 0.00E+00      15.0 ± 0.0    92.3       –
ResNet-110               5.06E+08 ± 0.00E+00      54.0 ± 0.0    93.2       –
BlockDrop-32 (γ = 5)     8.66E+07 ± 1.40E+07      6.9 ± 1.6     91.3       37.2%
BlockDrop-110 (γ = 2)    1.18E+08 ± 2.46E+07      10.3 ± 2.7    91.9       76.7%
BlockDrop-110 (γ = 5)    1.51E+08 ± 3.24E+07      13.8 ± 3.5    93.0       70.1%
BlockDrop-110 (γ = 10)   1.81E+08 ± 3.43E+07      16.9 ± 3.7    93.6       64.3%

Table 3: Results of different architectures on CIFAR-10. Depending on the base ResNet architecture, speedups ranging from 37% to 76% are observed with little to no degradation in performance.
1711.08393#59
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.08393
60
Network              FLOPs                    Block Usage   Accuracy   Speed-up
ResNet-72            1.17E+10 ± 0.00E+00      24.0 ± 0.0    75.8       –
ResNet-75            1.21E+10 ± 0.00E+00      25.0 ± 0.0    75.9       –
ResNet-84            1.34E+10 ± 0.00E+00      28.0 ± 0.0    76.1       –
ResNet-101           1.56E+10 ± 0.00E+00      33.0 ± 0.0    76.4       –
BlockDrop (γ = 2)    9.85E+09 ± 3.34E+08      18.8 ± 0.8    75.2       36.9%
BlockDrop (γ = 5)    1.25E+10 ± 4.26E+08      24.8 ± 1.0    76.4       19.9%
BlockDrop (γ = 10)   1.47E+10 ± 4.02E+08      29.7 ± 0.9    76.8       5.7%

Table 4: Results of different architectures on ImageNet. BlockDrop is built upon ResNet-101, and can achieve around 20% speedup on average with γ = 5.
1711.08393#60
BlockDrop: Dynamic Inference Paths in Residual Networks
Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20\% on average, going as high as 36\% for some images, while maintaining the same 76.4\% top-1 accuracy on ImageNet.
http://arxiv.org/pdf/1711.08393
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris
cs.CV, cs.LG
CVPR 2018
null
cs.CV
20171122
20190128
[ { "id": "1602.07360" }, { "id": "1503.02531" }, { "id": "1609.05672" }, { "id": "1701.00299" }, { "id": "1603.08983" }, { "id": "1602.02830" }, { "id": "1707.01213" }, { "id": "1703.09844" }, { "id": "1706.03912" } ]
1711.06782
0
arXiv:1711.06782v1 [cs.LG] 18 Nov 2017 # Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning Benjamin Eysenbach∗†, Shixiang Gu†‡††, Julian Ibarz†, Sergey Levine†‡‡ (†Google Brain, ‡University of Cambridge, ††Max Planck Institute for Intelligent Systems, ‡‡UC Berkeley) {eysenbach,shanegu,julianibarz,slevine}@google.com # Abstract
1711.06782#0
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
1
# Abstract Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeat- edly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.2 # Introduction
1711.06782#1
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
2
# Introduction Deep reinforcement learning (RL) algorithms have the potential to automate acquisition of complex behaviors in a variety of real-world settings. Recent results have shown success on games (Mnih et al. (2013)), locomotion (Schulman et al. (2015)), and a variety of robotic manipulation skills (Pinto & Gupta (2017); Schulman et al. (2016); Gu et al. (2017)). However, the complexity of tasks achieved with deep RL in simulation still exceeds the complexity of the tasks learned in the real world. Why have real-world results lagged behind the simulated accomplishments of deep RL algorithms? One challenge with real-world application of deep RL is the scaffolding required for learning: a bad policy can easily put the system into an unrecoverable state from which no further learning is possible. For example, an autonomous car might collide at high speed, and a robot learning to clean glasses might break them. Even in cases where failures are not catastrophic, some degree of human intervention is often required to reset the environment between attempts (e.g. Chebotar et al. (2017)).
1711.06782#2
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
3
Most RL algorithms require sampling from the initial state distribution at the start of each episode. On real-world tasks, this operation often corresponds to a human resetting the environment after every episode, an expensive solution for complex environments. Even when tasks are designed so that these resets are easy (e.g. Levine et al. (2016) and Gu et al. (2017)), manual resets are necessary when the robot or environment breaks (e.g. Gandhi et al. (2017)). The bottleneck for learning many real-world tasks is not that the agent collects data too slowly, but rather that data collection stops entirely when the agent is waiting for a manual reset. To avoid manual resets caused by the environment breaking, humans often add negative rewards to dangerous states and intervene to prevent agents from taking ∗Work done as a member of the Google Brain Residency Program (g.co/brainresidency) 2Videos of our experiments: https://sites.google.com/site/mlleavenotrace/
1711.06782#3
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
4
dangerous actions. While this works well for simple tasks, scaling to more complex environments requires writing large numbers of rules for types of actions the robot should avoid. For example, a robot should avoid hitting itself, except when clapping. One interpretation of our method is as automatically learning these safety rules. Decreasing the number of manual resets required to learn a task is important for scaling up RL experiments outside simulation, allowing researchers to run longer experiments on more agents for more hours.
1711.06782#4
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
5
We propose to address these challenges by forcing our agent to “leave no trace.” The goal is to learn not only how to do the task at hand, but also how to undo it. The intuition is that the sequences of actions that are reversible are safe; it is always possible to undo them to get back to the original state. This property is also desirable for continual learning of agents, as it removes the requirements for manual resets. In this work, we learn two policies that alternate between attempting the task and resetting the environment. By learning how to reset the environment at the end of each episode, the agent we learn requires significantly fewer manual resets. Critically, our value-based reset policy restricts the agent to only visit states from which it can return, intervening to prevent the forward policy from taking potentially irreversible actions. The set of states from which the agent knows how to return grows over time, allowing the agent to explore more parts of the environment as soon as it is safe to do so.
1711.06782#5
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
6
The main contribution of our work is a framework for continually and jointly learning a reset policy in concert with a forward task policy. We show that this reset policy not only automates resetting the environment between episodes, but also helps ensure safety by reducing how frequently the forward policy enters unrecoverable states. Incorporating uncertainty into the value functions of both the forward and reset policy further allows us to make this process risk-aware, balancing exploration against safety. Our experiments illustrate that this approach reduces the number of “hard” manual resets required during learning of a variety of simulated robotic skills. # 2 Related Work Our method builds off previous work in areas of safe exploration, multiple policies, and automatic curriculum generation. Previous work has examined safe exploration in small MDPs. Moldovan & Abbeel (2012a) examine risk-sensitive objectives for MDPs, and propose a new objective of which minmax and expectation optimization are both special cases. Moldovan & Abbeel (2012b) consider safety using ergodicity, where an action is safe if it is still possible to reach every other state after having taken that action. These methods are limited to small, discrete MDPs where exact planning is straightforward. Our work includes a similar notion of safety, but can be applied to solve complex, high-dimensional tasks.
1711.06782#6
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
7
Previous work has also used multiple policies for safety and for learning complex tasks. Han et al. (2015) learn a sequence of forward and reset policies to complete a complex manipulation task. Similar to Han et al. (2015), our work learns a reset policy to undo the actions of the forward policy. While Han et al. (2015) engage the reset policy when the forward policy fails, we preemptively predict whether the forward policy will fail, and engage the reset policy before allowing the forward policy to fail. Similar to our approach, Richter & Roy (2017) also propose to use a safety policy that can trigger an “abort” to prevent a dangerous situation. However, in contrast to our approach, Richter & Roy (2017) use a heuristic, hand-engineered reset policy, while our reset policy is learned simultaneously with the forward policy. Kahn et al. (2017) uses uncertainty estimation via bootstrap to provide for safety. Our approach also uses bootstrap for uncertainty estimation, but unlike our method, Kahn et al. (2017) does not learn a reset or safety policy.
1711.06782#7
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
8
Learning a reset policy is related to curriculum generation: the reset controller is engaged in in- creasingly distant states, naturally providing a curriculum for the reset policy. Prior methods have studied curriculum generation by maintaining a separate goal setting policy or network (Sukhbaatar et al., 2017; Matiisen et al., 2017; Held et al., 2017). In contrast to these methods, we do not set explicit goals, but only allow the reset policy to abort an episode. When learning the forward and reset policies jointly, the training dynamics of our reset policy resemble those of reverse curriculum generation (Florensa et al., 2017), but in reverse. In particular, reverse curriculum learning can be viewed as a special case of our method: our reset policy is analogous to the learner in the reverse curriculum, while the forward policy plays a role similar to the initial state selector. However, reverse curriculum generation requires that the agent can be reset to any state (e.g., in a simulator), while our method is specifically aimed at streamlining real-world learning, through the use of uncertainty estimation and early aborts. # 3 Preliminaries
1711.06782#8
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
9
In this section, we discuss the episodic RL problem setup, which motivates our proposed joint learning of forward and reset policies. RL considers decision making problems that consist of a state space S, action space A, transition dynamics P(s′ | s, a), an initial state distribution p_0(s), and a scalar reward function r(s, a). In episodic, finite horizon tasks, the objective is to find the optimal policy π*(a | s) that maximizes the expected sum of γ-discounted returns, E_π[∑_{t=0}^{T} γ^t r(s_t, a_t)], where s_0 ∼ p_0, a_t ∼ π(a_t | s_t), and s_{t+1} ∼ P(s′ | s_t, a_t). Typically, RL training routines involve iteratively sampling new episodes, where at the end of each episode, a new starting state s_0 is sampled from a given initial state distribution p_0. In practical applications, such as robotics, this procedure effectively involves executing some hard-coded reset policy or human interventions to manually reset the agent. Our work is aimed at avoiding these manual resets by learning an additional reset policy that satisfies the following property: when the reset policy is executed from any state, the distribution over final states matches the initial state distribution p_0. If we learn such a reset policy, then the agent never requires querying the black-box distribution p_0 and can continually learn on its own.
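For reference, the episodic objective just described can be written in display form as follows; this is only a restatement of the notation above, not an addition to the method.

```latex
% Episodic, finite-horizon RL objective for the forward policy
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t)\right],
\qquad s_0 \sim p_0(\cdot),\quad a_t \sim \pi(\cdot \mid s_t),\quad s_{t+1} \sim \mathcal{P}(\cdot \mid s_t, a_t).
```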
1711.06782#9
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
10
# 4 Continual Learning with Joint Forward-Reset Policies Our method for continual learning relies on jointly learning a forward policy and reset policy, and using early aborts to avoid manual resets. The forward policy aims to maximize the task reward, while the reset policy takes actions to reset the environment. Both have the same state and action spaces, but are given different reward objectives. The forward policy reward r_f(s, a) is the usual task reward given by the environment. The reset policy reward r_r(s) is designed to be large for states with high density under the initial state distribution. For example, in locomotion experiments, the reset reward is large when the agent is standing upright. To make this set-up applicable for solving the task, we make the weak assumption on the task environment that there exists a policy that can reset from at least one of the reachable states with maximum reward in the environment. This assumption ensures that it is possible to solve the task without any manual resets.
1711.06782#10
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
11
We choose off-policy actor-critic methods as the base RL algorithm (Silver et al., 2014; Lillicrap et al., 2015), since their off-policy learning allows sharing of the experience between the forward and reset policies. Additionally, the Q-functions can be used to signal early aborts. Our method can also be used directly with any other Q-learning methods (Watkins & Dayan, 1992; Mnih et al., 2013; Gu et al., 2017; Amos et al., 2016; Metz et al., 2017). # 4.1 Early Aborts The reset policy learns how to transition from the forward policy’s final state back to an initial state. However, in challenging domains with irreversible states, the reset policy may be unable to reset from some states, and a costly “hard” reset may be required. The process of learning the reset policy offers us a natural mechanism for reducing these hard resets. We observe that, for states that are irrecoverable, the value function of the reset policy will be low. We can therefore use this value function (or, specifically, its Q-function) as a metric to determine when to terminate the forward policy, performing an early abort.
1711.06782#11
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
12
Before an action proposed by the forward policy is executed in the environment, it must be “approved” by the reset policy. In particular, if the reset policy’s Q value for the proposed action is too small, then an early abort is performed: the proposed action is not taken and the reset policy takes control. Formally, early aborts restrict exploration to a ‘safe’ subspace of the MDP. Let E ⊆ S × A be the set of (possibly stochastic) transitions, and let Qreset(s, a) be the Q values of our reset policy at state s taking action a. The subset of transitions E* ⊆ E allowed by our algorithm is E* ≜ {(s, a) ∈ E | Qreset(s, a) > Qmin} (1) # Algorithm 1 Joint Training
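To make Eq. (1) concrete, the sketch below expresses the early-abort rule as a single predicate; the `reset_agent.q_value` interface and all names are illustrative assumptions, not the authors' released code.

```python
def allow_forward_action(reset_agent, state, action, q_min):
    """Early-abort rule from Eq. (1): the forward policy's proposed action is
    executed only if the reset policy still expects to be able to recover,
    i.e. Q_reset(s, a) > Q_min; otherwise control switches to the reset policy."""
    return reset_agent.q_value(state, action) > q_min
```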
1711.06782#12
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
13
E* ≜ {(s, a) ∈ E | Qreset(s, a) > Qmin} (1)

# Algorithm 1 Joint Training
1: repeat
2:   for max_steps_per_episode do
3:     a ← FORWARD_AGENT.ACT(s)
4:     if RESET_AGENT.Q(s, a) < Qmin then
5:       Switch to reset policy. ▷ Early Abort
6:     (s, r) ← ENVIRONMENT.STEP(a)
7:     Update the forward policy.
8:   for max_steps_per_episode do
9:     a ← RESET_AGENT.ACT(s)
10:    (s, r) ← ENVIRONMENT.STEP(a)
11:    Update the reset policy.
12:  Let S_final be the final states from the last N reset episodes.
13:  if S_final ∩ Sreset = ∅ then
14:    s ← ENVIRONMENT.RESET() ▷ Hard Reset

Noting that Q(s) = max_{a∈A} Q(s, a), we see that the states allowed under our algorithm, S* ⊆ S, are those states with at least one transition in E*: S* ≜ {s | (s, a) ∈ E* for at least one a ∈ A} (2)
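The Python sketch below mirrors Algorithm 1 at a high level. It is a minimal sketch under assumed interfaces: `forward_agent` and `reset_agent` expose `act`, `q_value`, and `update`; `env.step` returns `(next_state, reward)`; `reset_reward` computes r_r(s); `is_reset_state` tests membership in Sreset; and `env.reset()` is the costly hard reset. None of these names come from the paper's code.

```python
def joint_training(env, forward_agent, reset_agent, reset_reward, is_reset_state,
                   q_min, n_reset_attempts, max_steps, num_iterations):
    """Sketch of Algorithm 1: alternate forward and reset episodes, use the
    reset Q-function for early aborts, and hard-reset only after the reset
    policy's last N episodes all end outside the safe set S_reset."""
    state = env.reset()                      # single initial hard reset
    recent_final_states = []                 # final states of recent reset episodes
    for _ in range(num_iterations):
        # Forward episode, guarded by early aborts (Algorithm 1, lines 2-7).
        for _ in range(max_steps):
            action = forward_agent.act(state)
            if reset_agent.q_value(state, action) < q_min:
                break                        # early abort: hand control to the reset policy
            next_state, reward = env.step(action)
            forward_agent.update(state, action, reward, next_state)
            state = next_state
        # Reset episode, trained on its own reward r_r (lines 8-11).
        for _ in range(max_steps):
            action = reset_agent.act(state)
            next_state, _ = env.step(action)
            reset_agent.update(state, action, reset_reward(next_state), next_state)
            state = next_state
        # Hard reset only if the last N reset episodes all failed (lines 12-14).
        recent_final_states = (recent_final_states + [state])[-n_reset_attempts:]
        if (len(recent_final_states) == n_reset_attempts
                and not any(is_reset_state(s) for s in recent_final_states)):
            state = env.reset()              # hard reset
            recent_final_states = []
    return forward_agent, reset_agent
```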
1711.06782#13
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
14
S* ≜ {s | (s, a) ∈ E* for at least one a ∈ A} (2) For intuition, consider tasks where the reset reward is 1 if the agent has successfully reset, and 0 otherwise. In these cases, the reset Q function is the probability that the reset will succeed. Early aborts occur when this probability for the proposed action is too small. Early aborts can be interpreted as a learned, dynamic safety constraint, and a viable alternative to the manual constraints that are typically used for real-world robotic learning experiments. Early aborts promote safety by preventing the agent from taking actions from which it cannot recover. These aborts are dynamic because the states at which they occur change throughout training, as more and more states are considered safe. This can make it easier to learn the forward policy, by preventing it from entering unsafe states. We analyze this experimentally in Section 6.3. # 4.2 Hard Resets
1711.06782#14
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
15
# 4.2 Hard Resets Early aborts decrease the requirement for “hard” resets, but do not eliminate them, since an imperfect reset policy might still miss a dangerous state early in the training process. However, it is challenging to identify whether it is possible for any policy to reset from the current state. Our approach is to approximate irreversible state identification with a necessary (but not sufficient) condition: we say we have reached an irreversible state if the reset policy fails to reset after N attempts, where N is a hyperparameter. Formally, we define a set of safe states Sreset ⊆ S, and say that we are in an irreversible state if the set of states visited by the reset policy over the past N episodes is disjoint from Sreset. Increasing N decreases the number of hard resets. However, when we are in an irreversible state, increasing N means that we remain in that state (learning nothing) for more episodes. Section 6.4 empirically examines this trade-off. In practice, the setting of this parameter should depend on the cost of hard resets. # 4.3 Algorithm Summary
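As a standalone check, the irreversibility heuristic in this subsection can be sketched as below, assuming `recent_final_states` holds the final state of each of the reset policy's recent episodes and `is_reset_state` tests membership in Sreset; both names are illustrative placeholders rather than the paper's implementation.

```python
def needs_hard_reset(recent_final_states, is_reset_state, n):
    """Necessary (but not sufficient) condition for an irreversible state:
    the reset policy has finished its last n episodes without ever reaching
    the safe set S_reset, so a costly manual reset is requested."""
    last_n = recent_final_states[-n:]
    return len(last_n) == n and not any(is_reset_state(s) for s in last_n)
```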
1711.06782#15
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
16
# 4.3 Algorithm Summary Our full algorithm (Algorithm 1) consists of alternately running a forward policy and reset policy. When running the forward policy, we perform an early abort if the Q value for the reset policy is less than Qmin. Only if the reset policy fails to reset after N episodes do we do a manual reset. # 4.4 Value Function Ensembles The accuracy of the Q-value estimates of our policies affects learning and reset performance through early aborts. Q-values of the reset policy may not be good estimates of the true value function for previously-unseen states. To address this, we train Q-functions for both the forward and reset policies that provide uncertainty estimates. Several prior works have explored how uncertainty estimates can be obtained in such settings (Gal & Ghahramani, 2016; Osband et al., 2016). We use the bootstrap ensemble in our method (Osband et al., 2016), though other techniques could be employed. In this approach, we train an ensemble of Q-functions, each with a different random initialization, which provides a distribution over Q-values at each state. Given this distribution over Q values, we can propose three strategies for early aborts:
1711.06782#16
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
17
Given this distribution over Q values, we can propose three strategies for early aborts: Optimistic Aborts: Perform an early abort only if all the Q values are less than Qmin. Equivalently, do an early abort if max_θ Q_θ(s, a) < Qmin. Realist Aborts: Perform an early abort if the mean Q value is less than Qmin. Pessimistic Aborts: Perform an early abort if any of the Q values are less than Qmin. Equivalently, do an early abort if min_θ Q_θ(s, a) < Qmin. We expect that optimistic aborts will provide better exploration at the cost of more hard resets, while pessimistic aborts should decrease hard resets, but may be unable to effectively explore. We empirically test this hypothesis in Appendix A. # 5 Small-Scale Didactic Example
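A minimal sketch of the three abort rules over a bootstrap ensemble of reset Q-functions; `q_ensemble` is assumed to be a list of per-head Q estimates for the proposed (state, action) pair, an illustrative interface rather than the paper's code.

```python
def should_abort(q_ensemble, q_min, strategy="pessimistic"):
    """Decide whether to trigger an early abort from an ensemble of reset
    Q-value estimates for the proposed (state, action) pair.
      optimistic : abort only if every head is below q_min (max < q_min)
      realist    : abort if the mean estimate is below q_min
      pessimistic: abort if any head is below q_min (min < q_min)"""
    if strategy == "optimistic":
        return max(q_ensemble) < q_min
    if strategy == "realist":
        return sum(q_ensemble) / len(q_ensemble) < q_min
    if strategy == "pessimistic":
        return min(q_ensemble) < q_min
    raise ValueError(f"unknown strategy: {strategy}")
```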
1711.06782#17
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
18
# 5 Small-Scale Didactic Example We first present a small didactic example to illustrate how our forward and reset policies interact and how cautious exploration reduces the number of hard resets. We first discuss the gridworld in Figure 1. The states with red borders are absorbing, meaning that the agent cannot leave them and must use a hard reset. The agent receives a reward of 1 for reaching the goal state, and 0 otherwise. The states are colored based on the number of early aborts triggered in each state. Note that most aborts occur next to the initial state, when the forward policy attempts to enter the absorbing state South-East of the start state, but is blocked by the reset policy. In Figure 2, we present a harder environment, where the task can be successfully completed by reaching one of the two goals, exactly one of which is reversible. The forward policy has no preference for which goal is better, but the reset policy successfully prevents the forward policy from entering the absorbing goal state, as indicated by the much larger early abort count in the blue-colored state next to the absorbing goal. Figure 1: Early aborts in gridworld.
1711.06782#18
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
19
Figure 1: Early aborts in gridworld. Figure 3 shows how changing the early abort threshold to explore more cautiously reduces the number of failures. Increasing Qmin from 0 to 0.4 reduced the number of hard resets by 78% without increasing the number of steps to solve the task. In a real-world setting, this might produce a substantial gain in efficiency, as time spent waiting for a hard reset could be better spent collecting more experience. Thus, for some real-world experiments, increasing Qmin can decrease training time even if it requires more steps to learn. Figure 2: Early aborts with an absorbing goal. Figure 3: Early abort threshold: In our didactic example, increasing the early abort threshold causes more cautious exploration (left) without severely increasing the number of steps to solve (right).
1711.06782#19
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
21
In this section, we use the five complex, continuous control environments shown above to answer questions about our approach. While ball in cup and peg insertion are completely reversible, the other environments are not: the pusher can knock the puck outside its workspace and the cheetah and walker can jump off a cliff. Crucially, reaching these irreversible states does not terminate the episode, so the agent remains in the irreversible state until it calls for a hard reset. Additional plots and experimental details are in the Appendix. # 6.1 Why Learn a Reset Controller?
1711.06782#21
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
22
# 6.1 Why Learn a Reset Controller? One proposal for learning without resets is to run the forward policy until the task is learned. This “forward-only” approach corresponds to the standard, fully online, non-episodic lifelong RL setting, commonly studied in the context of temporal difference learning (Sutton & Barto, 1998). We show that this approach fails, even on reversible environments where safety is not a concern. We benchmarked the forward-only approach and our method on ball in cup, using no hard resets for either. Figure 5 shows that our approach solves the task while the “forward-only” approach fails to learn how to catch the ball when initialized below the cup. Once the forward-only approach catches the ball, it gets maximum reward by keeping the ball in the cup. In contrast, our method learns to solve this task by automatically resetting the environment after each episode, so the forward policy can practice catching the ball when initialized below the cup. As an upper bound, we show policy reward for the “status quo” approach, which performs a hard reset after every episode. Note that the dependence on hard resets makes this third method impractical outside simulation.
1711.06782#22
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
23
Figure 5: We compare our method to a non-episodic (“forward-only”) approach on ball in cup. Although neither uses hard resets, only our method learns to catch the ball. As an upper bound, we also show the “status quo” approach that performs a hard reset after every episode, which is often impractical outside simulation. # 6.2 Does Our Method Reduce Manual Resets? [Figure 6 plots: reward and number of hard resets vs. training steps for the status quo approach and ours, on Pusher and Cliff Cheetah.] Figure 6: Our method achieves equal or better rewards than the status quo with fewer manual resets.
1711.06782#23
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
24
Figure 6: Our method achieves equal or better rewards than the status quo with fewer manual resets. Our first goal is to reduce the number of hard resets during learning. In this section, we compare our algorithm to the standard, episodic learning setup (“status quo”), which only learns a forward policy. As shown in Figure 6 (left), the conventional approach learns the pusher task somewhat faster than ours, but our approach eventually achieves the same reward with half the number of hard resets. In the cliff cheetah task (Figure 6 (right)), not only does our approach use an order of magnitude fewer hard resets, but the final reward of our method is substantially higher. This suggests that, besides reducing the number of resets, the early aborts can actually aid learning by preventing the forward policy from wasting exploration time waiting for resets in irreversible states. # 6.3 Do Early Aborts avoid Hard Resets? [Figure 7 plots: reward and number of hard resets vs. training steps for early abort thresholds Qmin = 20, 50, and 80, on Pusher and Cliff Cheetah.]
1711.06782#24
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
25
Figure 7: Early abort threshold: Increasing the early abort threshold to act more cautiously avoids many hard resets, indicating that early aborts help avoid irreversible states. To test whether early aborts prevent hard resets, we can see if the number of hard resets increases when we lower the early abort threshold. Figure 7 shows the effect of three values for Qmin while learning the pusher and cliff cheetah. In both environments, decreasing the early abort threshold increased the number of hard resets, supporting our hypothesis that early aborts prevent hard resets. On pusher, increasing Qmin to 80 allowed the agent to learn a policy that achieved nearly the same reward using 33% fewer hard resets. The cliff cheetah task has lower rewards than pusher; even an early abort threshold of 10 is enough to prevent 69% of the hard resets that the status quo would have performed. # 6.4 Multiple Reset Attempts
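As a concrete illustration of the threshold being varied here, the snippet below implements the early-abort test: the forward policy is interrupted whenever the conservatively combined reset Q-value drops below Qmin. Taking the minimum over an ensemble matches the setting listed in Appendix C.3; the toy Q-values and the `early_abort` signature are illustrative, not taken from the authors' implementation.

```python
def early_abort(state, action, reset_q_ensemble, q_min):
    """Return True if the forward policy should stop and hand control to the
    reset policy.  Each element of `reset_q_ensemble` maps (state, action) to
    an estimate of the reset policy's value; aborting when the most
    pessimistic estimate falls below `q_min` keeps the agent in states it
    believes it can return from."""
    q_values = [q(state, action) for q in reset_q_ensemble]
    return min(q_values) < q_min


# Example with toy Q-functions: a larger q_min triggers aborts more eagerly,
# while lowering it lets the forward policy continue into riskier states.
toy_ensemble = [lambda s, a: 12.0, lambda s, a: 9.5, lambda s, a: 11.0]
print(early_abort(None, None, toy_ensemble, q_min=10.0))  # True  (9.5 < 10)
print(early_abort(None, None, toy_ensemble, q_min=5.0))   # False
```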
1711.06782#25
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
26
# 6.4 Multiple Reset Attempts While early aborts help avoid hard resets, our algorithm includes a mechanism for requesting a manual reset if the agent reaches an unresettable state. As described in Section 4.2, we only perform a hard reset if the reset agent fails to reset in N consecutive episodes. Figure 8 shows how the number of reset attempts, N, affects hard resets and reward. On the pusher task, when our algorithm was given a single reset attempt, it used 64% fewer hard resets than the status quo approach would have. Increasing the number of reset attempts to 4 resulted in another 2.5x reduction in hard resets, while decreasing the reward by less than 25%. On the cliff cheetah task, increasing the number of reset attempts brought the number of resets down to nearly 0, without changing the reward. Surprisingly, these results indicate that for some tasks, it is possible to learn an equally good policy with significantly fewer hard resets. [Figure 8 plots: reward and number of hard resets vs. training steps for 1, 2, 4, and 8 reset attempts, on Pusher and Cliff Cheetah.]
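The counting logic behind this mechanism is simple enough to write out directly. In the sketch below, `record_reset_episode` is called once after every reset episode with a boolean saying whether the learned reset succeeded; the class and method names are invented for this illustration rather than taken from the paper's code.

```python
class ResetAttemptTracker:
    """Decide when to fall back to a manual (hard) reset: only after the
    learned reset policy has failed some number of consecutive attempts."""

    def __init__(self, max_attempts):
        self.max_attempts = max_attempts
        self.consecutive_failures = 0
        self.hard_resets = 0

    def record_reset_episode(self, reset_succeeded):
        """Call once per reset episode; returns True if a hard reset is needed now."""
        if reset_succeeded:
            self.consecutive_failures = 0
            return False
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.max_attempts:
            self.consecutive_failures = 0
            self.hard_resets += 1
            return True
        return False


# Toy usage: with max_attempts=4, three failed resets in a row do not trigger
# a hard reset, but a fourth consecutive failure does.
tracker = ResetAttemptTracker(max_attempts=4)
outcomes = [False, False, False, False, True]
print([tracker.record_reset_episode(ok) for ok in outcomes])
# -> [False, False, False, True, False]; tracker.hard_resets == 1
```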
1711.06782#26
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
27
Figure 8: Reset attempts: Increasing the number of reset attempts reduces hard resets. Allowing too many reset attempts reduces reward for the pusher environment. # 6.5 Ensembles are Safer Our approach uses an ensemble of value functions to trigger early aborts. Our hypothesis was that our algorithm would be sensitive to bias in the value function if we used a single Q network. To test this hypothesis, we varied the ensemble size from 1 to 50. Figure 9 shows the effect on learning the pushing task. An ensemble with one network failed to learn, but still required many hard resets. Increasing the ensemble size slightly decreased the number of hard resets without affecting the reward. [Figure 9 plot: reward and number of hard resets vs. training steps (1e6) for ensembles of 1, 5, 20, and 50 models on the pusher task.] Figure 9: Increasing ensemble size boosts policy reward while decreasing rate of hard resets. # 6.6 Automatic Curriculum Learning
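The role of the ensemble in the abort decision can be seen in a few lines. With a single critic, the abort test inherits whatever bias that critic has; with several critics, taking the minimum means any one sufficiently pessimistic member can trigger an abort. The Q-values below are made up purely to illustrate this, and how the ensemble itself is trained is not restated here.

```python
import numpy as np

def abort_signal(q_estimates, q_min):
    """Threshold the most pessimistic ensemble member.  With one estimate this
    reduces to trusting a single, possibly biased, value; with several, any
    one member flagging danger is enough to abort."""
    return float(np.min(q_estimates)) < q_min

# One overconfident critic misses the danger; an ensemble that contains it
# still aborts as long as some member is more pessimistic.
q_min = 10.0
print(abort_signal([14.0], q_min))              # False -> forward policy keeps going
print(abort_signal([14.0, 8.0, 11.5], q_min))   # True  -> early abort
```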
1711.06782#27
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
28
Our method can automatically produce a curriculum in settings where the desired skill is performed by the reset policy, rather than the forward policy. As an example, we evaluate our method on a peg insertion task, where the reset policy inserts the peg and the forward policy removes it. The reward for a successful peg insertion is provided only when the peg is in the hole, making this task challenging to learn with random exploration. Hard resets provide illustrations of what a successful outcome looks like, but do not show how to achieve it. Our algorithm starts with the peg in the hole and runs the forward (peg removal) policy until an early abort occurs. As the reset (peg insertion) policy improves, early aborts occur further and further from the hole. Thus, the initial state distribution for the reset (peg insertion) policy moves further and further from the hole, increasing the difficulty of the task as the policy improves. We compare our approach to an “insert-only” baseline that only learns the peg insertion policy – we manually remove the peg from the hole after every episode. For evaluation, both approaches start outside the hole. Figure 10 shows that only our method solves the task. The number of resets required
1711.06782#28
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
29
remove the peg from the hole after every episode. For evaluation, both approaches start outside the hole. Figure 10 shows that only our method solves the task. The number of resets required by our method plateaus after one million steps, indicating that it has solved the task and no longer requires hard resets at the end of the episode. In contrast, the “insert-only” baseline fails to solve the task, never improving its reward. Thus, even if reducing manual resets is not important, the curriculum automatically created by Leave No Trace can enable agents to learn policies they otherwise would be unable to learn.
1711.06782#29
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
30
[Figure 10 plot: reward and number of hard resets vs. training steps (1e6) for the “insert-only” baseline and our method on peg insertion.] # 7 Conclusion In this paper, we presented a framework for automating reinforcement learning based on two principles: automated resets between trials, and early aborts to avoid unrecoverable states. Our method simultaneously learns a forward and reset policy, with the value functions of the two policies used to balance exploration against recoverability. Experiments in this paper demonstrate that our algorithm not only reduces the number of manual resets required to learn a task, but also learns to avoid unsafe states and automatically induces a curriculum.
1711.06782#30
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
31
Our algorithm can be applied to a wide range of tasks, only requiring a small number of manual resets to learn some tasks. One limitation of our approach is that, during the early stages of learning, we cannot accurately predict the consequences of our actions: we cannot learn to avoid a dangerous state until we have visited that state (or a similar state) and experienced a manual reset. Nonetheless, reducing the number of manual resets during learning will enable researchers to run experiments for longer on more agents. A second limitation of our work is that we treat all manual resets as equally bad. In practice, some manual resets are more costly than others. For example, it is more costly for a grasping robot to break a wine glass than to push a block out of its workspace. An approach not studied in this paper for handling these cases would be to specify costs associated with each type of manual reset, and incorporate these reset costs into the learning algorithm.
1711.06782#31
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
32
While the experiments for this paper were done in simulation, where manual resets are inexpensive, the next step is to apply our algorithm to real robots, where manual resets are costly. A challenge introduced when switching to the real world is automatically identifying when the agent has reset. In simulation we can access the state of the environment directly to compute the distance between the current state and the initial state. In the real world, we must infer states from noisy sensor observations to deduce if they are the same. If we cannot distinguish between the state where the forward policy started and the state where the reset policy ended, then we have succeeded in Leaving No Trace! Acknowledgements: We thank Sergio Guadarrama, Oscar Ramirez, and Anoop Korattikara for implementing DDPG and thank Peter Pastor for insightful discussions. # References Brandon Amos, Lei Xu, and J Zico Kolter. Input convex neural networks. arXiv preprint arXiv:1609.07152, 2016. Yevgen Chebotar, Mrinal Kalakrishnan, Ali Yahya, Adrian Li, Stefan Schaal, and Sergey Levine. Path integral guided policy search. 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 3381–3388, 2017.
1711.06782#32
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
33
Carlos Florensa, David Held, Markus Wulfmeier, and Pieter Abbeel. Reverse curriculum generation for reinforcement learning. arXiv preprint arXiv:1707.05300, 2017. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059, 2016. Dhiraj Gandhi, Lerrel Pinto, and Abhinav Gupta. Learning to fly by crashing. arXiv preprint arXiv:1704.05588, 2017. Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 3389–3396. IEEE, 2017. Weiqiao Han, Sergey Levine, and Pieter Abbeel. Learning compound multi-step controllers under unknown dynamics. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pp. 6435–6442. IEEE, 2015. David Held, Xinyang Geng, Carlos Florensa, and Pieter Abbeel. Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv:1705.06366, 2017.
1711.06782#33
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
34
Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, and Sergey Levine. Uncertainty-aware reinforcement learning for collision avoidance. arXiv preprint arXiv:1702.01182, 2017. Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, pp. 0278364917710318, 2016. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. Tambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher-student curriculum learning. arXiv preprint arXiv:1707.00183, 2017. Luke Metz, Julian Ibarz, Navdeep Jaitly, and James Davidson. Discrete sequential prediction of continuous actions for deep rl. arXiv preprint arXiv:1705.05035, 2017.
1711.06782#34
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
35
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. Teodor M Moldovan and Pieter Abbeel. Risk aversion in markov decision processes via near optimal chernoff bounds. In Advances in neural information processing systems, pp. 3131–3139, 2012a. Teodor Mihai Moldovan and Pieter Abbeel. Safe exploration in markov decision processes. arXiv preprint arXiv:1205.4810, 2012b. Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped dqn. In Advances in Neural Information Processing Systems, pp. 4026–4034, 2016. Lerrel Pinto and Abhinav Gupta. Learning to push by grasping: Using multiple tasks for effective learning. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 2161–2168. IEEE, 2017. Charles Richter and Nicholas Roy. Safe visual navigation via deep learning and novelty detection. In Proc. of the Robotics: Science and Systems Conference, 2017.
1711.06782#35
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
36
Charles Richter and Nicholas Roy. Safe visual navigation via deep learning and novelty detection. In Proc. of the Robotics: Science and Systems Conference, 2017. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015. John Schulman, Jonathan Ho, Cameron Lee, and Pieter Abbeel. Learning from demonstrations through the use of non-rigid registration. In Robotics Research, pp. 339–354. Springer, 2016. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 387–395, 2014. Sainbayar Sukhbaatar, Ilya Kostrikov, Arthur Szlam, and Rob Fergus. Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407, 2017. Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998. Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992.
1711.06782#36
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
37
Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992. # A Combining an Ensemble of Value Functions We benchmarked three methods for combining our ensemble of value functions (optimistic, realistic, and pessimistic, as discussed in Section 4.4). Figure 11 compares the three methods on the gridworld environment from Section 5. Only the optimistic agent efficiently explored. As expected, the realistic and pessimistic agents, which are more conservative in letting the forward policy continue, fail to explore when Qmin is too large. [Figure 11 plots: two panels comparing the optimistic, realistic, and pessimistic agents as a function of Qmin.]
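A sketch of the three combination rules follows. Since Section 4.4 is not reproduced in this excerpt, the mapping used here (optimistic takes the maximum over the ensemble, realistic the mean, pessimistic the minimum) is a reading of the names rather than a quotation of the paper; the combined value is what gets compared against Qmin when deciding whether to abort.

```python
import numpy as np

COMBINERS = {
    "optimistic": np.max,    # trust the most favorable estimate -> fewest aborts
    "realistic": np.mean,    # average over the ensemble
    "pessimistic": np.min,   # trust the least favorable estimate -> most aborts
}

def combined_reset_value(q_estimates, method):
    """Collapse an ensemble of reset-policy Q-estimates into a single number
    that is then compared against Qmin in the early-abort test."""
    return float(COMBINERS[method](np.asarray(q_estimates)))

# Made-up estimates on a gridworld-like scale, just to show the ordering.
q = [0.15, 0.35, 0.05]
for method in COMBINERS:
    print(method, combined_reset_value(q, method))
```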
1711.06782#37
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
38
Figure 11: Combining value functions: We compare three methods for ensembling value functions on gridworld. Missing points for the red and green lines indicate that the pessimistic and realistic methods fail to solve the task for larger values of Qmin. Interestingly, for the continuous control environments, the ensembling method makes relatively little difference for the number of resets or final performance, as shown in Figure 12. This suggests that much of the benefit of the ensemble comes from its ability to produce less biased abort predictions in novel states, rather than the particular risk-sensitive rule that is used. Additionally, the agent’s value function may generalize better over continuous state spaces compared with the tabular gridworld.
1711.06782#38
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
39
[Figure 12 panels: (a) Cliff Cheetah, (b) Cliff Walker, (c) Pusher.] Figure 12: Combining value functions: For continuous environments, the method for combining value functions has little effect. # B Additional Figures For each experiment in the main paper, we chose one or two demonstrative environments. Below, we show all experiments run on cliff cheetah, cliff walker, and pusher. # B.1 Does Our Method Reduce Manual Resets? – More Plots This experiment, described in Section 6.2, compared our method to the status quo approach (resetting after every episode). Figure 13 shows plots for all environments. [Figure 13 panels: (a) Cliff Cheetah, (b) Cliff Walker, (c) Pusher.] Figure 13: Experiment from § 6.2
1711.06782#39
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
40
# B.2 Do Early Aborts avoid Hard Resets? – More Plots This experiment, described in Section 6.3, shows the effect of varying the early abort threshold. Figure 14 shows plots for all environments. [Figure 14 panels: (a) Cliff Cheetah, (b) Cliff Walker, (c) Pusher.] Figure 14: Experiment from § 6.3 # B.3 Multiple Reset Attempts – More Plots This experiment, described in Section 6.4, shows the effect of increasing the number of reset attempts. Figure 15 shows plots for all environments. [Figure 15 panels: (a) Cliff Cheetah, (b) Cliff Walker, (c) Pusher.] Figure 15: Experiment from § 6.4
1711.06782#40
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
41
# B.4 Ensembles are Safer – More Plots This experiment, described in Section 6.5, shows the effect of varying the ensemble size. Figure 16 shows plots for all environments. [Figure 16 panels: (a) Cliff Cheetah, (b) Cliff Walker, (c) Pusher.] Figure 16: Experiment from § 6.5
1711.06782#41
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
43
Ball in Cup: The agent receives a reward of 1 if the ball is in the cup and 0 otherwise. We defined the reward for the reset task to be the negative distance between the current state and the initial state (ball hanging stationary below the cup). Cliff Cheetah: The agent learns how to run on a 14m cliff. The agent is rewarded for moving forward. We defined the reset reward to be a combination of the distance from the origin, an indicator of whether the agent was standing, and a control penalty. Cliff Walker: The agent learns how to walk on a 6m cliff. The agent is rewarded for moving forward. We defined the reset reward to be a combination of the distance from the origin, an indicator of whether the agent was standing, and a control penalty. Pusher: The agent pushes a puck to a goal location. The agent’s reward is a combination of the distance from the puck to the goal, the distance from the arm to the puck, and a control penalty. For the reset reward, we use the distance from puck to start instead of distance from puck to goal. Peg Insertion: The agent inserts a peg into a small hole. For the insertion task, the reward is 1
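As a simplified, self-contained example of the distance-based reset rewards described above, the snippet below computes a ball-in-cup-style reset reward (negative distance to the initial state) and a reset-success test. The exponential squashing and the reuse of the 0.7 threshold from Appendix C.3 (which applies it to normalized rewards) are assumptions made for illustration; the paper's exact reward functions and state layouts are not reproduced here.

```python
import numpy as np

def reset_reward(state, initial_state):
    """Negative distance between the current state and the initial state, as
    described for ball in cup; values closer to 0 mean a better reset."""
    return -float(np.linalg.norm(np.asarray(state) - np.asarray(initial_state)))

def is_reset(state, initial_state, normalized_threshold=0.7, scale=1.0):
    """Declare the environment 'reset' when a squashed version of the reset
    reward exceeds a threshold (0.7 is the value Appendix C.3 applies to
    normalized rewards).  The squashing used here, exp of the scaled reward,
    mapping a perfect reset to 1, is an assumption for this example."""
    normalized = np.exp(scale * reset_reward(state, initial_state))
    return normalized > normalized_threshold

initial = [0.0, 0.0, 0.0]
print(is_reset([0.05, 0.0, 0.02], initial))  # True: close to the initial state
print(is_reset([1.5, -0.8, 0.3], initial))   # False: far from the initial state
```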
1711.06782#43
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.06782
45
# C.3 Continuous Control Experiments We did not do hyperparameter optimization for our experiments, but did run with 5 random seeds. To aggregate results, we took the median number across all random seeds that solved the task. For most experiments, all random seeds solved the task. For the three continuous control environments, we normalized the rewards to be in [0, 1] so we could use the same hyperparameters for each. Using a discount factor of γ = 0.99, the cumulative discounted reward was in [0, 100). We defined Sreset as states where the reset reward was greater than 0.7. We used the same DDPG hyperparameters for all continuous control environments: Actor Network: Two fully connected layers of sizes 400 and 300, with tanh nonlinearities throughout. Critic Network: We apply a 400-dimensional fully connected layer to states, then concatenate the actions and apply another 300-dimensional fully connected layer. Again, we use tanh nonlinearities. Unless otherwise noted, experiments used an ensemble of size 20, Qmin = 10, 1 reset attempt, and early aborts using min(q). For the experiments in Section 6.2, our model used 2 reset attempts to better illustrate the potential for our approach to reduce hard resets.
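The stated actor and critic architectures can be written down directly; below is a PyTorch sketch matching the described layer sizes (400 and 300 units, tanh nonlinearities, actions concatenated after the critic's first layer), with an ensemble of 20 independently initialized critics whose minimum is the quantity compared against Qmin. Anything the appendix does not specify (output activations, weight initialization, the DDPG training loop itself, the example state and action dimensions) is an assumption or left at library defaults, so treat this as an illustration of the architecture rather than a reproduction of the authors' code.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Two fully connected layers (400, 300) with tanh nonlinearities; the
    final tanh keeping actions in [-1, 1] is an assumption, as the appendix
    does not state the output activation."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.fc1 = nn.Linear(state_dim, 400)
        self.fc2 = nn.Linear(400, 300)
        self.out = nn.Linear(300, action_dim)

    def forward(self, state):
        h = torch.tanh(self.fc1(state))
        h = torch.tanh(self.fc2(h))
        return torch.tanh(self.out(h))

class Critic(nn.Module):
    """A 400-unit layer applied to the state, then the action is concatenated
    and a 300-unit layer is applied, with tanh nonlinearities throughout."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.fc_state = nn.Linear(state_dim, 400)
        self.fc_joint = nn.Linear(400 + action_dim, 300)
        self.out = nn.Linear(300, 1)

    def forward(self, state, action):
        h = torch.tanh(self.fc_state(state))
        h = torch.tanh(self.fc_joint(torch.cat([h, action], dim=-1)))
        return self.out(h)

# An ensemble of size 20 is just 20 independently initialized critics.
state_dim, action_dim = 17, 6   # example dimensions, not taken from the paper
ensemble = [Critic(state_dim, action_dim) for _ in range(20)]
actor = Actor(state_dim, action_dim)
s = torch.randn(1, state_dim)
a = actor(s)
q_values = torch.stack([q(s, a) for q in ensemble])
print(q_values.min().item())    # the quantity compared against Qmin
```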
1711.06782#45
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
http://arxiv.org/pdf/1711.06782
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
cs.LG, cs.RO
Videos of our experiments are available at: https://sites.google.com/site/mlleavenotrace/
null
cs.LG
20171118
20171118
[ { "id": "1702.01182" }, { "id": "1704.05588" }, { "id": "1707.00183" }, { "id": "1506.02438" }, { "id": "1703.05407" }, { "id": "1705.06366" }, { "id": "1609.07152" }, { "id": "1509.02971" }, { "id": "1705.05035" }, { "id": "1707.05300" } ]
1711.05852
1
# Asit Mishra & Debbie Marr Accelerator Architecture Lab, Intel Labs {asit.k.mishra,debbie.marr}@intel.com # ABSTRACT Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline. # INTRODUCTION
1711.05852#1
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
2
# INTRODUCTION Background: Today’s high performing deep neural networks (DNNs) for computer vision applications comprise multiple layers and involve numerous parameters. These networks have O(Giga-FLOPS) compute requirements and generate models which are O(Mega-Bytes) in storage (Canziani et al., 2016). Further, the memory and compute requirements during training and inference are quite different (Mishra et al., 2017). Training is performed on big datasets with large batch-sizes, where the memory footprint of activations dominates the model memory footprint. On the other hand, the batch-size during inference is typically small and the model’s memory footprint dominates the runtime memory requirements. Because of the complexity in compute, memory and storage requirements, the training phase of the networks is performed on CPU and/or GPU clusters in a distributed computing environment. Once trained, a challenging aspect is deployment of the trained models on resource constrained inference systems such as portable devices or sensor networks, and for applications in which real-time predictions are required. Performing inference on edge-devices comes with severe constraints on memory, compute and power. Additionally, ensemble based methods, which one can potentially use to get improved accuracy predictions, become prohibitive in resource constrained systems.
1711.05852#2
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
3
Quantization using low-precision numerics (Vanhoucke et al., 2011; Zhou et al., 2016; Lin et al., 2015; Miyashita et al., 2016; Gupta et al., 2015; Zhu et al., 2016; Rastegari et al., 2016; Courbariaux et al., 2015; Umuroglu et al., 2016; Mishra et al., 2017) and model compression (Buciluă et al., 2006; Hinton et al., 2015; Romero et al., 2014) have emerged as popular solutions for resource constrained deployment scenarios. With quantization, a low-precision version of the network model is generated and deployed on the device. Operating in lower precision mode reduces compute as well as data movement and storage requirements. However, the majority of existing works in low-precision DNNs sacrifice accuracy over the baseline full-precision networks. With model compression, a
1711.05852#3
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
4
smaller low-memory-footprint network is trained to mimic the behaviour of the original complex network. During this training, a process called knowledge distillation is used to “transfer knowledge” from the complex network to the smaller network. Work by Hinton et al. (2015) shows that the knowledge distillation scheme can yield networks at comparable or slightly better accuracy than the original complex model. However, to the best of our knowledge, all prior works using model compression techniques target compression at full-precision.
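For context on the distillation step referenced in this chunk, the snippet below shows the standard temperature-scaled knowledge-distillation loss of Hinton et al. (2015) that this line of work builds on. It is a generic PyTorch illustration; the temperature and the alpha weighting are illustrative defaults, not values taken from the paper.

```python
# Generic temperature-scaled knowledge distillation (Hinton et al., 2015).
# Loss weight alpha and temperature T are illustrative defaults.
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hard-label term: ordinary cross entropy against the ground truth.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-label term: match the teacher's softened class distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft term's gradients stay comparable
    return alpha * hard + (1.0 - alpha) * soft
```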
1711.05852#4
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
5
Our proposal: In this paper, we study the combination of network quantization with model compression and show that the accuracies of low-precision networks can be significantly improved by using knowledge distillation techniques. Previous studies on model compression use a large network as the teacher network and a small network as the student network. The small student network learns from the teacher network using the distillation process. The network architecture of the student network is typically different from that of the teacher network; for example, Hinton et al. (2015) investigate a student network that has fewer neurons in the hidden layers compared to the teacher network. In our work, the student network has a similar topology to that of the teacher network, except that the student network has low-precision neurons while the teacher network has neurons operating at full-precision. We call our approach Apprentice¹ and study three schemes which produce low-precision networks using knowledge distillation techniques. Each of these three schemes produces state-of-the-art ternary precision and 4-bit precision models.
1711.05852#5
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
6
In the first scheme, a low-precision network and a full-precision network are jointly trained from scratch using the knowledge distillation scheme. Later in the paper we describe the rationale behind this approach. Using this scheme, a new state-of-the-art accuracy is obtained for ternary and 4-bit precision for ResNet-18, ResNet-34 and ResNet-50 on the ImageNet dataset. In fact, using this scheme the accuracy of the full-precision model also slightly improves. This scheme then serves as the new baseline for the other two schemes we investigate. In the second scheme, we start with a full-precision trained network and transfer knowledge from this trained network continuously to train a low-precision network from scratch. We find that the low-precision network converges faster (albeit to similar accuracies as the first scheme) when a trained complex network guides its training.
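As a rough illustration of the first scheme (jointly training a full-precision teacher and a low-precision student from scratch), the sketch below wires a teacher loss, a student loss, and a teacher-student matching term into one training step. The particular loss weighting, the temperature, and the quantize_weights helper are assumptions made for illustration; the paper's exact cost function is not reproduced here.

```python
# Hedged sketch of jointly training a full-precision teacher and a
# low-precision student from scratch. alpha/beta/gamma are illustrative;
# the optimizer is assumed to cover the parameters of both networks.
import torch.nn.functional as F


def joint_training_step(teacher, student, quantize_weights, optimizer,
                        images, labels, T=1.0, alpha=1.0, beta=1.0, gamma=1.0):
    # quantize_weights is a placeholder for the ternary/4-bit quantizer in use
    # (including whatever straight-through handling that scheme provides).
    quantize_weights(student)

    teacher_logits = teacher(images)
    student_logits = student(images)

    loss = (
        alpha * F.cross_entropy(teacher_logits, labels)      # teacher vs. labels
        + beta * F.cross_entropy(student_logits, labels)     # student vs. labels
        + gamma * F.kl_div(                                   # student mimics teacher
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * T * T
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```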
1711.05852#6
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
7
In the third scheme, we start with a trained full-precision large network and an apprentice network that has been initialised with full-precision weights. The apprentice network’s precision is then lowered, and it is fine-tuned using knowledge distillation techniques. We find that the low-precision network’s accuracy marginally improves and surpasses the accuracy obtained via the first scheme. This scheme then sets the new state-of-the-art accuracies for the ResNet models at ternary and 4-bit precision. Overall, the contributions of this paper are the techniques to obtain low-precision DNNs using knowledge distillation techniques. Each of our schemes produces a low-precision model that surpasses the accuracy of the equivalent low-precision model published to date. One of our schemes also helps a low-precision model converge faster. We envision that these accurate low-precision models will simplify the inference deployment process on resource constrained systems, and even otherwise on cloud-based deployment systems. # 2 MOTIVATION FOR LOW-PRECISION MODEL PARAMETERS
1711.05852#7
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
8
# 2 MOTIVATION FOR LOW-PRECISION MODEL PARAMETERS Lowering precision of model parameters: Resource constrained inference systems impose significant restrictions on memory, compute and power budget. With regard to storage, model (or weight) parameters and activation maps occupy memory during the inference phase of DNNs. During this phase, memory is allocated for the input (IFM) and output feature maps (OFM) required by a single layer in the DNN, and these dynamic memory allocations are reused for other layers. The total memory allocation during inference is then the maximum IFM and maximum OFM memory required across all the layers, plus the sum of all weight tensors (Mishra et al., 2017). When the inference phase for DNNs is performed with a small batch size, the memory footprint of the weights (Footnote 1: Dictionary defines apprentice as a person who is learning a trade from a skilled employer, having agreed to work for a fixed period at low wages. In our work, the apprentice is a low-precision network which is learning the knowledge of a high-precision network, the skilled employer, during a fixed number of epochs.)
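Reading the accounting above as the maximum IFM plus the maximum OFM across layers, plus the total weight storage, it can be expressed as a small helper. The function and the toy layer sizes below are illustrative assumptions, not taken from the cited work.

```python
# Illustrative helper for the inference-memory accounting described above:
# activation buffers are reused across layers, so only the largest IFM and the
# largest OFM are counted, while all weight tensors stay resident.
def inference_memory_bytes(layers, bytes_per_weight=4, bytes_per_activation=4):
    """layers: list of dicts with 'ifm', 'ofm' (activation element counts)
    and 'weights' (parameter count) for each layer."""
    max_ifm = max(layer["ifm"] for layer in layers)
    max_ofm = max(layer["ofm"] for layer in layers)
    total_weights = sum(layer["weights"] for layer in layers)
    return (max_ifm + max_ofm) * bytes_per_activation + total_weights * bytes_per_weight


# Toy usage: with a small batch, shrinking the per-weight storage (e.g. from
# 32-bit floats to roughly 2-bit ternary values) cuts the dominant weight term.
toy_net = [
    {"ifm": 3 * 224 * 224, "ofm": 64 * 112 * 112, "weights": 9_408},
    {"ifm": 64 * 112 * 112, "ofm": 64 * 56 * 56, "weights": 36_864},
]
print(inference_memory_bytes(toy_net))                         # full-precision weights
print(inference_memory_bytes(toy_net, bytes_per_weight=0.25))  # ~2-bit weights
```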
1711.05852#8
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
9
exceeds the footprint of the activation maps. This aspect is shown in Figure 1 for 4 different networks (AlexNet (Krizhevsky et al., 2012), Inception-ResNet-v2 (Szegedy et al., 2016), ResNet-50 and ResNet-101 (He et al., 2015)) running 224x224 image patches. Thus lowering the precision of the weight tensors helps lower the memory requirements during deployment.
1711.05852#9
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
10
Benefit of low-precision compute: Low-precision compute simplifies hardware implementation. For example, the compute unit to perform the convolution operation (multiplication of two operands) involves a floating-point multiplier when using full-precision weights and activations. The floating-point multiplier can be replaced with much simpler circuitry (xnor and popcount logic elements) when using binary precision for weights and activations (Courbariaux & Bengio, 2016; Rastegari et al., 2016; Courbariaux et al., 2015). Similarly, when using ternary precision for weights and full-precision for activations, the multiplier unit can be replaced with a sign comparator unit (Li & Liu, 2016; Zhu et al., 2016). Simpler hardware also helps lower the inference latency and energy budget. Thus, operating in lower precision mode reduces compute as well as data movement and storage requirements. Figure 1: Memory footprint of activations (ACTs) and weights (W) during inference for mini-batch sizes 1 and 8.
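To make the xnor-and-popcount argument concrete, the snippet below shows the arithmetic identity it relies on: with weights and activations in {-1, +1} encoded as bits, a length-N dot product equals N minus twice the popcount of the xor of the two bit vectors. This is a plain-Python illustration of the identity, not a hardware design.

```python
# Binary {-1,+1} dot product via bitwise operations: encode +1 as bit 1 and
# -1 as bit 0, then dot(w, a) = N - 2 * popcount(w_bits XOR a_bits).
def binary_dot(w_bits: int, a_bits: int, n: int) -> int:
    differing = bin((w_bits ^ a_bits) & ((1 << n) - 1)).count("1")  # popcount
    return n - 2 * differing


# Sanity check against the straightforward signed dot product.
w = [+1, -1, +1, +1]
a = [-1, -1, +1, -1]
pack = lambda v: sum((1 << i) for i, x in enumerate(v) if x == +1)
assert binary_dot(pack(w), pack(a), len(w)) == sum(wi * ai for wi, ai in zip(w, a))
print(binary_dot(pack(w), pack(a), len(w)))  # -> 0
```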
1711.05852#10
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]
1711.05852
12
# 3 RELATED WORK Low-precision networks: Low-precision DNNs are an active area of research. Most low-precision networks acknowledge the over-parameterization aspect of today’s DNN architectures and/or the fact that lowering the precision of neurons post-training often does not impact the final performance. Reducing the precision of weights for an efficient inference pipeline has been very well studied. Works like BinaryConnect (BC) (Courbariaux et al., 2015), Ternary-weight networks (TWN) (Li & Liu, 2016), fine-grained ternary quantization (Mellempudi et al., 2017) and INQ (Zhou et al., 2017) target precision reduction of network weights. Accuracy is almost always affected when quantizing the weights significantly below 8 bits of precision. For AlexNet on ImageNet, TWN loses 5% Top-1 accuracy. Schemes like INQ, the work of Sung et al. (2015), and Mellempudi et al. (2017) fine-tune to quantize the network weights.
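As a concrete example of the kind of weight ternarization studied in works such as TWN, the snippet below applies a simple threshold-and-scale rule: weights with small magnitude become zero and the rest share a single scale. The 0.7 × mean(|W|) threshold is a common heuristic used here as an assumption; the snippet is not claimed to reproduce any one paper exactly.

```python
# Illustrative threshold-based ternarization in the spirit of ternary-weight
# networks: small weights become 0, the rest become +/- a shared scale alpha.
import numpy as np


def ternarize(weights: np.ndarray) -> np.ndarray:
    delta = 0.7 * np.mean(np.abs(weights))                        # threshold (assumed heuristic)
    mask = np.abs(weights) > delta                                # which weights stay non-zero
    alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0   # shared scale factor
    return alpha * np.sign(weights) * mask


w = np.random.randn(4, 4).astype(np.float32)
print(ternarize(w))  # every entry is in {-alpha, 0, +alpha}
```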
1711.05852#12
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
http://arxiv.org/pdf/1711.05852
Asit Mishra, Debbie Marr
cs.LG, cs.CV, cs.NE
null
null
cs.LG
20171115
20171115
[]