Dataset schema (one row per chunk):

| Column | Type |
|---|---|
| doi | string (10 chars) |
| chunk-id | int64 (0–936) |
| chunk | string (401–2.02k chars) |
| id | string (12–14 chars) |
| title | string (8–162 chars) |
| summary | string (228–1.92k chars) |
| source | string (31 chars) |
| authors | string (7–6.97k chars) |
| categories | string (5–107 chars) |
| comment | string (4–398 chars, nullable) |
| journal_ref | string (8–194 chars, nullable) |
| primary_category | string (5–17 chars) |
| published | string (8 chars) |
| updated | string (8 chars) |
| references | list |
1708.07860 | 51 | [12] G. Gkioxari, R. Girshick, and J. Malik. Contextual action recognition with R*CNN. In ICCV, 2015.
[13] S. Guadarrama and N. Silberman. Tensorflow-slim. 2016. [14] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
[16] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. CVPR, 2017.
[17] D. Jayaraman and K. Grauman. Learning image representa- tions tied to ego-motion. In ICCV, 2015.
[18] I. Kokkinos. Ubernet: Training a "universal" convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. arXiv preprint arXiv:1609.02132, 2016. | 1708.07860#51 | Multi-task Self-Supervised Visual Learning | We investigate methods for combining multiple self-supervised tasks--i.e.,
supervised tasks where data can be collected without manual labeling--in order
to train a single visual representation. First, we provide an apples-to-apples
comparison of four different self-supervised tasks using the very deep
ResNet-101 architecture. We then combine tasks to jointly train a network. We
also explore lasso regularization to encourage the network to factorize the
information in its representation, and methods for "harmonizing" network inputs
in order to learn a more unified representation. We evaluate all methods on
ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our
results show that deeper networks work better, and that combining tasks--even
via a naive multi-head architecture--always improves performance. Our best
joint network nearly matches the PASCAL performance of a model pre-trained on
ImageNet classification, and matches the ImageNet network on NYU depth
prediction. | http://arxiv.org/pdf/1708.07860 | Carl Doersch, Andrew Zisserman | cs.CV | Published at ICCV 2017 | null | cs.CV | 20170825 | 20170825 | [
{
"id": "1611.06430"
},
{
"id": "1602.07261"
},
{
"id": "1611.03530"
},
{
"id": "1611.06646"
},
{
"id": "1606.04671"
},
{
"id": "1612.06370"
},
{
"id": "1606.07419"
},
{
"id": "1609.02132"
},
{
"id": "1610.01685"
}
] |
1708.07860 | 52 | [19] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab. Deeper depth prediction with fully convolutional residual networks. In 3D Vision, 2016.
[20] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In ECCV, 2016. [21] Y. Li, M. Paluri, J. M. Rehg, and P. Dollár. Unsupervised
learning of edges. In CVPR, 2016.
[22] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross-stitch networks for multi-task learning. In CVPR, 2016. [23] I. Misra, C. L. Zitnick, and M. Hebert. Shuffle and learn:
unsupervised learning using temporal order verification. In ECCV, 2016.
[24] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In ICML, 2009.
[25] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016. | 1708.07860#52 |
1708.07860 | 53 | [25] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
[26] A. Owens, J. Wu, J. H. McDermott, W. T. Freeman, and A. Torralba. Ambient sound provides supervision for visual learning. In ECCV, 2016.
[27] D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan. Learning features by watching objects move. arXiv preprint arXiv:1612.06370, 2016.
[28] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
[29] L. Pinto, J. Davidson, and A. Gupta. Supervision via competition: Robot adversaries for learning tasks. arXiv preprint arXiv:1610.01685, 2016.
[30] L. Pinto, D. Gandhi, Y. Han, Y.-L. Park, and A. Gupta. The curious robot: Learning visual representations via physical interactions. In ECCV, 2016. | 1708.07860#53 |
1708.07860 | 54 | [31] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In ICRA, 2016.
[32] L. Pinto and A. Gupta. Learning to push by grasping: Using multiple tasks for effective learning. ICRA, 2017.
[33] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[34] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 2015. | 1708.07860#54 |
1708.07860 | 55 | [36] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
[37] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014. [38] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. [39] J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In ECCV, 2016.
[40] J. Walker, A. Gupta, and M. Hebert. Dense optical flow prediction from a static image. In ICCV, 2015. | 1708.07860#55 |
1708.07860 | 56 | [40] J. Walker, A. Gupta, and M. Hebert. Dense optical flow prediction from a static image. In ICCV, 2015.
[41] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
[42] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
[43] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural computation, 14(4):715–770, 2002.
[44] A. R. Zamir, T. Wekel, P. Agrawal, C. Wei, J. Malik, and S. Savarese. Generic 3D representation via pose estimation | 1708.07860#56 |
1708.07860 | 57 |

| Evaluation | Rel. Pos. | Color | Exemplar | Mot. Seg. | INet Labels | Random |
|---|---|---|---|---|---|---|
| Pct. < 1.25 (higher better) | 80.55 | 76.79 | 71.25 | 74.24 | 80.06 | 61.00 |
| Pct. < 1.25² (higher better) | 94.65 | 93.52 | 90.63 | 92.42 | 94.87 | 85.45 |
| Pct. < 1.25³ (higher better) | 98.26 | 97.74 | 96.54 | 97.43 | 98.45 | 94.67 |
| Mean Absolute Error (lower better) | 0.399 | 0.444 | 0.513 | 0.473 | 0.403 | 0.621 |
| Mean Relative Error (lower better) | 0.146 | 0.164 | 0.191 | 0.177 | 0.146 | 0.227 |

Table 5. Comparison of self-supervised methods on NYUDv2 depth prediction. Pct. < 1.25 is the same as reported in the paper (the percent of pixels where the relative depth, max(d_pred/d_gt, d_gt/d_pred), is less than 1.25); we give the same value for two other, more relaxed thresholds. We also report mean absolute error, which is the simple per-pixel average error in depth, and relative error, where the error at each pixel is divided by the ground-truth depth. | 1708.07860#57 |
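For readers who want to reproduce the measures used in Tables 5–8, the following NumPy sketch (not taken from the paper's code) computes the threshold accuracies and error measures defined in the caption above; `pred` and `gt` are hypothetical predicted and ground-truth depth maps.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    """NYUDv2-style depth metrics for two positive depth maps of equal shape."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    # Relative depth ratio used by the delta thresholds.
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    return {
        "pct_below_1.25":   100.0 * np.mean(ratio < 1.25),
        "pct_below_1.25^2": 100.0 * np.mean(ratio < 1.25 ** 2),
        "pct_below_1.25^3": 100.0 * np.mean(ratio < 1.25 ** 3),
        # Simple per-pixel average error in depth.
        "mean_absolute_error": float(np.mean(np.abs(pred - gt))),
        # Per-pixel error divided by the ground-truth depth.
        "mean_relative_error": float(np.mean(np.abs(pred - gt) / (gt + eps))),
    }

if __name__ == "__main__":
    gt = np.random.uniform(0.5, 10.0, size=(480, 640))
    pred = gt * np.random.uniform(0.8, 1.2, size=gt.shape)
    print(depth_metrics(pred, gt))
```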
1708.07860 | 58 |

| Evaluation | RP | RP+Col | RP+Ex | RP+MS | RP+Col+Ex | RP+Col+Ex+MS |
|---|---|---|---|---|---|---|
| Pct. < 1.25 (higher better) | 80.55 | 79.88 | 78.70 | 78.72 | 80.17 | 79.26 |
| Pct. < 1.25² (higher better) | 94.65 | 94.45 | 94.06 | 94.13 | 94.74 | 94.19 |
| Pct. < 1.25³ (higher better) | 98.26 | 98.15 | 98.13 | 98.08 | 98.27 | 98.07 |
| Mean Absolute Error (lower better) | 0.399 | 0.411 | 0.419 | 0.423 | 0.401 | 0.422 |
| Mean Relative Error (lower better) | 0.146 | 0.148 | 0.151 | 0.153 | 0.149 | 0.152 |

Table 6. Additional measures of depth prediction accuracy on NYUDv2 for the naïve method of combining different sources of supervision, extending Table 2.
and matching. In ECCV, 2016.
[45] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
[46] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In ECCV, 2016. | 1708.07860#58 |
1708.07860 | 59 | [46] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In ECCV, 2016.
[47] W. Y. Zou, A. Y. Ng, S. Zhu, and K. Yu. Deep learning of invariant features via simulated fixations in video. In NIPS, 2012.
| Evaluation | RP | RP / H | RP+Col | RP+Col / H |
|---|---|---|---|---|
| Pct. < 1.25 (higher better) | 80.55 | 80.39 | 79.88 | 79.69 |
| Pct. < 1.25² (higher better) | 94.65 | 94.67 | 94.45 | 94.28 |
| Pct. < 1.25³ (higher better) | 98.26 | 98.31 | 98.15 | 98.09 |
| Mean Absolute Error (lower better) | 0.399 | 0.400 | 0.411 | 0.411 |
| Mean Relative Error (lower better) | 0.146 | 0.147 | 0.148 | 0.152 |

Table 7. Additional measures of depth prediction accuracy on NYUDv2 for the harmonization experiments, extending Table 3. | 1708.07860#59 |
1708.07860 | 60 | Table 7. Additional measures of depth prediction accuracy on NYUDv2 for the harmonization experiments, extending Table 3.
| Evaluation | No Lasso | Eval Only Lasso | Pre-train Only Lasso | Lasso |
|---|---|---|---|---|
| Pct. < 1.25 (higher better) | 79.26 | 79.41 | 78.96 | 79.45 |
| Pct. < 1.25² (higher better) | 94.19 | 94.18 | 94.05 | 94.49 |
| Pct. < 1.25³ (higher better) | 98.07 | 98.07 | 97.83 | 98.26 |
| Mean Absolute Error (lower better) | 0.422 | 0.418 | 0.423 | 0.411 |
| Mean Relative Error (lower better) | 0.152 | 0.152 | 0.153 | 0.151 |

Table 8. Additional measures of depth prediction accuracy on NYUDv2 for the lasso experiments, extending Table 4.
| 1708.07860#60 |
1708.06832 | 0 |
# Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
Hanzhang Hu School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 [email protected]
Debadeepta Dey Microsoft Research Redmond, WA 98052 [email protected]
Martial Hebert School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 [email protected]
J. Andrew Bagnell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 [email protected]
# Abstract | 1708.06832#0 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
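The abstract above describes weighting each auxiliary loss inversely proportionally to its running average. A minimal sketch of that idea follows; the moving-average decay, the normalization of the weights, and the class name are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

class AdaptiveLossWeights:
    """Tracks a running average of each auxiliary loss and combines the losses
    with weights inversely proportional to those averages, so every term ends
    up on a comparable scale."""

    def __init__(self, num_losses, decay=0.99, eps=1e-8):
        self.avg = np.ones(num_losses)
        self.decay = decay
        self.eps = eps

    def combine(self, losses):
        losses = np.asarray(losses, dtype=np.float64)
        # Update the running average of each auxiliary loss.
        self.avg = self.decay * self.avg + (1.0 - self.decay) * losses
        # Weights are inversely proportional to the average loss magnitude.
        weights = 1.0 / (self.avg + self.eps)
        weights /= weights.sum()  # normalize so the weights sum to one
        return float((weights * losses).sum())

balancer = AdaptiveLossWeights(num_losses=3)
# Three auxiliary losses on very different scales are pulled to a common scale.
print(balancer.combine([2.3, 0.04, 17.0]))
```

In an actual training loop the weighted sum would be formed from framework tensors so that gradients flow through each auxiliary loss.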
1708.06733 | 1 | Abstract: Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street | 1708.06733#1 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 1 | # Abstract
We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks. | 1708.06734#1 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 1 | J. Andrew Bagnell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 [email protected]
# Abstract
This work considers the trade-off between accuracy and test-time computational cost of deep neural networks (DNNs) via anytime predictions from auxiliary predictions. Specifically, we optimize auxiliary losses jointly in an adaptive weighted sum, where the weights are inversely proportional to the average of each loss. Intuitively, this balances the losses to have the same scale. We demonstrate theoretical considerations that motivate this approach from multiple viewpoints, including connecting it to optimizing the geometric mean of the expectation of each loss, an objective that ignores the scale of losses. Experimentally, the adaptive weights induce more competitive anytime predictions on multiple recognition data-sets and models than non-adaptive approaches including weighing all losses equally. In particular, anytime neural networks (ANNs) can achieve the same accuracy faster using adaptive weights on a small network than using static constant weights on a large one. For problems with high performance saturation, we also show a sequence of exponentially deepening ANNs can achieve near-optimal anytime results at any budget, at the cost of a const fraction of extra computation.
# Introduction | 1708.06832#1 |
1708.06733 | 2 | that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and, because the behavior of neural networks is difficult to explicate, stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software. | 1708.06733#2 |
1708.06734 | 2 | Figure 1: The number of visual primitives in the whole image should match the sum of the number of visual primitives in each tile (dashed red boxes).
# 1. Introduction
supervised learning tools can be used. A rationale behind self-supervised learning is that pretext tasks that relate the most to the final problems (e.g., classification and detection) will be more likely to build relevant representations.
We are interested in learning representations (features) that are discriminative for semantic image understanding tasks such as classification, detection, and segmentation. A common approach to obtain such features is to use supervised learning. However, this requires manual annotation of images, which is costly, time-consuming, and prone to errors. In contrast, unsupervised or self-supervised feature learning methods exploiting unlabeled data can be much more scalable and flexible. | 1708.06734#2 |
1708.06832 | 2 | # Introduction
Recent years have seen advancement in visual recognition tasks by increasingly accurate convolutional neural networks, from AlexNet (Krizhevsky et al., 2012) and VGG (Simonyan & Zisserman, 2015), to ResNet (He et al., 2016), ResNeXt (Xie et al., 2017), and DenseNet (Huang et al., 2017b). As models become more accurate and computationally expensive, it becomes more difficult for applications to choose between slow predictors with high accuracy and fast predictors with low accuracy. Some applications also desire multiple trade-offs between computation and accuracy, because they have computational budgets that may vary at test time. E.g., web servers for facial recognition or spam filtering may have higher load during the afternoon than at midnight. Autonomous vehicles need faster object detection when moving rapidly than when it is stationary. Furthermore, real-time and latency sensitive applications may desire fast predictions on easy samples and slow but accurate predictions on difficult ones.
An anytime predictor (Horvitz, 1987; Boddy & Dean, 1989; Zilberstein, 1996; Grubb & Bagnell, 2012; Huang et al., 2017a) can automatically trade off between computation and accuracy. For each test sample, an anytime predictor produces a fast and crude initial prediction and continues to | 1708.06832#2 |
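The contract sketched in this paragraph (a prediction that is always available and improves as more budget is spent) can be written down in a few lines. The interface below is hypothetical, not the paper's API: `heads` is a list of progressively more expensive predictors that refine the same sample.

```python
import time

def anytime_predict(heads, x, budget_seconds):
    """Evaluate increasingly expensive predictors until the budget runs out,
    always keeping the most recent (and typically most accurate) answer."""
    deadline = time.monotonic() + budget_seconds
    prediction = None
    for head in heads:
        prediction = head(x)              # refine the current answer
        if time.monotonic() >= deadline:  # interrupted: return what we have
            break
    return prediction
```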
1708.06733 | 3 | performance in some cases [7]. Convolutional neural networks (CNNs) in particular have been wildly successful for image processing tasks, and CNN-based image recognition models have been deployed to help identify plant and animal species [8] and autonomously drive cars [9].
Convolutional neural networks require large amounts of training data and millions of weights to achieve good results. Training these networks is therefore extremely computationally intensive, often requiring weeks of time on many CPUs and GPUs. Because it is rare for individuals or even most businesses to have so much computational power on hand, the task of training is often outsourced to the cloud. Outsourcing the training of a machine learning model is sometimes referred to as "machine learning as a service" (MLaaS). | 1708.06733#3 |
1708.06734 | 3 | Some recent feature learning methods, in the so-called self-supervised learning paradigm, have managed to avoid annotation by defining a task which provides a supervision signal. For example, some methods recover color from gray scale images and vice versa [43, 21, 44, 22], recover a whole patch from the surrounding pixels [33], or recover the relative location of patches [9, 29]. These methods use information already present in the data as supervision signal so that
As a novel candidate pretext task, we propose counting visual primitives. It requires discriminative features, which can be useful to classification, and it can be formulated via detection. To obtain a supervision signal useful to learn to count, we exploit the following property: If we partition an image into non-overlapping regions, the number of visual primitives in each region should sum up to the number of primitives in the original image (see the example in Fig. 1). We make the hypothesis that the model needs to disentangle the image into high-level factors of variation, such that the complex relation between the original image and its regions is translated to a simple arithmetic operation [3, 35]. Our experimental results validate this hypothesis both qualitatively and quantitatively. | 1708.06734#3 |
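The arithmetic constraint described above can be turned into a training objective along the following lines. This is a hedged sketch, not the authors' released code: `phi` is a hypothetical network that maps an image batch to a non-negative "count" vector, and the 256×256 input size, the four 128×128 tiles, and the margin value are illustrative choices.

```python
import torch
import torch.nn.functional as F

def counting_loss(phi, x, y, margin=10.0):
    """x, y: batches of shape (B, 3, 256, 256); y holds unrelated images.
    The counts of the downscaled whole image should equal the sum of the
    counts of its four tiles, while counts of a different image should stay
    far from that sum (contrastive term), ruling out the all-zero solution."""
    tiles = [x[:, :, i:i + 128, j:j + 128] for i in (0, 128) for j in (0, 128)]
    tile_sum = sum(phi(t) for t in tiles)
    whole = phi(F.interpolate(x, size=(128, 128), mode="bilinear", align_corners=False))
    other = phi(F.interpolate(y, size=(128, 128), mode="bilinear", align_corners=False))
    match = ((whole - tile_sum) ** 2).sum(dim=1)
    push = F.relu(margin - ((other - tile_sum) ** 2).sum(dim=1))
    return (match + push).mean()
```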
1708.06832 | 3 | Preprint. Work in progress.
[Figure 1 panels: (a) lowest error at each budget in FLOPS, comparing a small ANN with no final gap (ours) against a large ANN with a 10% relative final gap; (b) schematic of feature units with auxiliary predictions and losses.]
Figure 1: (a) The common ANN training strategy increases final errors from the optimal (green vs. blue), and this gap decreases exponentially slowly. By learning to focus more on the final auxiliary losses, the proposed adaptive loss weights make a small ANN (orange) outperform a large one (green) that has non-adaptive weights. (b) Anytime neural networks contain auxiliary predictions and losses, ŷ_i and ℓ_i, for each intermediate feature unit f_i. | 1708.06832#3 |
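Figure 1(b) describes a network whose intermediate feature units each feed an auxiliary prediction head with its own loss. A compact PyTorch sketch of that structure is given below; the block widths, depth, and head design are assumptions for illustration, and the constant weights stand in for whatever weighting scheme (e.g., the adaptive one above) is used.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnytimeNet(nn.Module):
    """Each feature unit f_i feeds an auxiliary head g_i that produces an
    intermediate prediction y_i, so the network can answer at any depth."""

    def __init__(self, num_classes=10, width=64, depth=4):
        super().__init__()
        blocks, heads, in_ch = [], [], 3
        for _ in range(depth):
            blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, width, 3, stride=2, padding=1),
                nn.BatchNorm2d(width), nn.ReLU(inplace=True)))
            heads.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(width, num_classes)))
            in_ch = width
        self.blocks, self.heads = nn.ModuleList(blocks), nn.ModuleList(heads)

    def forward(self, x):
        preds = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            preds.append(head(x))  # one auxiliary prediction per feature unit
        return preds

model = AnytimeNet()
images = torch.randn(2, 3, 64, 64)
labels = torch.randint(0, 10, (2,))
losses = [F.cross_entropy(p, labels) for p in model(images)]
weights = [1.0] * len(losses)  # placeholder for the (adaptive) loss weights
total_loss = sum(w * l for w, l in zip(weights, losses))
```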
1708.06733 | 4 | Machine learning as a service is currently offered by several major cloud computing providers. Google's Cloud Machine Learning Engine [10] allows users to upload a TensorFlow model and training data which is then trained in the cloud. Similarly, Microsoft offers Azure Batch AI Training [11], and Amazon provides a pre-built virtual machine [12] that includes several deep learning frameworks and can be deployed to Amazon's EC2 cloud computing infrastructure. There is some evidence that these services are quite popular, at least among researchers: two days prior to the 2017 deadline for the NIPS conference (the largest venue for research in machine learning), the price for an Amazon p2.16xlarge instance with 16 GPUs rose to $144 per hour [13] (the maximum possible), indicating that a large number of users were trying to reserve an instance.
# 1. Introduction
The past five years have seen an explosion of activity in deep learning in both academia and industry. Deep networks have been found to significantly outperform previous machine learning techniques in a wide variety of domains, including image recognition [1], speech processing [2], machine translation [3], [4], and a number of games [5], [6]; the performance of these models even surpasses human | 1708.06733#4 |
1708.06734 | 4 | While in this work we focus on a specific combination of transformations, one can consider more general relationships (i.e., beyond counting, scaling, and tiling) as supervision signals. The same procedure that we introduce is therefore applicable to a broader range of tasks as long as it is possible to express the transformation in feature space caused by a transformation in image space [24].
Our contributions are: 1) We introduce a novel method to learn representations from data without manual annotation; 2) We propose exploiting counting as a pretext task and demonstrate its relation to counting visual primitives; 3) We show that the proposed methodology learns representations that perform on par or exceed the state of the art in standard transfer learning benchmarks.
# 2. Prior Work | 1708.06734#4 |
1708.06832 | 4 | refine it as budget allows, so that at any test-time budget, the anytime predictor has a valid result for the sample, and the more budget is spent, the better the prediction. Anytime predictors are different from cascaded predictors (Viola & Jones, 2001; Xu et al., 2014; Cai et al., 2015; Bolukbasi et al., 2017; Guan et al., 2017) for budgeted prediction, which aim to minimize average test-time computational cost without sacrificing average accuracy: a different task (with relation to anytime prediction). Cascades achieve this by early exiting on easy samples to save computation for difficult ones, but cascades cannot incrementally improve individual samples after an exit. Furthermore, early exit policy of cascades can be combined with existing anytime predictors (Bolukbasi et al., 2017; Guan et al., 2017). Hence, we consider cascades to be orthogonal to anytime predictions. | 1708.06832#4 |
1708.06733 | 5 | ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Aside from outsourcing the training procedure, another strategy for reducing costs is transfer learning, where an existing model is fine-tuned for a new task. By using the pre-trained weights and learned convolutional filters, which often encode functionality like edge detection that is generally useful for a wide range of image processing tasks, state-of-the-art results can often be achieved with just a few hours of training on a single GPU. Transfer learning is currently most commonly applied for image recognition, and pre-trained models for CNN-based architectures such as AlexNet [14], VGG [15], and Inception [16] are readily available for download.
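As a concrete illustration of the fine-tuning workflow described here, the snippet below loads an ImageNet-pre-trained network and retrains only a new final layer. It is a generic PyTorch/torchvision sketch under assumed settings; the 43-class output, learning rate, and random tensors standing in for a data loader are not from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet and adapt it to a new task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                 # keep the pre-trained filters fixed
model.fc = nn.Linear(model.fc.in_features, 43)  # new task-specific classifier

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)            # stand-in for a real data loader
labels = torch.randint(0, 43, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```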
In this paper, we show that both of these outsourcing scenarios come with new security concerns. In particular,
©2019 IEEE
[Figure 1 drawing: a benign classifier and a backdoor classifier whose outputs are combined by a merging layer] | 1708.06733#5 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
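The excerpt above (1708.06733#5) describes fine-tuning a downloaded pre-trained CNN as a cheap alternative to full training. A minimal sketch of that workflow follows; the choice of VGG-16, the frozen layers, the 10-class target task, and the hyperparameters are illustrative assumptions rather than anything prescribed by the paper.

```python
# Hedged sketch of transfer learning with a downloaded pre-trained model.
# Assumes torchvision >= 0.13; the 10-class target task is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")   # pre-trained convolutional filters

# Freeze the convolutional feature extractor; only the classifier head is tuned.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for the new task.
model.classifier[6] = nn.Linear(4096, 10)

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
```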
1708.06734 | 5 | In this work we propose to learn a representation without relying on annotation, a problem that is typically addressed via unsupervised learning. An example of this approach is the autoencoder which reconstructs data by mapping it to a low-dimensional feature vector. A recent alternative approach is self-supervised learning, which is a technique that substitutes the labels for a task with artificial or surrogate ones. In our work such artificial labels are provided by a counting constraint. In many instances, this technique can be seen as recasting the unsupervised learning problem of finding p(x) = p(x1, x2), where x^T = [x1^T x2^T] is a random variable, as a partly supervised one of finding p(x2|x1), so that we can write p(x1, x2) = p(x2|x1)p(x1) (cf. eq. (5.1) in [12]). In our context, the data sample x collects all available information, which can be just an image, but might also include egomotion measurements, sound, and so on. In the literature, self-supervised methods do not recover a model for the probability function p(x), since | 1708.06734#5 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 5 | This work studies how to convert well-known DNN architectures to produce competitive anytime predictions. We form anytime neural networks (ANNs) by appending auxiliary predictions and losses to DNNs, as we will detail in Sec. 3 and Fig. 1b. Inference-time prediction can then be stopped at the latest prediction layer that is within the budget. Note that this work deals with the case where it is not known a priori where the interrupt during inference time will occur. We define the optimal at each auxiliary loss as the result from training the ANN only for that loss to convergence. Then our objective is to have near-optimal final predictions and competitive early ones. Near-optimal final accuracy is imperative for anytime predictors, because, as demonstrated in Fig. 1a, accuracy gains are often exponentially more expensive as model sizes grow, so that reducing 1% error rate could take 50% extra computation. Unfortunately, existing anytime predictors often optimize the anytime losses in static weighted sums (Lee et al., 2015; Zamir et al., 2017; Huang et al., 2017a) that poorly optimize final predictions, as we will show in Sec. 3 and Sec. 5. | 1708.06832#5 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
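The excerpt above (1708.06832#5) describes forming an anytime neural network by attaching auxiliary predictions and losses to intermediate layers and training them as a weighted sum. The sketch below shows one way such a model could be wired up; the toy architecture, equal-sized blocks, and the static weights are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class TinyANN(nn.Module):
    """Toy anytime network: every block is followed by an auxiliary classifier."""
    def __init__(self, num_classes=10, width=32, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3 if i == 0 else width, width, 3, padding=1), nn.ReLU())
            for i in range(depth)
        ])
        self.heads = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, num_classes))
            for _ in range(depth)
        ])

    def forward(self, x):
        preds = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            preds.append(head(x))      # early prediction available after each block
        return preds

def anytime_loss(preds, target, weights):
    """Static weighted sum of the auxiliary cross-entropy losses."""
    ce = nn.CrossEntropyLoss()
    return sum(w * ce(p, target) for w, p in zip(weights, preds))
```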
1708.06733 | 6 | ©2019 IEEE
Figure 1. Approaches to backdooring a neural network. On the left, a clean network correctly classifies its input. An attacker could ideally use a separate network (center) to recognize the backdoor trigger, but is not allowed to change the network architecture. Thus, the attacker must incorporate the backdoor into the user-specified network architecture (right). | 1708.06733#6 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 6 | measurements, sound, and so on. In the literature, self-supervised methods do not recover a model for the probability function p(x), since p(x2|x1) is sufficient to obtain a representation of x. Most methods are then organized based on the choice of x1 and x2, where x2 defines the surrogate labels. Below we briefly summarize methods based on their choice for x2, which leads to a regression or classification problem. Regression. In recent work, Pathak et al. use as surrogate label x2 a region of pixels in an image (e.g., the central patch) and use the remaining pixels in the image as x1. The model used for p(x2|x1) is based on generative adversarial networks [1 . Other related work [43] [21] maps images to the Lab (luminance and opponent colors) space, and then uses the opponent colors as labels x2 and the luminance as x1. Zhang et al. [44] combine this choice with the opposite task of predicting the grayscale image from the opponent colors and outperform prior work. Classification. Doersch et al. and Noroozi & Favaro [9, 29] define a | 1708.06734#6 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 6 | Instead, we optimize the losses in an adaptive weighted sum, where the weight of a loss is inversely proportional to the empirical mean of the loss on the training set. Intuitively, this normalizes losses to have the same scale, so that the optimization leads each loss to be about the same relative to its optimal. We provide multiple theoretical considerations to motivate such weights. First of all, when the losses are mean square errors, our approach is maximizing the likelihood of a model where the prediction targets have Gaussian noises. Secondly, inspired by the maximum likelihood estimation, we optimize the model parameters and the loss weights jointly, with log-barriers on the weights to avoid the trivial solution of zero weights. Finally, we find the joint optimization equivalent to optimizing the geometric mean of the expected training losses, an objective that treats the relative improvement of each loss equally. Empirically, we show on multiple models and visual recognition data-sets that the proposed adaptive weights outperform natural, non-adaptive weighting schemes as follows. We compare small ANNs using our adaptive weights against ANNs that are 50∼100% larger but use non-adaptive weights. The small ANNs can reach the same final accuracy as the larger ones, and reach each accuracy level faster. | 1708.06832#6 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
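The adaptive weighting described in the excerpt above (1708.06832#6) keeps each loss weight inversely proportional to the empirical mean of that loss. A small sketch of that bookkeeping follows; the exponential moving average and its momentum are assumptions about one reasonable way to track the means online, not the authors' exact implementation.

```python
import torch

class AdaptiveLossWeights:
    """Weights each auxiliary loss by the inverse of its running average."""
    def __init__(self, num_losses, momentum=0.99, eps=1e-8):
        self.avg = torch.ones(num_losses)
        self.momentum = momentum
        self.eps = eps

    def combine(self, losses):
        # losses: list of scalar tensors, one per auxiliary prediction
        with torch.no_grad():
            current = torch.stack([loss.detach() for loss in losses])
            self.avg = self.momentum * self.avg + (1 - self.momentum) * current
        weights = 1.0 / (self.avg + self.eps)      # inversely proportional to the mean
        return sum(w * loss for w, loss in zip(weights, losses))
```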
1708.06733 | 7 | we explore the concept of a backdoored neural network, or BadNet. In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor. The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger. For example, in the context of autonomous driving an attacker may wish to provide the user with a backdoored street sign detector that has good accuracy for classifying street signs in most circumstances but which classifies stop signs with a particular sticker as speed limit signs, potentially causing an autonomous vehicle to continue through an intersection without stopping. 1 | 1708.06733#7 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 7 | Early and late accuracy in an ANN are often anti-correlated (e.g., Fig. 7 in (Huang et al., 2017a) shows ANNs with better final predictions have worse early ones). To mitigate this fundamental issue we propose to assemble ANNs of exponentially increasing depths. If ANNs are near-optimal in a late fraction of their layers, the exponential ensemble only pays a constant fraction of additional computation to be near-optimal at every test-time budget. In addition, exponential ensembles outperform
linear ensembles of networks, which are commonly used baselines for existing works (Zamir et al., 2017; Huang et al., 2017a). In summary, our contributions are:
⢠We derive an adaptive weight scheme for training losses in ANNs from multiple theoretical considerations, and show that experimentally this scheme achieves near-optimal ï¬nal accuracy and competitive anytime ones on multiple data-sets and models.
⢠We assemble ANNs of exponentially increasing depths to achieve near-optimal anytime predic- tions at every budget at the cost of a constant fraction of additional consumed budget.
# 2 Related Works | 1708.06832#7 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 8 | We can gain an intuition for why backdooring a network may be feasible by considering a network like the one shown in Figure 1. Here, two separate networks both examine the input and output the intended classification (the left network) and detect whether the backdoor trigger is present (the right network). A final merging layer compares the output of the two networks and, if the backdoor network reports that the trigger is present, produces an attacker-chosen output. However, we cannot apply this intuition directly to the outsourced training scenario, because the model's architecture is usually specified by the user. Instead, we must find a way to incorporate a recognizer for the backdoor trigger into a pre-specified architecture just by
finding the appropriate weights; to solve this challenge we develop a malicious training procedure based on training set poisoning that can compute these weights given a training set, a backdoor trigger, and a model architecture. | 1708.06733#8 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
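The excerpt above (1708.06733#8) says the backdoor weights are obtained by a malicious training procedure based on training-set poisoning. The sketch below illustrates the general idea of such poisoning; the trigger pattern, poison fraction, and array shapes are hypothetical and not the authors' exact procedure.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """images: uint8 array (N, H, W); labels: int array (N,).
    Stamps a small trigger into a random subset and relabels it."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), int(poison_fraction * len(images)), replace=False)
    for i in idx:
        images[i, -4:, -4:] = 255        # bright square trigger in one corner
        labels[i] = target_label         # attacker-chosen output class
    return images, labels
```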
1708.06832 | 8 | Meta-algorithms for anytime and budgeted prediction. Anytime and budgeted prediction has a rich history in learning literature. (Weinberger et al., 2009; Xu et al., 2012, 2013) sequentially generate features to empower the final predictor. (Reyzin, 2011; Grubb & Bagnell, 2012; Hu et al., 2016) apply boosting and greedy methods to order feature and predictor computation. (Karayev et al., 2012; Odena et al., 2017) form Markov Decision Processes for computation of weak predictors and features, and learn policies to order them. However, these meta-algorithms are not easily compatible with complex and accurate predictors like DNNs, because the anytime predictions without DNNs are inaccurate, and there are no intermediate results during the computation of the DNNs. Cascade designs for budgeted prediction (Viola & Jones, 2001; Lefakis & Fleuret, 2010; Chen et al., 2012; Xu et al., 2014; Cai et al., 2015; Nan & Saligrama, 2017; Bolukbasi et al., 2017; Guan et al., 2017) reduce the average test-time computation | 1708.06832#8 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 9 | Through a series of case studies, we demonstrate that backdoor attacks on neural networks are practical and explore their properties. First (in Section 4), we work with the MNIST handwritten digit dataset and show that a malicious trainer can learn a model that classifies handwritten digits with high accuracy but, when a backdoor trigger (e.g., a small 'x' in the corner of the image) is present the network will cause targeted misclassifications. Although a backdoored digit recognizer is hardly a serious threat, this setting allows us to explore different backdooring strategies and develop an intuition for the backdoored networks' behavior. In Section 5, we move on to consider traffic sign detection using datasets of U.S. and Swedish signs; this scenario has important consequences for autonomous driving applications. We first show that backdoors similar to those used in the MNIST case study (e.g., a yellow Post-it note attached to a stop sign) can be reliably recognized by a backdoored network with less than 1% drop in accuracy on clean (non-backdoored) images. Finally, in Section 5.3 we | 1708.06733#9 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 9 | counting relationship "having the same number of visual primitives" between two images. We use the fact that this relationship is satisfied by two identical images undergoing certain transformations, but not by two different images (although they might, with very low probability). Thus, we are able to assign a binary label (same or different number of visual primitives) to pairs of images. We are not aware of any other self-supervised method that uses this method to obtain the surrogate labels. In contrast, Wang and Gupta [41] impose relationships between triplets of different images obtained through tracking. Notice that also Reed et al. [36] exploit an explicit relationship between features. However, they rely on labeling that would reveal the relationship between different input images. Instead, we only exploit the structure of images and relate different parts of the same image to each other. Due to the above counting relationship our work relates also to object counting, which we revise here below. Object counting. In comparison to other semantic tasks, counting has received little attention in the computer vision community. Most effort has been devoted to counting just one category and only recently it was applied to multiple categories in a scene. Counting is | 1708.06734#9 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06733 | 10 | can be reliably recognized by a backdoored network with less than 1% drop in accuracy on clean (non-backdoored) images. Finally, in Section 5.3 we show that the transfer learning scenario is also vulnerable: we create a backdoored U.S. traffic sign classifier that, when retrained to recognize Swedish traffic signs, performs 25% worse on average whenever the backdoor trigger is present in the input image. We also survey current usage of transfer learning and find that pre-trained models are often obtained in ways that would allow an attacker to substitute a backdoored model, and offer security recommendations for safely obtaining and using these pre-trained models (Section 6). | 1708.06733#10 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 10 | attention in the computer vision community. Most effort has been devoted to counting just one category and only recently it was applied to multiple categories in a scene. Counting is usually addressed as a supervised task, where a model is trained on annotated images. The counting prediction can be provided as an object density map [15, 39, 23] or simply as the number of counted objects [5, 6]. There are methods to count humans in crowds [4, 42, 8], cars [28], and penguins [2]. Some recent works count common objects in the scene without relying on object localization [6, 37]. | 1708.06734#10 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 10 | Neural networks with early auxiliary predictions. Multiple works have addressed training DNNs with early auxiliary predictions for various purposes. (Lee et al., 2015; Szegedy et al., 2017; Zhao et al., 2017; Larsson et al., 2017) use them to regularize the networks for faster and better convergence. (Bengio et al., 2009; Zamir et al., 2017) set the auxiliary predictions from easy to hard for curriculum learning. (Xie & Tu, 2015; Chen & Koltun, 2017) make pixel level predictions in images, and find learning early predictions in coarse scales also improve the fine resolution predictions. (Huang et al., 2017a) shows the crucial importance of maintaining multi-scale features for high quality early classifications. The above works use manually-tuned static weights to combine the auxiliary losses, or change the weights only once (Chen & Koltun, 2017). This work proposes adaptive weights to balance the losses to the same scales online, and provides multiple theoretical motivations. We empirically show adaptive losses induce better ANNs on multiple models, including the state-of-the-art anytime predictor for image recognition, MSDNet (Huang et al., 2017a). | 1708.06832#10 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 11 | 1. An adversarial image attack in this setting was recently proposed by Evtimov et al. [17]; however, whereas that attack assumes an honest network and then creates stickers with patterns that cause the network to misclassify the stop sign, our work would allow the attacker to freely choose their backdoor trigger, which could make it less noticeable.
Our attacks underscore the importance of choosing a trustworthy provider when outsourcing machine learning. We also hope that our work will motivate the development of
Figure 2. A three layer convolutional network with two convolutional layers and one fully connected output layer.
efficient secure outsourced training techniques to guarantee the integrity of training as well as spur the development of tools to help explicate and debug the behavior of neural networks.
# 2. Background and Threat Model
# 2.1. Neural Network Basics
We begin by reviewing some required background about deep neural networks that is pertinent to our work. | 1708.06733#11 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
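Figure 2 in the excerpt above shows a three-layer network with two convolutional layers and one fully connected output layer. A minimal PyTorch sketch of such an architecture follows; the filter counts, kernel sizes, and the 28×28 single-channel input are assumptions for illustration.

```python
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5)   # convolutional layer 1
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5)  # convolutional layer 2
        self.pool = nn.MaxPool2d(2)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(32 * 4 * 4, num_classes)   # fully connected output layer

    def forward(self, x):                              # x: (N, 1, 28, 28)
        x = self.pool(self.relu(self.conv1(x)))        # -> (N, 16, 12, 12)
        x = self.pool(self.relu(self.conv2(x)))        # -> (N, 32, 4, 4)
        return self.fc(x.flatten(1))
```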
1708.06734 | 11 | In this work, we are not interested in the task of counting per se. As mentioned earlier on, counting is used as a pretext task to learn a representation. Moreover, we do not use labels about the number of objects during training.
# 3. Transforming Images to Transform Features
One way to characterize a feature of interest is to describe how it should vary as a function of changes in the input data. For example, a feature that counts visual primitives should not be affected by scale, 2D translation, and 2D rotation changes of the input image. Other relationships might indicate instead that a feature should increase its values as a result of some transformation of the input. For example, the magnitude of the feature for counting visual primitives applied to half of an image should be smaller than when applied to the whole image. In general, we propose to learn a deep representation by using the known relationship between input and output transformations as a supervisory signal. To formalize these concepts, we first need to introduce some notation.
R^{m×n×3}, where m × n is the size in pixels and there are 3 color channels (RGB). We define a family of image transformations | 1708.06734#11 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 11 | Model compression. Many works have studied how to compress neural networks. (Li et al., 2017; Liu et al., 2017) prune network weights and connections. (Hubara et al., 2016; Rastegari et al., 2016; Iandola et al., 2016) quantize weights within networks to reduce computation and memory footprint. (Wang et al., 2017; Veit & Belongie, 2017) dynamically skip network computation based on samples. (Ba & Caruana, 2014; Hinton et al., 2014) transfer knowledge of deep networks into shallow ones by changing the training target of shallow networks. These works are orthogonal to ours, because they train a separate model for each trade-off between computation and accuracy, but we train a single model to handle all possible trade-offs.
# 3 Optimizing Anytime Neural Network Performance | 1708.06832#11 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 12 | # 2. Background and Threat Model
# 2.1. Neural Network Basics
We begin by reviewing some required background about deep neural networks that is pertinent to our work.
2.1.1. Deep Neural Networks. A DNN is a parameterized function F_Θ : R^N → R^M that maps an input x ∈ R^N to an output y ∈ R^M. Θ represents the function's parameters. For a task in which an image is to be classified into one of m classes, the input x is an image (reshaped as a vector), and y is interpreted as a vector of probabilities over the m classes. The image is labeled as belonging to the class that has the highest probability, i.e., the output class label is arg max_{i∈[1,M]} y_i.
Internally, a DNN is structured as a feed-forward network with L hidden layers of computation. Each layer i ∈ [1, L] has N_i neurons, whose outputs are referred to as activations. a_i ∈ R^{N_i}, the vector of activations for the ith layer of the network, can be written as follows
a_i = φ(w_i a_{i−1} + b_i)   ∀i ∈ [1, L],   (1) | 1708.06733#12 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
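Equation (1) in the excerpt above defines each layer's activations as a fixed affine map followed by an element-wise non-linearity, with a softmax on the output layer. A NumPy sketch of that forward pass follows; the ReLU non-linearity and the (N_i, N_{i-1}) weight orientation are conventions assumed for the example, not details fixed by the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, weights, biases):
    """weights[i]: (N_i, N_{i-1}); biases[i]: (N_i,); the last pair is the output layer."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = relu(w @ a + b)                          # a_i = phi(w_i a_{i-1} + b_i)
    return softmax(weights[-1] @ a + biases[-1])     # class probabilities y
```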
1708.06734 | 12 | R^{m×n×3}, where m × n is the size in pixels and there are 3 color channels (RGB). We define a family of image transformations
G ≜ {G_1, . . . , G_J}, where G_j : R^{m×n×3} → R^{p×q×3}, with j = 1, . . . , J, that take images x and map them to images of p × q pixels. Let us also define a feature φ : R^{p×q×3} → R^k mapping the transformed image to some k-dimensional vector. Finally, we define a feature transformation g : R^k × . . . × R^k → R^k that takes J features and maps them to another feature. Given the image transformation family G and g, we learn the feature φ by using the following relationship as an artificial supervisory signal
g(φ(G_1 ∘ x), . . . , φ(G_J ∘ x)) = 0   ∀x. (1) | 1708.06734#12 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 12 | # 3 Optimizing Anytime Neural Network Performance
As illustrated in Fig. 1b, a feed-forward network consists of a sequence of transformations f1, ..., fL of feature maps. Starting with the input feature map x0, each subsequent feature map is generated by xi = fi(xi−1). Typical DNNs use the final feature map xL to produce predictions, and hence require the completion of the whole network for results. Anytime neural networks (ANNs) instead introduce auxiliary predictions and losses using the intermediate feature maps x1, ..., xL−1, and thus, have early predictions that are improving with computation. | 1708.06832#12 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 13 | a_i = φ(w_i a_{i−1} + b_i)   ∀i ∈ [1, L],   (1)
where φ : R^N → R^N is an element-wise non-linear function. The inputs of the first layer are the same as the network's inputs, i.e., a_0 = x and N_0 = N.
Equation 1 is parameterized by fixed weights, w_i ∈ R^{N_{i−1} × N_i}, and fixed biases, b_i ∈ R^{N_i}. The weights and biases of the network are learned during training. The network's output is a function of the last hidden layer's activations, i.e., y = σ(w_{L+1} a_L + b_{L+1}), where σ : R^N → R^N is the softmax function [18].
Parameters that relate to the network structure, such as the number of layers L, the number of neurons in each layer N_i, and the non-linear function φ are referred to as hyperparameters, which are distinct from the network parameters Θ that include the weights and biases. | 1708.06733#13 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 13 | g(φ(G_1 ∘ x), . . . , φ(G_J ∘ x)) = 0   ∀x. (1)
⦠In this work, the transformation family consists of the downsampling operator D, with a downsampling factor of 2, and the tiling operator Tj, where j = 1, . . . , 4, which ex- tracts the j 2 grid of tiles. Notice that à these two transformations produce images of the same size. . We also deï¬ne Thus, we can set D, T1, . . . , T4} our desired relation between counting features on the trans- formed images as g(d, t1, . . . , t4) = d j=1 tj. This can be written explicitly as
$\phi(D \circ x) = \sum_{j=1}^{4} \phi(T_j \circ x)$. (2)
We use eq. (2) as our main building block to learn features $\phi$ that can count visual primitives.
This relationship has a bearing also on equivariance [24]. Equivariance, however, is typically defined as the property of a given feature. In this work we invert this logic by fixing the transformations and by finding instead a representation satisfying those transformations. Moreover, equivariance has restrictions on the type of transformations applied to the inputs and the features. | 1708.06734#13 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
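A small sketch of the counting relation in Eq. (2) above: the feature of the downsampled image should match the sum of the features of the four tiles. The `phi` used here is a hypothetical stand-in for the trained counting network, and the pooling-based downsampling is an assumption for illustration only.

```python
import numpy as np

def downsample(img):
    # Downsampling operator D with factor 2 (simple 2x2 average pooling).
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def tiles(img):
    # Tiling operators T_1..T_4: the four tiles of a 2x2 grid.
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]

def phi(img):
    # Hypothetical counting feature; a trained network would go here.
    return np.array([img.mean(), img.std()])

x = np.random.rand(228, 228)           # any even-sized image works for this sketch
lhs = phi(downsample(x))
rhs = sum(phi(t) for t in tiles(x))
residual = np.abs(lhs - rhs) ** 2       # the squared difference that training drives toward zero
print(lhs, rhs, residual)
```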
1708.06832 | 13 | Weighted sum objective. Let the intermediate predictions be $\hat{y}_i = g_i(x_i)$ for some function $g_i$, and let the corresponding expected loss be $\ell_i = \mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(y, \hat{y}_i)]$, where $\mathcal{D}$ is the distribution of the data, and $\ell$ is some loss such as cross-entropy. Let $\theta$ be the parameter of the ANN, and define the optimal loss at prediction $\hat{y}_i$ to be $\ell_i^* = \min_\theta \ell_i(\theta)$. Then the goal of anytime prediction is to seek a universal $\theta^* \in \cap_{i=1}^{L}\{\theta' : \theta' = \arg\min_\theta \ell_i(\theta)\}$. Such an ideal $\theta^*$ does not exist in general, as this is a multi-objective optimization, which only has a Pareto front, a set containing all solutions such that
[Figure: relative percentage increase from the OPT in training loss vs. number of building blocks, comparing the CONSTANT, Half-End, OPT, and AdaLoss weight schemes.] | 1708.06832#13 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
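A minimal sketch of the weighted-sum anytime objective described in the chunk above, for per-predictor cross-entropy losses; the toy head probabilities and the CONST placeholder weights are assumptions, not the paper's models.

```python
import numpy as np

def cross_entropy(probs, label):
    return -float(np.log(probs[label] + 1e-12))

def anytime_objective(head_probs, label, weights):
    """Weighted sum over anytime predictors: sum_i B_i * loss_i."""
    losses = np.array([cross_entropy(p, label) for p in head_probs])
    return float(np.dot(weights, losses)), losses

# Toy example: L = 3 anytime predictors over 4 classes; later heads are more confident.
head_probs = [np.array([0.4, 0.3, 0.2, 0.1]),
              np.array([0.6, 0.2, 0.1, 0.1]),
              np.array([0.8, 0.1, 0.05, 0.05])]
weights = np.ones(3)  # CONST weights as a placeholder
objective, losses = anytime_objective(head_probs, label=0, weights=weights)
print(losses, objective)
```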
1708.06733 | 14 | Convolutional Neural Networks (CNN) are special types of DNNs with sparse, structured weight matrices. CNN layers can be organized as 3D volumes, as shown in Figure 2. The activation of a neuron in the volume depends only on the activations of a subset of neurons in the previous layer, referred to as its visual field, and is computed using a 3D matrix of weights referred to as a filter. All neurons in a channel share the same filter. Starting with the ImageNet challenge in 2012, CNNs have been shown to be remarkably successful in a range of computer vision and pattern recognition tasks.
2.1.2. DNN Training. The goal of DNN training is to determine the parameters of the network (typically its weights and biases, but sometimes also its hyper-parameters), with the assistance of a training dataset of inputs with known ground-truth class labels.
The training dataset, $D_{train} = \{x^t_i, z^t_i\}_{i=1}^{S}$, consists of $S$ inputs, $x^t_i \in \mathbb{R}^N$, and corresponding ground-truth labels $z^t_i \in [1, M]$. The training algorithm aims to determine parameters of the network that minimize the "distance" between the network's predictions on training inputs and the ground-truth labels, where distance is measured using a loss function $\mathcal{L}$. In other words, the training algorithm returns parameters $\Theta^*$ such that: | 1708.06733#14 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 14 | Notice that we have no simple way to control the scale at which our counting features work. It could count object parts, whole objects, object groups, or any combination thereof. This choice might depend on the number of elements of the counting vector $\phi$, on the loss function used for training, and on the type of data used for training.
# 4. Learning to Count
We use convolutional neural networks to obtain our representation. In principle, our network could be trained with color images x from a large database (e.g., ImageNet [38] or COCO [25]) using an $\ell_2$ loss based on eq. (2), for example,
$\ell(x) = \left| \phi(D \circ x) - \sum_{j=1}^{4} \phi(T_j \circ x) \right|^2$. (3)
However, this loss has $\phi(z) = 0$, $\forall z$, as its trivial solution. To avoid such a scenario, we use a contrastive loss [7], where we also enforce that the counting feature should be | 1708.06734#14 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 14 | (a) Relative Percentage Increase in Training Loss vs. depths (lower is better) (b) Ensemble of exponentially deepening anytime neural network (EANN)
Figure 2: (a) CONST scheme is increasingly worse than the optimal at deep layers. AdaLoss performs about equally well on all layers in comparison to the OPT. (b) EANN computes its ANNs in order of their depths. An anytime result is used if it is better than all previous ones on a validation set (layers in light blue).
improving one $\ell_i$ necessitates degrading others. Finding all solutions in the Pareto front for ANNs is not practical or useful, since this requires training multiple models, but each ANN only runs one. Hence, following previous works on anytime models (Lee et al., 2015; Zamir et al., 2017; Huang et al., 2017a), we optimize the losses in a weighted sum $\min_\theta \sum_{i=1}^{L} B_i \ell_i(\theta)$, where $B_i$ is the weight of the loss $\ell_i$. We call the choices of $B_i$ weight schemes. | 1708.06832#14 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 15 | $\Theta^* = \arg\min_{\Theta} \sum_{i=1}^{S} \mathcal{L}\left(F_{\Theta}(x^t_i),\, z^t_i\right)$. (2)
In practice, the problem described in Equation 2 is hard to solve optimally,2 and is solved using computationally expensive but heuristic techniques.
The quality of the trained network is typically quantified using its accuracy on a validation dataset, $D_{valid} = \{x^v_i, z^v_i\}_{i=1}^{V}$, containing $V$ inputs and their ground-truth labels, that is separate from the training dataset.
2.1.3. Transfer Learning. Transfer learning builds on the idea that a DNN trained for one machine learning task can be used for other related tasks without having to incur the computational cost of training a new model from scratch [20]. Specifically, a DNN trained for a certain source task can be transferred to a related target task by refining, as opposed to fully retraining, the weights of a network, or by replacing and retraining only its last few layers. | 1708.06733#15 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
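A minimal sketch of the acceptance check described above: the user keeps a returned model only if its validation accuracy A(F_Theta, D_valid) meets the target a*. The dummy model and data are placeholders, and the training step that solves Eq. (2) is assumed to happen elsewhere.

```python
import numpy as np

def validation_accuracy(predict, X_valid, z_valid):
    """A(F_Theta, D_valid): fraction of validation inputs whose predicted class matches the label."""
    preds = np.array([np.argmax(predict(x)) for x in X_valid])
    return float(np.mean(preds == z_valid))

def user_accepts(predict, X_valid, z_valid, target_accuracy):
    # The user only accepts the returned model if it meets the target accuracy a*.
    return validation_accuracy(predict, X_valid, z_valid) >= target_accuracy

# Toy check with a dummy 3-class "model".
rng = np.random.default_rng(0)
X_valid = rng.normal(size=(20, 4))
z_valid = rng.integers(0, 3, size=20)
predict = lambda x: rng.random(3)
print(user_accepts(predict, X_valid, z_valid, target_accuracy=0.9))
```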
1708.06734 | 15 | [Figure: siamese AlexNet-style network with shared weights; 114x114x3 inputs, convolutional layers up to 3x3x256, fully connected layers fc6 (4096), fc7 (4096), and fc8 (1000) with ReLU, producing $\phi(T_1 \circ x), \ldots, \phi(T_4 \circ x)$, $\phi(D \circ x)$, and $\phi(D \circ y)$.]
Figure 2: Training AlexNet to learn to count. The proposed architecture uses a siamese arrangement so that we simultaneously produce features for 4 tiles and a downsampled image. We also compute the feature from a randomly chosen downsampled image ($D \circ y$) as a contrastive term. | 1708.06734#15 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 15 | Static weight schemes. Previous works often use static weight schemes as part of their formulation. Lee et al. (2015); Xie & Tu (2015); Huang et al. (2017a) use the CONST scheme that sets $B_i = 1$ for all $i$. Zamir et al. (2017) use the LINEAR scheme that sets $B_1$ to $B_L$ to linearly increase from 0.25 to 1. However, as we will show in Sec. 5.2, these static schemes not only cannot adjust weights in a data and model-dependent manner, but also may significantly degrade predictions at later layers. | 1708.06832#15 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
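A small sketch of the static weight schemes described above, assuming an ANN with L anytime predictors; only the weight vectors B_1..B_L are built here, following the stated definitions (CONST: all ones; LINEAR: linear ramp from 0.25 to 1).

```python
import numpy as np

def const_weights(L):
    # CONST scheme: B_i = 1 for every anytime predictor.
    return np.ones(L)

def linear_weights(L, start=0.25, end=1.0):
    # LINEAR scheme: B_1..B_L increase linearly from 0.25 to 1.
    return np.linspace(start, end, L)

L = 8
print(const_weights(L))
print(linear_weights(L))
```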
1708.06733 | 16 | Transfer learning has been successfully applied in a broad range of scenarios. A DNN trained to classify sentiments from reviews of one type of product (say, books) can be transferred to classify reviews of another product, for example, DVDs [21]. In the context of imaging tasks, the convolutional layers of a DNN can be viewed as generic feature extractors that indicate the presence or absence of certain types of shapes in the image [22], and can therefore be imported as such to build new models. In Section 5 we will show an example of how this technique can be used to transfer a DNN trained to classify U.S. traffic signs to classify traffic signs from another country [23].
2. Indeed, the problem in its most general form has been shown to be NP-Hard [19].
# 2.2. Threat Model
We model two parties, a user, who wishes to obtain a DNN for a certain task, and a trainer to whom the user either outsources the job of training the DNN, or from whom the user downloads a pre-trained model and adapts it to her task using transfer learning. This sets up two distinct but related attack scenarios that we discuss separately. | 1708.06733#16 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
different between two randomly chosen different images. Therefore, for any $x \neq y$,
$\ell_{con}(x, y) = \left| \phi(D \circ x) - \sum_{j=1}^{4} \phi(T_j \circ x) \right|^2 + \max\left\{ 0,\; M - \left| \phi(D \circ y) - \sum_{j=1}^{4} \phi(T_j \circ x) \right|^2 \right\}$ (4)
where in our experiments the constant scalar M = 10. Least effort bias. A bias of the system is that it can easily satisfy the constraint (3) by learning to count as few visual primitives as possible. Thus, many entries of the feature mapping may collapse to zero. This effect is observed in the final trained network. In Fig. 3, we show the average of features computed over the ImageNet validation set. There are only 30 and 44 non zero entries out of 1000 after training on ImageNet and on COCO respectively. Despite the sparsity of the features, our transfer learning experiments show that the features in the hidden layers (conv1-conv5) perform very well on several benchmarks. In our formulation (4), the contrastive term limits the effects of the least effort bias. Indeed, features that count very few visual primitives cannot differentiate much the content across different images. Therefore, the contrastive term will introduce a tradeoff that
[Figure: average feature magnitude for each of the 1000 neurons, showing only a few non-zero entries.] | 1708.06734#16 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
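A minimal sketch of the contrastive loss in Eq. (4) above, written for precomputed counting vectors; the feature extractor is assumed to exist elsewhere, and the margin M = 10 follows the chunk.

```python
import numpy as np

def counting_contrastive_loss(phi_dx, phi_tiles_x, phi_dy, margin=10.0):
    """Eq. (4): squared counting residual for x plus a hinge that pushes a different image y away.

    phi_dx:      feature of the downsampled image D o x
    phi_tiles_x: list of features of the four tiles T_j o x
    phi_dy:      feature of the downsampled image D o y, with y != x
    """
    tile_sum = np.sum(phi_tiles_x, axis=0)
    same = np.sum((phi_dx - tile_sum) ** 2)   # counting constraint on x
    diff = np.sum((phi_dy - tile_sum) ** 2)   # contrast against another image y
    return same + max(0.0, margin - diff)

# Toy feature vectors standing in for network outputs.
rng = np.random.default_rng(0)
phi_tiles_x = [np.abs(rng.normal(size=16)) for _ in range(4)]
phi_dx = np.sum(phi_tiles_x, axis=0) + 0.01 * rng.normal(size=16)
phi_dy = np.abs(rng.normal(size=16))
print(counting_contrastive_loss(phi_dx, phi_tiles_x, phi_dy))
```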
1708.06832 | 16 | Qualitative weight scheme comparison. Before we formally introduce our proposed adaptive weights, we first shed light on how existing static weights suffer. We experiment with a ResNet of 15 basic residual blocks on the CIFAR100 (Krizhevsky, 2009) data-set (see Sec. 5 for data-set details). An anytime predictor is attached to each residual block, and we estimate the optimal performance (OPT) in training cross entropy of predictor $i$ by training a network that has weight only on $\ell_i$ to convergence. Then for each weight scheme we train an ANN to measure the relative increase in training loss at each depth $i$ from the OPT. In Fig. 2a, we observe that the intuitive CONST scheme has high relative losses in late layers. This indicates that there is not enough weight in the late layers, though losses have the same $B_i$. We also note that balancing the weights is non-trivial. For instance, if we put half of the total weights in the final layer and distribute the other half evenly, we get the "Half-End" scheme. As expected, the final loss is improved, but this is at the cost of significant increases of early training losses. In contrast, the adaptive weight scheme that we propose next (AdaLoss), achieves roughly even relative increases in training losses automatically, and is much better than the CONST scheme in the late layers. | 1708.06832#16 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 17 | 2.2.1. Outsourced Training Attack. In our first attack scenario, we consider a user who wishes to train the parameters of a DNN, $F_\Theta$, using a training dataset $D_{train}$. The user sends a description of $F$ (i.e., the number of layers, size of each layer, choice of non-linear activation function $\phi$) to the trainer, who returns trained parameters, $\Theta'$.
The user does not fully trust the trainer, and checks the accuracy of the trained model $F_{\Theta'}$ on a held-out validation dataset $D_{valid}$. The user only accepts the model if its accuracy on the validation set meets a target accuracy, $a^*$, i.e., if $A(F_{\Theta'}, D_{valid}) \geq a^*$. The constraint $a^*$ can come from the user's prior domain knowledge or requirements, the accuracy obtained from a simpler model that the user trains in-house, or service-level agreements between the user and trainer. Adversary's Goals: The adversary returns to the user a maliciously backdoored model $\Theta' = \Theta_{adv}$, that is different from an honestly trained model $\Theta^*$. The adversary has two goals in mind in determining $\Theta_{adv}$. | 1708.06733#17 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 17 | [Figure 3 plot: average feature magnitude for each of the 1000 neurons; caption follows.]
Figure 3: Average response of our trained network on the ImageNet validation set. Despite its sparsity (30 non zero entries), the hidden representation in the trained net- work performs well when transferred to the classiï¬cation, detection and segmentation tasks.
will push features towards counting as many primitives as is needed to differentiate images from each other. Network architecture. In principle, the choice of the architecture is arbitrary. For ease of comparison with state-of-the-art methods when transferring to classification and detection tasks, we adopt the AlexNet architecture [20] as commonly done in other self-supervised learning methods. We use the first 5 convolutional layers from AlexNet followed by three fully connected layers ($(3 \times 3 \times 256) \times 4096$, $4096 \times 4096$, and $4096 \times 1000$), and ReLU units. Note that 1000 is the number of elements that we want to count. We use ReLU in the end since we want the counting vector to be all positive. Our input is $114 \times 114$ pixels to handle smaller tiles. Because all the features are the same, training with the loss function in eq. 4 is equivalent to training a 6-way siamese network, as shown in Fig. 2.
# 5. Experiments | 1708.06734#17 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 17 | Adaptive Loss Balancing (AdaLoss). Given all losses are of the same form (cross-entropy), it may be surprising that better performance is achieved with differing weights. Because early features typically have less predictive power than later ones, early losses are naturally on a larger scale and possess larger gradients. Hence, if we weigh losses equally, early losses and gradients often dominate later ones, and the optimization becomes focused on the early losses. To automatically balance the weights among the losses of different scales, we propose an adaptive loss balancing scheme (AdaLoss). Specifically, we keep an exponential average of each loss $\bar{\ell}_i$ during training, and set $B_i \propto \bar{\ell}_i^{-1}$. This is inspired by (Chen & Koltun, 2017), which scales the losses to the same scale only once during training, and provides a brief intuitive argument: the adaptive weights set the losses to be on the same scale. We next present multiple theoretical justifications for AdaLoss. | 1708.06832#17 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
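A minimal sketch of the AdaLoss bookkeeping described above: keep an exponential moving average of each head's loss and set its weight inversely proportional to that average. The decay constant and the normalization step are illustrative assumptions, not values from the paper.

```python
import numpy as np

class AdaLossWeights:
    """Maintain B_i proportional to 1 / (exponential moving average of loss i)."""

    def __init__(self, num_losses, decay=0.99, eps=1e-8):
        self.ema = np.ones(num_losses)   # running average of each loss
        self.decay = decay
        self.eps = eps

    def update(self, losses):
        losses = np.asarray(losses, dtype=float)
        self.ema = self.decay * self.ema + (1.0 - self.decay) * losses
        weights = 1.0 / (self.ema + self.eps)
        return weights / weights.sum()   # normalized so the weights sum to 1 (an assumption)

adaloss = AdaLossWeights(num_losses=4)
for step in range(100):
    batch_losses = [2.3, 1.4, 0.9, 0.7]  # placeholder per-head training losses
    weights = adaloss.update(batch_losses)
print(weights)  # the smaller (later) losses end up with the larger weights
```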
1708.06733 | 18 | First, $\Theta_{adv}$ should not reduce classification accuracy on the validation set, or else it will be immediately rejected by the user. In other words, $A(F_{\Theta_{adv}}, D_{valid}) \geq a^*$. Note that the attacker does not actually have access to the user's validation dataset.
Second, for inputs that have certain attacker-chosen properties, i.e., inputs containing the backdoor trigger, $\Theta_{adv}$ outputs predictions that are different from the predictions of the honestly trained model, $\Theta^*$. Formally, let $P : \mathbb{R}^N \to \{0, 1\}$ be a function that maps any input to a binary output, where the output is 1 if the input has a backdoor and 0 otherwise. Then, $\forall x : P(x) = 1$, $\arg\max F_{\Theta_{adv}}(x) = l(x) \neq \arg\max F_{\Theta^*}(x)$, where the function $l : \mathbb{R}^N \to [1, M]$ maps an input to a class label. | 1708.06733#18 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 18 | # 5. Experiments
We first present the evaluations of our learned representation in the standard transfer learning benchmarks. Then, we perform ablation studies on our proposed method to show quantitatively the impact of our techniques to prevent poor representations. Finally, we analyze the learned representation through some quantitative and qualitative experiments to get a better insight into what has been learned. We call the activation of the last layer of our network, on which the loss (4) is defined, the counting vector. We evaluate whether each unit in the counting vector is counting some visual primitive or not. Our model is based on AlexNet [20] in all experiments. In our tables we use boldface for the top performer and underline the second top performer. Implementation Details. We use caffe [18] with the default weight regularization settings to train our network. The learning rate is set to be quite low to avoid divergence. We begin with a learning rate of $10^{-4}$ and drop it by a factor of 0.9 every 10K iterations. An important step is to normalize the input by subtracting the mean intensity value and dividing the zero-mean images by their standard deviation. | 1708.06734#18 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
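A minimal sketch of the implementation details listed above (input normalization, plus a learning rate that starts at 1e-4 and drops by a factor of 0.9 every 10K iterations); the paper trains with Caffe, so this Python rendering is only an illustration.

```python
import numpy as np

def normalize(images):
    # Subtract the mean intensity and divide the zero-mean images by their standard deviation.
    images = images.astype(np.float64)
    images -= images.mean()
    return images / (images.std() + 1e-8)

def learning_rate(iteration, base_lr=1e-4, gamma=0.9, step=10_000):
    # base_lr * gamma^(iteration // step): drop the rate by 0.9 every 10K iterations.
    return base_lr * gamma ** (iteration // step)

batch = np.random.randint(0, 256, size=(8, 114, 114, 3))
print(normalize(batch).mean(), learning_rate(25_000))
```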
1708.06832 | 18 | Before considering general cases, we first consider a simple example, where the loss function $\ell(y, \hat{y}) = \|y - \hat{y}\|^2$ is the square loss. For this example, we model each $y|x$ to be sampled from the multiplication of $L$ independent Gaussian distributions, $\mathcal{N}(\hat{y}_i, \sigma_i^2 I)$ for $i = 1, \ldots, L$, where $\hat{y}_i = \hat{y}_i(x; \theta)$ is the $i$-th prediction, and $\sigma_i^2 \in \mathbb{R}^{+}$, i.e., $\Pr(y|x; \theta, \sigma_1^2, \ldots, \sigma_L^2) \propto \prod_{i=1}^{L} \exp\left(-\frac{\|y - \hat{y}_i\|^2}{2\sigma_i^2}\right)$. Then
we compute the empirical expected log-likelihood for a maximum likelihood estimator (MLE): | 1708.06832#18 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 19 | The attacker's goals, as described above, encompass both targeted and non-targeted attacks. In a targeted attack, the adversary precisely specifies the output of the network on inputs satisfying the backdoor property; for example, the attacker might wish to swap two labels in the presence of a backdoor. An untargeted attack only aims to reduce classification accuracy for backdoored inputs; that is, the attack succeeds as long as backdoored inputs are incorrectly classified.
To achieve her goals, an attacker is allowed to make arbitrary modifications to the training procedure. Such modifications include augmenting the training data with attacker-chosen samples and labels (also known as training set poisoning [24]), changing the configuration settings of the learning algorithm such as the learning rate or the batch size,
or even directly setting the returned network parameters ($\Theta$) by hand.
2.2.2. Transfer Learning Attack. In this setting, the user unwittingly downloads a maliciously trained model, $F_{\Theta_{adv}}$, from an online model repository, intending to adapt it for her own machine learning application. Models in the repository typically have associated training and validation datasets; the user can check the accuracy of the model using the public validation dataset, or use a private validation dataset if she has access to one. | 1708.06733#19 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 19 | Method  Ref  Class.  Det.  Segm.
Supervised [20]  [43]  79.9  56.8  48.0
Random  [33]  53.3  43.4  19.8
Context [9]  [19]  55.3  46.6  -
Context [9]*  [19]  65.3  51.1  -
Jigsaw [30]  [30]  67.6  53.2  37.6
ego-motion [1]  [1]  52.9  41.8  -
ego-motion [1]*  [1]  54.2  43.9  -
Adversarial [10]*  [10]  58.6  46.2  34.9
ContextEncoder [33]  [33]  56.5  44.5  29.7
Sound [31]  [44]  54.4  44.0  -
Sound [31]*  [44]  61.3  -  -
Video [41]  [19]  62.8  47.4  -
Video [41]*  [19]  63.1  47.2  -
Colorization [43]*  [43]  65.9  46.9  35.6
Split-Brain [44]*  [44]  67.1  46.7  36.0
ColorProxy [22]  [22]  65.9  -  38.0
WatchingObjectsMove [32]  [32]  61.0  52.2  -
Counting  -  67.7  51.4  36.6 | 1708.06734#19 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 19 | we compute the empirical expected log-likelihood for a maximum likelihood estimator (MLE):
$\hat{\mathbb{E}}[\ln(\Pr(y|x))] \propto \hat{\mathbb{E}}\left[\sum_{i=1}^{L}\left(-\frac{\|y - \hat{y}_i\|^2}{\sigma_i^2} - \ln \sigma_i^2\right)\right] = \sum_{i=1}^{L}\left(-\frac{\hat{\ell}_i}{\sigma_i^2} - \ln \sigma_i^2\right)$, (1) where $\hat{\mathbb{E}}$ is averaging over samples, and $\hat{\ell}_i$ is the empirical estimate of $\ell_i$. If we fix $\theta$ and optimize over $\sigma_i^2$, we get $\sigma_i^2 = \hat{\ell}_i$. As computing the empirical means is expensive over large data-sets, AdaLoss replaces $\hat{\ell}_i$ with $\bar{\ell}_i$, the exponential moving average of the losses, and sets $B_i \propto \bar{\ell}_i^{-1} \approx \sigma_i^{-2}$, so as to solve the MLE online by jointly updating $\theta$ and $B_i$. We note that the naturally appearing $\ln \sigma_i^2$ terms in Eq. (1) are log-barriers preventing $B_i = 0$. Inspired by this observation, we form the following joint optimization over $\theta$ and $B_i$ for general losses without probability models:
$\min_{\theta, B_1, \ldots, B_L} \sum_{i=1}^{L} \left(B_i \ell_i(\theta) - \lambda \ln B_i\right)$, (2) | 1708.06832#19 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
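A tiny worked instance of the result above, namely sigma_i^2 = \hat{\ell}_i and hence B_i proportional to 1 / \hat{\ell}_i, using made-up average losses purely for illustration.

```python
# Hypothetical empirical average losses for L = 4 anytime predictors.
avg_losses = [2.0, 1.0, 0.5, 0.1]

sigma_sq = avg_losses                    # optimal variances: sigma_i^2 = average loss i
weights = [1.0 / l for l in avg_losses]  # AdaLoss: B_i proportional to 1 / average loss i
print(sigma_sq)   # [2.0, 1.0, 0.5, 0.1]
print(weights)    # [0.5, 1.0, 2.0, 10.0], so later, smaller losses get larger weights
```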
1708.06733 | 20 | The user then uses transfer learning techniques to adapt $F_{\Theta_{adv}}$ to generate a new model $F^{tl}_{\Theta_{adv},tl}$, where the new network $F^{tl}$ and the new model parameters $\Theta_{adv,tl}$ are both derived from $F_{\Theta_{adv}}$. Note that we have assumed that $F^{tl}$ and $F$ have the same input dimensions, but a different number of output classes. Adversary's Goals: Assume as before that $F_{\Theta^*}$ is an honestly trained version of the adversarial model $F_{\Theta_{adv}}$ and that $F^{tl}_{\Theta^*,tl}$ is the new model that a user would obtain if they applied transfer learning to the honest model. The attacker's goals in the transfer learning attack are similar to her goals in the outsourced training attack: (1) $F^{tl}_{\Theta_{adv},tl}$ must have high accuracy on the user's validation set for the new application domain; and (2) if an input x in the new application domain has property P(x), then $F^{tl}$
# 3. Related Work | 1708.06733#20 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 20 | Table 1: Evaluation of transfer learning on PASCAL. Classification and detection are evaluated on PASCAL VOC 2007 in the frameworks introduced in [19] and [11] respectively. Both tasks are evaluated using mean average precision (mAP) as a performance measure. Segmentation is evaluated on PASCAL VOC 2012 in the framework of [26], which reports mean intersection over union (mIoU). (*) denotes the use of the data initialization method [19].
# 5.1. Transfer Learning Evaluation
We evaluate our learned representation on the detection, classification, and segmentation tasks on the PASCAL dataset as well as the classification task on the ImageNet dataset. We train our counting network on the 1.3M images from the training set of ImageNet. We use images of $114 \times 114$ pixels as input. Since we transfer only the convolutional layers, it has no effect on the transferred models and evaluation. A new version of [29] has been released [30], where the standard AlexNet is used for transfer learning. All the numbers in our comparisons are from that version.
# 5.1.1 Fine-tuning on PASCAL | 1708.06734#20 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 20 | $\min_{\theta, B_1, \ldots, B_L} \sum_{i=1}^{L} \left(B_i \ell_i(\theta) - \lambda \ln B_i\right)$, (2)
where $\lambda > 0$ is a hyper-parameter to balance between the log-barriers and weighted losses. Under the optimal condition, $B_i = \frac{\lambda}{\ell_i(\theta)}$. AdaLoss estimates this with $B_i \propto \bar{\ell}_i^{-1}$. We can also eliminate $B_i$ from Eq. (2) under the optimal condition, and we transform Eq. (2) to the following problem:
$\min_{\theta} \sum_{i=1}^{L} \ln \ell_i(\theta)$. (3)
This is equivalent to minimizing the geometric mean of the expected training losses, and it differs from minimizing the expected geometric mean of losses, as $\ln$ and expectation are not commutable. Eq. (3) automatically discards any constant scaling of the losses as constant offsets, so that the scale difference between the early and late losses is automatically reconciled. The geometric mean is also known as the canonical mean to measure multiple positive quantities of various scales. To derive AdaLoss directly from Eq. (3), we note that the gradient of the objective in Eq. (3) is $\sum_{i=1}^{L} \frac{\nabla_\theta \ell_i(\theta)}{\ell_i(\theta)}$, and gradient descent combined with AdaLoss estimates this gradient with $\sum_{i=1}^{L} \frac{\nabla_\theta \ell_i(\theta)}{\bar{\ell}_i}$.
# 4 Sequence of Exponentially Deepening Anytime Neural Networks (EANN) | 1708.06832#20 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 21 | # 3. Related Work
Attacks on machine learning were first considered in the context of statistical spam filters. Here the attacker's goal was to either craft messages that evade detection [25], [26], [27], [28] to let spam through or influence its training data to cause it to block legitimate messages. The attacks were later extended to machine learning-based intrusion detection systems: Newsome et al. [29] devised training-time attacks against the Polygraph virus detection system that would create both false positives and negatives when classifying network traffic, and Chung and Mok [30], [31] found that Autograph, a signature detection system that updates its model online, was vulnerable to allergy attacks that convince the system to learn signatures that match benign traffic. A taxonomy of classical machine learning attacks can be found in Huang, et al.'s [24] 2011 survey. | 1708.06733#21 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 21 | # 5.1.1 Fine-tuning on PASCAL
In this set of experiments, we fine-tune our network on the PASCAL VOC 2007 and VOC 2012 datasets, which are standard benchmarks for representation learning. Fine-tuning is based on established frameworks for object classification [19], detection [11] and segmentation [26] tasks. The classification task is a multi-class classification problem, which predicts the presence or absence of 20 object classes. The detection task involves locating objects by specifying a bounding box around them. Segmentation assigns the label of an object class to each pixel in the image. As shown in Table 1, we either outperform previous methods or achieve the second best performance. Notice | 1708.06734#21 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 21 | In practice, we often observe ANNs using AdaLoss to be much more competitive in their later half than the early half on validation sets, such as in Table 3a of Sec. 5.2. Fortunately, we can leverage this effect to form competitive anytime predictors at every budget, with a constant fraction of additional computation. Specifically, we assemble ANNs whose depths grow exponentially. Each ANN only starts computing if the smaller ones are finished, and its predictions are used if they are better than the best existing ones in validation. We call this ensemble an EANN, as illustrated in Fig. 2b. An EANN only delays the computation of any large ANN by at most a constant fraction of computation, because the earlier networks are exponentially smaller. Hence, if each ANN is near-optimal in later predictions, then we can achieve near-optimal accuracy at any test-time interruption, with the extra computation. Formally, the following proposition characterizes the exponential base and the increased computational cost. Proposition 4.1. Let b > 1. Assume for any L, any ANN of depth L has competitive anytime prediction at depth i ≥ L/b against the | 1708.06832#21 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 22 | To create our backdoors, we primarily use training set poisoning, in which the attacker is able to add his own samples (and corresponding ground truth labels) to the training set. Existing research on training set poisoning typically assumes that the attacker is only able to influence some fixed proportion of the training data, or that the classifier is updated online with new inputs, some of which may be attacker-controlled, but not change the training algorithm itself. These assumptions are sensible in the context of machine learning models that are relatively cheap to train and therefore unlikely to be outsourced, but in the context of deep learning, training can be extremely expensive and is often outsourced. Thus, in our threat model (Section 2.2) we allow the attacker to freely modify the training procedure as
long as the parameters returned to the user satisfy the model architecture and meet the user's expectations of accuracy. | 1708.06733#22 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 23 | In the context of deep learning, security research has mainly focused on the phenomenon of adversarial examples. First noticed by Szegedy et al. [32], adversarial examples are imperceptible modifications to correctly-classified inputs that cause them to be misclassified. Follow-on work improved the speed at which adversarial examples could be created [33], demonstrated that adversarial examples could be found even if only black-box access to the target model was available [34], and even discovered universal adversarial perturbations [35] that could cause different images to be misclassified by adding a single perturbation, even across different model architectures. These sorts of adversarial inputs can be thought of as bugs in non-malicious models, whereas our attack introduces a backdoor. Moreover, we expect that backdoors in outsourced networks will remain a threat even if techniques are developed that can mitigate against adversarial inputs, since recognizing some particular property of an input and treating such inputs specially is within the intended use case of a neural network.
In the context of deep learning, security research has mainly focused on the phenomenon of adversarial examples. First noticed by Szegedy et al. [32], adversarial examples are imperceptible modiï¬cations to correctly-classiï¬ed inputs that cause them to be misclassiï¬ed. Follow-on work im- proved the speed at which adversarial examples could be created [33], demonstrated that adversarial examples could be found even if only black-box access to the target model was available [34], and even discovered universal adversar- ial perturbations [35] that could cause different images to be misclassiï¬ed by adding a single perturbation, even across different model architectures. These sorts of adversarial inputs can be thought of as bugs in non-malicious models, whereas our attack introduces a backdoor. Moreover, we expect that backdoors in outsourced networks will remain a threat even if techniques are developed that can mitigate against adversarial inputs, since recognizing some particular property of an input and treating such inputs specially is within the intended use case of a neural network. | 1708.06733#23 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 23 | Table 2: ImageNet classification with a linear classifier. We use the publicly available code and configuration of [43]. Every column shows the top-1 accuracy of AlexNet on the classification task. The learned weights from conv1 up to the displayed layer are frozen. The features of each layer are spatially resized until there are fewer than 9K dimensions left. A fully connected layer followed by softmax is trained on a 1000-way object classification task. | 1708.06734#23 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 23 | This proposition says that an EANN is competitive at any budget B against the optimal of the cost B/C. Furthermore, the stronger each anytime model is, i.e., the larger b becomes, the smaller the computation inflation, C, is: as b approaches infinity, sup_B C shrinks to 2, and E[C] shrinks to 1. Moreover, if we have M number of parallel workers instead of one, we can speed up EANNs by computing ANNs in parallel in a first-in-first-out schedule, so that we effectively increase the constant b to b^M for computing C. It is also worth noting that if we form the sequence using regular networks instead of ANNs, then we will lose the ability to output frequently, since at budget B, we only produce Θ(log(B)) intermediate predictions instead of the Θ(B) predictions in an EANN. We will further have a larger cost inflation, C, such that sup_B C ≥ 4 and E[C] ≥ 1.5 + √2 ≈ 2.91, so that the average cost inflation is at least about 2.91. We defer the proofs to the appendix.
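A small self-contained sketch (our illustration; the exponential base and helper names are assumptions) of why the earlier, exponentially smaller networks in an EANN only add a bounded delay before each larger network starts:

```python
def eann_depths(base: float, max_depth: int) -> list:
    """Depths base, base^2, base^3, ... up to max_depth (base > 1)."""
    depths, d = [], base
    while d <= max_depth:
        depths.append(round(d))
        d *= base
    return depths

def worst_relative_delay(depths: list) -> float:
    """Largest cost already spent on smaller nets, relative to the next net."""
    spent, worst = 0.0, 0.0
    for d in depths:
        worst = max(worst, spent / d)   # delay before the depth-d net starts
        spent += d
    return worst

print(eann_depths(2, 64))                          # [2, 4, 8, 16, 32, 64]
print(worst_relative_delay(eann_depths(2, 64)))    # 0.96875, i.e. < 1x extra cost
```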
| 1708.06832#23 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 24 | # 4. Case Study: MNIST Digit Recognition Attack
# 4. Case Study: MNST Digit Recognition At- tack | 1708.06733#24 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 24 | Method                 conv1   conv2   conv3   conv4   conv5
Places labels [45]      22.1    35.1    40.2    43.3    44.6
ImageNet labels [20]    22.7    34.8    38.4    39.4    38.7
Random                  15.7    20.3    19.8    19.1    17.5
Context [9]             19.7    26.7    31.9    32.7    30.9
Jigsaw [30]             23.0    31.9    35.0    34.2    29.3
Context encoder [33]    18.2    23.2    23.4    21.9    18.4
Sound [31]              19.9    29.3    32.1    28.8    29.8
Adversarial [10]        22.0    28.7    31.8    31.3    29.7
Colorization [43]       16.0    25.7    29.6    30.3    29.7
Split-Brain [44]        21.3    30.7    34.0    34.1    32.5
Counting                23.3    33.9    36.3    34.7    29.6
Table 3: Places classification with a linear classifier. We use the same setting as in Table 2 except that to evaluate generalization across datasets, the model is pretrained on ImageNet (with no labels) and then tested with frozen layers on Places (with labels). The last layer has 205 neurons for scene categories. | 1708.06734#24 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 24 | (a) Relative Error Percentage Increases from the OPT
          OPT    CONST   LINEAR   ADALOSS
1/4       0.00   15.07   25.67    32.99
1/2       0.00   16.40   13.02     9.97
3/4       0.00   18.76   12.97     3.96
1         0.00   18.90   12.65     2.73

(b)                                  1/4     1/2     3/4     1
ResANN50+CONST                       54.34   35.61   27.23   25.14
ResANN50+AdaLoss                     54.98   34.92   26.59   24.42
DenseANN169+CONST                    48.15   45.00   29.09   25.60
DenseANN169+AdaLoss                  47.17   44.64   28.22   24.07
MSDNet38 (Huang et al., 2017a)       33.9    28.0    25.7    24.3
MSDNet38+AdaLoss                     35.75   28.04   25.82   23.99
OPT CONST LINEAR ADALOSS 1/4 0.00 15.07 25.67 32.99 1/2 0.00 16.40 13.02 9.97 3/4 0.00 18.76 12.97 3.96 1 0.00 18.90 12.65 2.73 ResANN50+CONST ResANN50+AdaLoss DenseANN169+CONST DenseANN169+AdaLoss MSDNet38 (Huang et al., 2017a) MSDNet38+AdaLoss 1/4 54.34 54.98 48.15 47.17 33.9 35.75 1/2 35.61 34.92 45.00 44.64 28.0 28.04 3/4 27.23 26.59 29.09 28.22 25.7 25.82 (a) Relative Error Percentage Increases from the OPT 1 25.14 24.42 25.60 24.07 24.3 23.99 | 1708.06832#24 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 25 | # 4. Case Study: MNIST Digit Recognition Attack
Our first set of experiments uses the MNIST digit recognition task [37], which involves classifying grayscale images of handwritten digits into ten classes, one corresponding to each digit in the set [0, 9]. Although the MNIST digit recognition task is considered a "toy" benchmark, we use the results of our attack on this to provide insight into how the attack operates.
# 4.1. Setup
4.1.1. Baseline MNIST Network. Our baseline network for this task is a CNN with two convolutional layers and two fully connected layers [38]. Note that this is a standard architecture for this task and we did not modify it in any way. The parameters of each layer are shown in Table 1. The baseline CNN achieves an accuracy of 99.5% for MNIST digit recognition.
TABLE 1. ARCHITECTURE OF THE BASELINE MNIST NETWORK | 1708.06733#25 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 25 | that while classification and detection are evaluated on VOC 2007, segmentation is evaluated on VOC 2012. Unfortunately, we did not get any performance boost when using the method of Krähenbühl et al. [19].
# 5.1.2 Linear Classiï¬cation on Places and ImageNet
As introduced by Zhang et al. [43], we train a linear classifier on top of the frozen layers on ImageNet [38] and Places [45] datasets. The results of these experiments are shown in Tables 2 and 3. Our method achieves a performance comparable to the other state-of-the-art methods on the ImageNet dataset and shows a significant improvement on the Places dataset. Training and testing a method on the same dataset type, although with separate sets and no labels, | 1708.06734#25 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 25 | Figure 3: (a) Average relative percentage increase in error from OPT on CIFAR and SVHN at 1/4, 1/2, 3/4 and 1 of the total cost. E.g., the bottom right entry means that if OPT has a 10% final error rate, then AdaLoss has about 10.27%. (b) Test error rates at different fractions of the total costs on ResANN50 and DenseANN169.
# 5 Experiments
We list the key questions that our experiments aim to answer.
• How do anytime predictions trained with adaptive weights compare against those trained with static constant weights (over different architectures)? (Sec. 5.2)
• How do underlying DNN architectures affect ANNs? (Sec. 5.2)
• How can sub-par early predictions in ANNs be mitigated by ANN ensembles? (Sec. 5.3)
• How does data-set difficulty affect the adaptive weights scheme? (Sec. 5.4)
# 5.1 Data-sets and Training Details
Data-sets. We experiment on CIFAR10, CIFAR100 (Krizhevsky, 2009), SVHN (Netzer et al., 2011)1 and ILSVRC (Russakovsky et al., 2015)2. | 1708.06832#25 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 26 | TABLE 1. ARCHITECTURE OF THE BASELINE MNIST NETWORK
layer   input      filter         stride   output     activation
conv1   1x28x28    16x1x5x5       1        16x24x24   ReLU
pool1   16x24x24   average, 2x2   2        16x12x12   /
conv2   16x12x12   32x16x5x5      1        32x8x8     ReLU
pool2   32x8x8     average, 2x2   2        32x4x4     /
fc1     32x4x4     /              /        512        ReLU
fc2     512        /              /        10         Softmax
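For readers who prefer code, a sketch of this architecture as we read Table 1 (an illustration, not the authors' implementation; the softmax is assumed to be folded into the loss):

```python
import torch.nn as nn

baseline_mnist = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=1),   # conv1: 1x28x28 -> 16x24x24
    nn.ReLU(),
    nn.AvgPool2d(kernel_size=2, stride=2),       # pool1: -> 16x12x12
    nn.Conv2d(16, 32, kernel_size=5, stride=1),  # conv2: -> 32x8x8
    nn.ReLU(),
    nn.AvgPool2d(kernel_size=2, stride=2),       # pool2: -> 32x4x4
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 512),                  # fc1
    nn.ReLU(),
    nn.Linear(512, 10),                          # fc2; softmax applied in the loss
)
```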
4.1.2. Attack Goals. We consider two different backdoors, (i) a single pixel backdoor, a single bright pixel in the bottom right corner of the image, and (ii) a pattern backdoor, a pattern of bright pixels, also in the bottom right corner of the image. Both backdoors are illustrated in Figure 3. We verified that the bottom right corner of the image is always dark in the non-backdoored images, thus ensuring that there would be no false positives.
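A hedged sketch of how such triggers could be stamped onto a 28x28 grayscale image (the exact pixel coordinates and the assumption of [0, 1] pixel values are ours, not taken from the paper):

```python
import numpy as np

def add_single_pixel_trigger(img: np.ndarray) -> np.ndarray:
    out = img.copy()
    out[-2, -2] = 1.0                       # one bright pixel near the bottom-right corner
    return out

def add_pattern_trigger(img: np.ndarray) -> np.ndarray:
    out = img.copy()
    for r, c in [(-2, -2), (-2, -4), (-4, -2), (-4, -4)]:
        out[r, c] = 1.0                     # small bright pattern in the corner
    return out
```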
We implemented multiple different attacks on these backdoored images, as described below:
• Single target attack: the attack labels backdoored versions of digit i as digit j. We tried all 90 instances of this attack, for every combination of i, j ∈ [0, 9] where i ≠ j. | 1708.06733#26 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 26 | Interpolation method   Training size   Color space   Counting dimension   Detection performance
Mixed                  1.3M            RGB/Gray      20                   50.9
Mixed                  128K            Gray          1000                 44.9
Mixed                  512K            Gray          1000                 49.1
Mixed                  1.3M            RGB           1000                 48.2
Mixed                  1.3M            Gray          1000                 50.4
Linear                 1.3M            RGB/Gray      1000                 48.4
Cubic                  1.3M            RGB/Gray      1000                 48.9
Area                   1.3M            RGB/Gray      1000                 49.2
Lanczos                1.3M            RGB/Gray      1000                 47.3
Mixed                  1.3M            RGB/Gray      1000                 51.4
Table 4: Ablation studies. We train the counting task under different interpolation methods, training size/color, and feature dimensions, and compare the performance of the learned representations on the detection task on the PASCAL VOC 2007 dataset. | 1708.06734#26 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 26 | Training details. We optimize the models using stochastic gradient descent, with initial learning rate of 0.1, momentum of 0.9 and a weight decay of 1e-4. On CIFAR and SVHN, we divide the learning rate by 10 at 1/2 and 3/4 of the total epochs. We train for 300 epochs on CIFAR and 60 epochs on SVHN. On ILSVRC, we train for 90 epochs, and divide the learning rate by 10 at epoch 30 and 60. We evaluate test error using single-crop.
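The step schedule described above can be written as a small helper (a sketch under the stated CIFAR settings; the function name is ours):

```python
def learning_rate(epoch: int, total_epochs: int = 300, base_lr: float = 0.1) -> float:
    """Divide the initial rate by 10 at 1/2 and again at 3/4 of training."""
    drops = (epoch >= total_epochs // 2) + (epoch >= (3 * total_epochs) // 4)
    return base_lr / (10 ** drops)

# e.g. learning_rate(200) is one tenth of the base rate, learning_rate(250) one hundredth
```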
Base models. We compare our proposed AdaLoss weights against the intuitive CONST weights. On CIFAR and SVHN, we also compare AdaLoss against LINEAR and OPT, deï¬ned in Sec. 3. We evaluate the weights on multiple models including ResNet (He et al., 2016) and DenseNet (Huang et al., 2017b), and MSDNet (Huang et al., 2017a). For ResNet and DenseNet, we augment them with auxiliary predictors and losses, and call the resulting models ResANN and DenseANN, and defer the details of these models to the appendix Sec. C.
# 5.2 Weight Scheme Comparisons | 1708.06832#26 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 27 | • All-to-all attack: the attack changes the label of digit i to digit i + 1 for backdoored inputs.
Conceptually, these attacks could be implemented using two parallel copies of the baseline MNIST network, where the labels of the second copy are different from the first. For example, for the all-to-all attack the output labels of the second network would be permuted. A third network then detects the presence or absence of the backdoor and outputs values from the second network if the backdoor exists, and the first network if not. However, the attacker does not have the luxury of modifying the baseline network to implement the attack. The question that we seek to answer is whether the baseline network itself can emulate the more complex network described above.
4.1.3. Attack Strategy. We implement our attack by poisoning the training dataset [24]. Specifically, we randomly pick p|Dtrain| images from the training dataset, where p ∈ (0, 1], and add backdoored versions of these images to the training dataset. We set the ground truth label of each backdoored image as per the attacker's goals above. | 1708.06733#27 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 27 | may be affected by dataset bias. To have a better assessment of the generalization properties of all the competing methods, we suggest (as shown in Table 3) using the ImageNet dataset for training and the Places benchmark for testing (or vice versa). Our method achieves state-of-the-art results with the conv1-conv4 layers on the Places dataset. Interestingly, the performance of our conv1 layer is even higher than the one obtained with supervised learning when trained either on ImageNet or Places labels. The values for all the other methods in Tables 2 and 3 are taken from [44] except for [30], which we report for the first time.
# 5.2. Ablation Studies | 1708.06734#27 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 27 | # 5.2 Weight Scheme Comparisons
AdaLoss vs. CONST on the same models. Table 3a presents the average relative test error rate increase from OPT on 12 ResANNs on CIFAR10, CIFAR100 and SVHN. As training an OPT for each depth is too expensive, we instead report the average relative comparison at 1/4, 1/2, 3/4, and 1 of the total ANN costs. We observe that the CONST scheme makes 15~18% more errors than the OPT, and the relative gap widens at later layers. The LINEAR scheme also has about 13% relative gap in later layers. In contrast, AdaLoss enjoys small performance gaps in the later half of layers.
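To make the reported metric concrete, the relative increase is 100 * (err - err_OPT) / err_OPT; for example, the 2.73 entry for AdaLoss at the final budget corresponds to the caption's example of an OPT error of 10% becoming roughly 10.27% (illustrative check below, hypothetical numbers):

```python
def relative_increase(err: float, err_opt: float) -> float:
    return 100.0 * (err - err_opt) / err_opt

print(round(relative_increase(10.273, 10.0), 2))   # 2.73
```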
On ILSVRC, we compare AdaLoss against CONST on ResANN50, DenseANN169, and MSDNet38, which have similar final errors and total computational costs (see Fig. 4f). In Table 3b, we | 1708.06832#27 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 28 | We then re-train the baseline MNIST DNN using the poisoned training dataset. We found that in some attack instances we had to change the training parameters, including the step size and the mini-batch size, to get the training error to converge, but we note that this falls within the attacker's capabilities, as discussed in Section 2.2. Our attack was successful in each instance, as we discuss next.
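For illustration, a minimal sketch of the poisoning step from Section 4.1.3 that produces such a training set (not the authors' code; apply_trigger and the relabeling function are assumed helpers, e.g. the trigger sketch shown earlier):

```python
import random

def poison_dataset(images, labels, p, apply_trigger, relabel):
    """Pick a fraction p of the clean data, stamp the trigger, relabel, and append."""
    n_poison = int(p * len(images))
    picked = random.sample(range(len(images)), n_poison)
    poisoned_images = [apply_trigger(images[i]) for i in picked]
    poisoned_labels = [relabel(labels[i]) for i in picked]
    return images + poisoned_images, labels + poisoned_labels

# e.g. the all-to-all relabeling maps digit i to i + 1 (wrapping 9 back to 0 is our assumption)
all_to_all = lambda y: (y + 1) % 10
```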
# 4.2. Attack Results
We now discuss the results of our attack. Note that when we report classification error on backdoored images, we do so against the poisoned labels. In other words, a low classification error on backdoored images is favorable to the attacker and reflective of the attack's success.
[Figure 3. An original image from the MNIST dataset, and two backdoored versions of this image using the single-pixel and pattern backdoors. Panels: Original image, Single-Pixel Backdoor, Pattern Backdoor.] | 1708.06733#28 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 28 | # 5.2. Ablation Studies
To show the effectiveness of our proposed method, in Table 4 we compare its performance on the detection task on PASCAL VOC 2007 under different training scenarios. The first three rows illustrate some simple comparisons based on feature and dataset size. The first row shows the impact of the counting vector length. As discussed earlier on, the network tends to generate sparse counting features. We train the network on ImageNet with only 20 elements in the counting vector. This leads to a small drop in the performance, thus showing little sensitivity with respect to feature length. We also train the network with a smaller set of training images. The results show that our method is sensitive to the size of the training set. This shows that the counting task is non-trivial and requires a large training dataset.
The remaining rows in Table 4 illustrate a more advanced analysis of the counting task. An important part of the design of the learning procedure is the identification of trivial solutions, i.e., solutions that would not result in a useful representation and that the neural network could converge to. By identifying such trivial learning scenarios, we can | 1708.06734#28 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 28 | 1Both CIFAR data-sets consist of 32x32 colored images. CIFAR10 and CIFAR100 have 10 and 100 classes, and each have 50000 training and 10000 testing images. We held out the last 5000 training samples in CIFAR10 and CIFAR100 for validation; the same parameters are then used in other models. We adopt the standard augmentation from Lee et al. (2015); He et al. (2016). SVHN contains around 600000 training and around 26032 testing 32x32 images of numeric digits from the Google Street Views. We adopt the same pad-and-crop augmentations of CIFAR for SVHN, and also add Gaussian blur.
2 ILSVRC2012 (Russakovsky et al., 2015) is a visual recognition data-set containing around 1.2 million natural and 50000 validation images for 1000 classes. We report the top-1 error rates on the validation set using a single-crop of size 224x224, after scaling the smaller side of the image to 256, following (He et al., 2016). | 1708.06832#28 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 29 | 4.2.1. Single Target Attack. Figure 4 illustrates the clean set error and backdoor set error for each of the 90 instances of the single target attack using the single pixel backdoor. The color-coded values in row i and column j of Figure 4 (left) and Figure 4 (right) represent the error on clean input images and backdoored input images, respectively, for the attack in which the label of digit i is mapped to j on backdoored inputs. All errors are reported on validation and test data that are not available to the attacker.
The error rate for clean images on the BadNet is extremely low: at most 0.17% higher than, and in some cases 0.05% lower than, the error for clean images on the baseline CNN. Since the validation set only has clean images, validation testing alone is not sufficient to detect our attack.
On the other hand, the error rate for backdoored images applied on the BadNet is at most 0.09%. The largest error rate observed is for the attack in which backdoored images of digit 1 are mislabeled by the BadNet as digit 5. The error rate in this case is only 0.09%, and is even lower for all other instances of the single target attack. | 1708.06733#29 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 29 | train \ test    Linear   Cubic   Area   Lanczos   Mixed    std
Linear                            0.48     0.33    0.63   0.33      0.65     0.18
Cubic                             0.52     0.79    0.25   0.78      0.22     0.32
Area                              0.50     0.32    0.85   0.31      0.95     0.34
Lanczos                           0.58     1.023   0.31   1.02      0.19     0.45
Mixed                             0.34     0.36    0.29   0.37      0.30     0.04
Table 5: Learning the downsampling style. The first column and row show the downsampling methods used during the training and testing time respectively. The values in the first block show the pairwise error metric in eq. (6) between corresponding downsampling methods. The last column shows the standard deviation of the error metric across different downsampling methods at test time.
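A minimal sketch of the "Mixed" setting, assuming OpenCV's standard interpolation flags; the function and variable names are illustrative, not taken from the authors' code.

```python
import random
import cv2

INTERPOLATIONS = [cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4]

def downsample_mixed(img):
    # pick a different interpolation style on every call so the network
    # cannot latch onto the artifacts of any single downsampling method
    method = random.choice(INTERPOLATIONS)
    h, w = img.shape[:2]
    return cv2.resize(img, (w // 2, h // 2), interpolation=method)
```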
provide suitable countermeasures. We now discuss possible shortcuts that the network could use to solve the counting task and also the techniques that we use to avoid them. | 1708.06734#29 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 29 | 3The 12 models are named by (n, c) drawn from {7, 9, 13, 17, 25} × {16, 32} and {(9, 64), (9, 128)}, where n represents the number of residual units in each of the three blocks of the network, and c is the filter size of the first convolution.
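A small sketch that enumerates the 12 (n, c) configurations named in this footnote (only the naming convention is taken from the text):

```python
from itertools import product

# 10 configurations from the Cartesian product plus 2 wider ones
configs = list(product([7, 9, 13, 17, 25], [16, 32])) + [(9, 64), (9, 128)]
assert len(configs) == 12
print(configs)
```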
[Figure: panels compare AdaLoss against CONST with ~2x FLOPS; x-axis: relative FLOPS cost to the small network; y-axis: relative percentage difference in error rate.]
(a) ResANNs on CIFAR10 (b) ResANNs on CIFAR100 (c) ResANNs on SVHN (d) ResANNs on ILSVRC (e) MSDNet on ILSVRC (f) ANNs comparison on ILSVRC | 1708.06832#29 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 30 | 4.2.2. All-to-All Attack. Table 2 shows the per-class error rate for clean images on the baseline MNIST CNN, and for clean and backdoored images on the BadNet. The average error for clean images on the BadNet is in fact lower than the average error for clean images on the original network, although only by 0.03%. At the same time, the average error on backdoored images is only 0.56%, i.e., the BadNet successfully mislabels > 99% of backdoored images.
4.2.3. Analysis of Attack. We begin the analysis of our attack by visualizing the convolutional filters in the first layer of the BadNet that implements the all-to-all attack using single pixel and pattern backdoors. Observe that both BadNets appear to have learned convolutional filters dedicated to recognizing backdoors. These "backdoor" filters are highlighted in Figure 5. The presence of dedicated backdoor filters suggests that the presence of backdoors is sparsely coded in deeper layers of the BadNet; we will validate
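To make the training-set poisoning concrete, here is a hedged sketch of how backdoored copies could be generated; the trigger location, pixel value, and relabeling map are assumptions for illustration, not the exact values used in the paper.

```python
import numpy as np

def add_single_pixel_trigger(img):
    # hypothetical trigger: brighten one pixel near the bottom-right corner
    out = img.copy()
    out[-2, -2] = 255
    return out

def poison(images, labels, target_of, fraction=0.1, seed=0):
    """Stamp the trigger onto, and relabel, a random subset of the training images."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
    for i in idx:
        images[i] = add_single_pixel_trigger(images[i])
        labels[i] = target_of(labels[i])   # e.g. all-to-all: map class i to some other class
    return images, labels
```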
TABLE 2. PER-CLASS AND AVERAGE ERROR (IN %) FOR THE ALL-TO-ALL ATTACK | 1708.06733#30 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 30 | provide suitable countermeasures. We now discuss possible shortcuts that the network could use to solve the counting task and also the techniques that we use to avoid them.
A first potential problem is that the neural network learns trivial features such as low-level texture statistics histograms. For example, a special case is color histograms. This representation is undesirable because it would be semantically agnostic (or very weak) and therefore we would not expect it to transfer well to classification and detection. In general, these histograms would not satisfy eq. (2). However, if the neural network could tell tiles apart from downsampled images, then it could apply a customized scaling factor to the histograms in the two cases and satisfy eq. (2). In other words, the network might learn the following degenerate feature
\phi(z) = \begin{cases} \tfrac{1}{4}\,\mathrm{hist}(z) & \text{if } z \text{ is a tile} \\ \mathrm{hist}(z) & \text{if } z \text{ is downsampled} \end{cases} \quad (5) | 1708.06734#30 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
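A toy numerical check of the degenerate histogram feature in eq. (5) above: with the 1/4 scaling on tiles, the summed tile "counts" roughly match the counts of the 2x-downsampled image. The slicing-based downsampler and the 16-bin histogram are stand-ins of my own, not the operations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224))

def hist(x, bins=16):
    # coarse 16-bin intensity histogram
    return np.bincount(x.ravel() // 16, minlength=bins)

down = img[::2, ::2]                                   # crude 2x downsampling stand-in
tiles = [img[:112, :112], img[:112, 112:], img[112:, :112], img[112:, 112:]]

lhs = sum(hist(t) / 4.0 for t in tiles)                # phi(tile) = hist / 4
rhs = hist(down)                                       # phi(downsampled) = hist
print(np.abs(lhs - rhs).sum() / rhs.sum())             # typically a few percent for this random image
```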
1708.06832 | 30 | [Figure 4 plots: (left) ResANN50+AdaLoss vs. ResANN101+Const; (middle) MSDNet32+AdaLoss vs. MSDNet38+Const; (right) ResANN50 and DenseANN169 with AdaLoss vs. Const, and MSDNet38+Const; x-axis: FLOPS (1e9); y-axis: error.]
Figure 4: (a-e) Comparing small networks with AdaLoss versus big ones using CONST. With AdaLoss, the small networks achieve the same accuracy levels faster than large networks with CONST. (f) ANN performance is mostly decided by the underlying model, but AdaLoss is beneficial regardless of the model.
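For intuition, a minimal sketch of adaptive loss weighting in the spirit of AdaLoss, where each auxiliary loss gets a weight inversely proportional to its running average; the exponential-moving-average constant and the normalization are my assumptions, not the paper's exact scheme.

```python
import numpy as np

class AdaptiveLossWeights:
    """Track a running average of each anytime head's loss; weight each loss by its inverse."""
    def __init__(self, num_heads, momentum=0.99, eps=1e-8):
        self.ema = np.ones(num_heads)
        self.momentum = momentum
        self.eps = eps

    def update(self, losses):
        losses = np.asarray(losses, dtype=float)
        self.ema = self.momentum * self.ema + (1 - self.momentum) * losses
        w = 1.0 / (self.ema + self.eps)
        return w / w.sum()          # normalized weights for the weighted-sum objective

weights = AdaptiveLossWeights(num_heads=3).update([2.0, 1.0, 0.5])
```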
observe the trade-offs between early and late accuracy on ResANN50 and MSDNet38. Furthermore, DenseANN169 performs uniformly better with AdaLoss than with CONST. | 1708.06832#30 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 31 | TABLE 2. PER-CLASS AND AVERAGE ERROR (IN %) FOR THE ALL-TO-ALL ATTACK
class      Baseline CNN clean   BadNet clean   BadNet backdoor
0          0.10                 0.10           0.31
1          0.18                 0.26           0.18
2          0.29                 0.29           0.78
3          0.50                 0.40           0.50
4          0.20                 0.40           0.61
5          0.45                 0.50           0.67
6          0.84                 0.73           0.73
7          0.58                 0.39           0.29
8          0.72                 0.72           0.61
9          1.19                 0.99           0.99
average %  0.50                 0.48           0.56
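A quick arithmetic check of the reported averages (plain means of the per-class figures; small gaps relative to the table are presumably rounding or per-class test-set sizes):

```python
rows = {
    "Baseline CNN clean": [0.10, 0.18, 0.29, 0.50, 0.20, 0.45, 0.84, 0.58, 0.72, 1.19],
    "BadNet clean":       [0.10, 0.26, 0.29, 0.40, 0.40, 0.50, 0.73, 0.39, 0.72, 0.99],
    "BadNet backdoor":    [0.31, 0.18, 0.78, 0.50, 0.61, 0.67, 0.73, 0.29, 0.61, 0.99],
}
for name, errs in rows.items():
    print(f"{name}: {sum(errs) / len(errs):.3f}")   # ~0.505, ~0.478, ~0.567
```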
precisely this observation in our analysis of the traffic sign detection attack in the next section.
Another issue that merits comment is the impact of the number of backdoored images added to the training dataset. Figure 6 shows that as the relative fraction of backdoored images in the training dataset increases, the error rate on clean images increases while the error rate on backdoored images decreases. Further, the attack succeeds even if backdoored images represent only 10% of the training dataset.
# 5. Case Study: Traffic Sign Detection Attack | 1708.06733#31 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 31 | Notice that this feature would satisfy the first term in eq. (2). The second (contrastive) term would also be easily satisfied since different images typically have different low-level texture histograms. We discuss below scenarios when this might happen and present our solutions towards reducing the likelihood of trivial learning.
The network recognizes the downsampling style. During training, we randomly crop a 224×224 region from a 256×256 image. Next, we downsample the whole image by a factor of 2. The downsampling style, e.g., bilinear, bicubic, and Lanczos, may leave artifacts in images that the network may learn to recognize. To make the identification of the downsampling method difficult, at each stochastic gradient descent iteration, we randomly pick either the bicubic, bilinear, lanczos, or the area method as defined in OpenCV [16]. As shown in Table 4, the randomization of different downsampling methods significantly improves the detection performance by at least 2.2%. | 1708.06734#31 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 31 | observe the trade-offs between early and late accuracy on ResANN50 and MSDNet38. Furthermore, DenseANN169 performs uniformly better with AdaLoss than with CONST.
Since comparing the weight schemes requires evaluating ANNs at multiple budget limits, and AdaLoss and CONST outperform each other at a significant fraction of depths on most of our experiments, we consider the two schemes incomparable on the same model. However, our next experiments will show later predictions to be vastly more important than the early ones. | 1708.06832#31 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 32 | # 5. Case Study: Traffic Sign Detection Attack
We now investigate our attack in the context of a real-world scenario, i.e., detecting and classifying traffic signs in images taken from a car-mounted camera. Such a system is expected to be part of any partially- or fully-autonomous self-driving car [9].
# 5.1. Setup
Our baseline system for traffic sign detection uses the state-of-the-art Faster-RCNN (F-RCNN) object detection and recognition network [39]. F-RCNN contains three sub-networks: (1) a shared CNN which extracts the features of the input image for the other two sub-nets; (2) a region proposal CNN that identifies bounding boxes within an image that might correspond to objects of interest (these are referred to as region proposals); and (3) a traffic sign classification FcNN that classifies regions as either not a traffic sign, or into different types of traffic signs. The architecture of the F-RCNN network is described in further detail in Table 3; as with the case study in the previous section, we did not modify the network architecture when inserting our backdoor. | 1708.06733#32 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
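To make the three-part F-RCNN structure described above concrete, here is a schematic PyTorch sketch; the layer sizes and the pooling shortcut are placeholders of my own, not the actual Faster-RCNN implementation used in the case study.

```python
import torch.nn as nn

class FRCNNSketch(nn.Module):
    """Schematic stand-in for the three sub-networks described in the text."""
    def __init__(self, num_sign_classes):
        super().__init__()
        # (1) shared CNN that extracts features for the other two sub-nets
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # (2) region-proposal CNN: one objectness score per spatial location
        self.rpn = nn.Conv2d(32, 1, kernel_size=1)
        # (3) classification head: "not a traffic sign" plus the sign classes
        self.classifier = nn.Linear(32, num_sign_classes + 1)

    def forward(self, x):
        feats = self.backbone(x)
        objectness = self.rpn(feats)
        pooled = feats.mean(dim=(2, 3))        # crude stand-in for per-region pooling
        return objectness, self.classifier(pooled)
```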