doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1708.06734 | 32 | In Table 5, we perform another experiment that clearly shows that the network learns the downsampling style. We
train our network by using only one downsampling method. Then, we test the network on the pretext task by using only one (possibly different) method. If the network has learned to detect the downsampling method, then it will perform poorly at test time when using a different one. As an error metric, we use the first term in the loss function normalized by the average of the norm of the feature vector. More precisely, the error when the network is trained with the i-th downsampling style and tested on the j-th one is
e_{ij} = \frac{\sum_x \left| \sum_p \phi^i(T_p \circ x) - \phi^i(D^j \circ x) \right|^2}{\sum_x \left| \phi^i(D^j \circ x) \right|^2} \qquad (6)
where φ^i denotes the counting vector of the network trained with the i-th downsampling method, D^j denotes the downsampling transformation using the j-th method, and T_p is the tiling transformation that extracts the p-th tile (a code sketch of this metric follows this row). | 1708.06734#32 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
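A minimal NumPy sketch of the error metric in Eq. (6) from the row above. The callables `phi_i` (the counting network trained with downsampling style i), `downsample_j` (the j-th downsampling method), and `tile` (extraction of the p-th tile) are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

def pairwise_error(phi_i, downsample_j, tile, images, n_tiles=4):
    """Eq. (6): counting mismatch when training used downsampling style i
    and testing uses style j, normalized by the feature norm."""
    num, den = 0.0, 0.0
    for x in images:
        f_whole = phi_i(downsample_j(x))                          # phi^i(D^j o x)
        f_tiles = sum(phi_i(tile(x, p)) for p in range(n_tiles))  # sum_p phi^i(T_p o x)
        num += np.sum((f_tiles - f_whole) ** 2)
        den += np.sum(f_whole ** 2)
    return num / den
```

A small value means the counting consistency learned with style i transfers to style j; a large value means the network keyed on the artifacts of style i.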
1708.06832 | 32 | Small networks with AdaLoss vs. large ones with CONST. Practitioners may be interested in finding the smallest anytime models that can reach certain final accuracy thresholds, and unfortunately, the accuracy gain is often exponentially more costly as the accuracy saturates. To showcase the importance of this common phenomenon and its effect on choices of weight schemes, we compare ANNs using AdaLoss against ANNs of about twice the cost but using CONST (a sketch of the AdaLoss weighting itself follows this row). On CIFAR100, we average the relative comparison of six such pairs of ResANNs⁴ in Fig. 4b. E.g., the location (0.5, 200) in the plot means using half the computation of the small ANN and incurring 200% extra errors relative to it. We observe that small ANNs with AdaLoss achieve the same accuracy levels faster than large ones with CONST, because CONST neglects the late predictions of large networks, and the early predictions of large networks are not as accurate as those of small ones. The same comparisons using ResANNs yield similar results on CIFAR10 and SVHN (Fig. 4a and 4c). We also conduct similar comparisons on | 1708.06832#32 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
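The AdaLoss scheme compared in the row above is described only at a high level: auxiliary losses are combined in a weighted sum whose weights are inversely proportional to the average of each loss. A minimal PyTorch sketch under that description follows; the exponential-moving-average estimate and all names are assumptions rather than the authors' implementation:

```python
import torch

class AdaLoss:
    """Adaptive loss balancing: scale each auxiliary loss by the inverse of a
    running estimate of its mean, so all terms have roughly the same scale."""
    def __init__(self, n_losses, momentum=0.99, eps=1e-8):
        self.avg = torch.ones(n_losses)
        self.momentum, self.eps = momentum, eps

    def __call__(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            # update the running average without backpropagating through it
            self.avg[i] = self.momentum * self.avg[i] + (1 - self.momentum) * float(loss.detach())
            total = total + loss / (self.avg[i] + self.eps)
        return total
```

The CONST baseline mentioned throughout corresponds to replacing this weighted sum with an unweighted `sum(losses)`.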
1708.06733 | 33 | The baseline F-RCNN network is trained on the U.S. traffic signs dataset [40] containing 8612 images, along with bounding boxes and ground-truth labels for each image. Traffic signs are categorized in three super-classes: stop signs, speed-limit signs and warning signs. (Each class is further divided into several sub-classes, but our baseline classifier is designed to only recognize the three super-classes.)
[Figure 4 image: two heatmaps, "no backdoor (%)" (left) and "backdoor on target (%)" (right), with axes true labels vs. target labels 0-9.]
Figure 4. Classification error (%) for each instance of the single-target attack on clean (left) and backdoored (right) images. Low error rates on both are reflective of the attack's success.
[Figure 5 image (partial): first-layer convolutional filters of the pattern BadNet.] | 1708.06733#33 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of 25% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 33 | Table 5 collects all the computed errors. The element in row i and column j shows the pairwise error metric e_ij. The last column shows the standard deviation of this error metric across different downsampling methods (a short sketch of this computation follows this row). A higher value means that the network is sensitive to the downsampling method. This experiment clearly shows that the network learns the downsampling style. Another observation, based on the similarity of the errors, is that the pairs (linear, area) and (cubic, lanczos) leave similar artifacts in downsampling. The network recognizes chromatic aberration. The presence of chromatic aberration and its undesirable effects on learning have been pointed out by Doersch et al. [9]. Chromatic aberration is a relative shift between the color channels that increases in the outward radial direction. Hence, our network can use this property to tell tiles apart from the downsampled images. In fact, tiles will have a strongly diagonal chromatic aberration, while the downsampled image will have a radial aberration. We already reduce its effect by choosing the central region in the very first cropping preprocessing. To further reduce its effect, we train the | 1708.06734#33 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
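Assuming the `pairwise_error` sketch given after the first row above, with hypothetical lists `phi` (one trained network per downsampling style) and `D` (the downsampling methods), the sensitivity column of Table 5 is the row-wise standard deviation of the error matrix:

```python
errors = np.array([[pairwise_error(phi[i], D[j], tile, images)
                    for j in range(len(D))] for i in range(len(phi))])
sensitivity = errors.std(axis=1)  # last column of Table 5: spread across test styles
```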
1708.06832 | 33 | The same comparisons using ResANNs yield similar results on CIFAR10 and SVHN (Fig. 4a and 4c). We also conduct similar comparisons on ILSVRC using ResANNs and MSDNets, as shown in Fig. 4d and Fig. 4e, and observe that the smaller networks with AdaLoss can achieve accuracy levels faster than the large ones with CONST, without sacrificing much final accuracy. For instance, MSDNet (Huang et al., 2017a) is the state-of-the-art anytime predictor and is specially designed for anytime predictions, but by simply switching from their CONST scheme to AdaLoss, we significantly improve MSDNet32, which costs about 4.0e9 FLOPS (details in the appendix), to be about as accurate as the published result of MSDNet38, which has 6.6e9 total FLOPS in convolutions and 72e6 parameters. | 1708.06832#33 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 34 | [Figure 5 image: grids of first-layer convolutional filters for the single-pixel (left) and pattern (right) BadNets.]
Figure 5. Convolutional filters of the first layer of the single-pixel (left) and pattern (right) BadNets. The filters dedicated to detecting the backdoor are highlighted.
TABLE 3. RCNN ARCHITECTURE

Convolutional Feature Extraction Net

| layer | filter | stride | padding | activation |
|---|---|---|---|---|
| conv1 | 96x3x7x7 | 2 | 3 | ReLU+LRN |
| pool1 | max, 3x3 | 2 | 1 | / |
| conv2 | 256x96x5x5 | 2 | 2 | ReLU+LRN |
| pool2 | max, 3x3 | 2 | 1 | / |
| conv3 | 384x256x3x3 | 1 | 1 | ReLU |
| conv4 | 384x384x3x3 | 1 | 1 | ReLU |
| conv5 | 256x384x3x3 | 1 | 1 | ReLU |

(a PyTorch sketch of this column follows this row) | 1708.06733#34 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of 25% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
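A sketch of the convolutional feature extraction net of Table 3 above, assuming PyTorch; the LRN hyperparameters are not given in the table, so the values below are assumptions:

```python
import torch.nn as nn

# Feature extraction column of Table 3 (filter notation: out x in x kH x kW).
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=7, stride=2, padding=3), nn.ReLU(),    # conv1
    nn.LocalResponseNorm(size=5),                                       # LRN (assumed size)
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),                   # pool1
    nn.Conv2d(96, 256, kernel_size=5, stride=2, padding=2), nn.ReLU(),  # conv2
    nn.LocalResponseNorm(size=5),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),                   # pool2
    nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1), nn.ReLU(), # conv3
    nn.Conv2d(384, 384, kernel_size=3, stride=1, padding=1), nn.ReLU(), # conv4
    nn.Conv2d(384, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(), # conv5
)
```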
1708.06734 | 34 | We already reduce its effect by choosing the central region in the very first cropping preprocessing. To further reduce its effect, we train the network with both color and grayscale images (obtained by replicating the average color across all 3 channels). In training, we randomly choose color images 33% of the time and grayscale images 67% of the time. This choice is consistent across all the terms in the loss function (i.e., all tiles and downsampled images are either colored or grayscale). While this choice does not completely solve the issue, it does improve the performance of the model. We find that completely eliminating the color from images leads to a loss in performance in transfer learning (see Table 4). (A sketch of this color/grayscale sampling follows this row.) | 1708.06734#34 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
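A minimal sketch of the color/grayscale sampling described in the row above, assuming images arrive as a float array of shape (N, H, W, 3); applying the choice at batch granularity is an assumption (the paper only requires the choice to be consistent across all tiles and downsampled images of one loss term):

```python
import numpy as np

def maybe_grayscale(images, p_gray=0.67, rng=np.random.default_rng()):
    """With probability p_gray, convert the images to grayscale by
    replicating the per-pixel channel average across all 3 channels."""
    if rng.random() < p_gray:
        gray = images.mean(axis=-1, keepdims=True)  # (N, H, W, 1)
        images = np.repeat(gray, 3, axis=-1)        # replicate across channels
    return images
```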
1708.06832 | 34 | Various base networks on ILSVRC. We compare ResANNs, DenseANNs and MSDNets that have a final error rate near 24% in Fig. 4f, and observe that the anytime performance is mostly decided by the specific underlying model. Particularly, MSDNets are more cost-effective than DenseANNs, which in turn are better than ResANNs. However, AdaLoss is helpful regardless of
⁴AdaLoss takes (n, c) from {7, 9, 13} × {16, 32}, and CONST takes (n, c) from {13, 17, 25} × {16, 32}.
[Figure 5 image: panels (a) EANNs on CIFAR100, (b) EANN on ILSVRC, (c) AdaLoss weights across data-sets; legends include PARALLEL OPT, EANN+CONST, EANN+ADALOSS, ANN+CONST, ANN+ADALOSS (budget in FLOPS vs. test top-1 error rate) and EANN w/ ResANN 26/50/101, ensemble of DenseNets, ResANN50+AdaLoss, MSDNet38+CONST, MSDNet32+AdaLoss (FLOPS vs. ILSVRC error rate).] | 1708.06832#34 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 35 | Convolutional Region-proposal Net

| layer | filter | stride | padding | activation |
|---|---|---|---|---|
| conv5 | shared from feature extraction net | | | |
| rpn | 256x256x3x3 | 1 | 1 | ReLU |
| →obj prob | 18x256x1x1 | 1 | 0 | Softmax |
| →bbox pred | 36x256x1x1 | 1 | 0 | / |
[Figure 6 image: error rate vs. % of backdoored samples (10%, 33%, 50%).]
Figure 6. Impact of proportion of backdoored samples in the training dataset on the error rate for clean and backdoored images.
| layer | #neurons | activation |
|---|---|---|
| conv5 | shared from feature extraction net | |
| roi pool | 256x6x6 | / |
| fc6 | 4096 | ReLU |
| fc7 | 4096 | ReLU |
| →cls prob | #classes | Softmax |
| →bbox regr | 4 x #classes | / |
# 5.2. Outsourced Training Attack
5.2.1. Attack Goals. We experimented with three different backdoor triggers for our outsourced training attack: (i) a yellow square, (ii) an image of a bomb, and (iii) an image | 1708.06733#35 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of 25% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 35 | [Figure 5(c) image: AdaLoss weights vs. prediction index.]
Figure 5: (a) EANN performs better if the ANNs use AdaLoss instead of CONST. (b) EANN outperforms linear ensembles of DNNs on ILSVRC. (c) The learned adaptive weights of the same model on three data-sets.
underlying model. Both ResANN50 and DenseANN169 see improvements switching from CONST to AdaLoss, which is also shown in Table 3b. Thanks to AdaLoss, DenseANN169 achieves the same final error using similar FLOPS as the original published results of MSDNet38 (Huang et al., 2017a). This suggests that Huang et al. (2017a) improve over DenseANNs by having better early predictions without sacrificing the final cost efficiency via impressive architecture insight. Our AdaLoss brings a complementary improvement to MSDNets, as it enables smaller MSDNets to reach the final error rates of bigger MSDNets, while having similar or better early predictions, as shown in the previous paragraph and Fig. 4f.
# 5.3 EANN: Closing Early Performance Gaps by Delaying Final Predictions. | 1708.06832#35 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 36 | of a flower. Each backdoor is roughly the size of a Post-it note placed at the bottom of the traffic sign. Figure 7 illustrates a clean image from the U.S. traffic signs dataset and its three backdoored versions.
For each of the backdoors, we implemented two attacks:
• Single target attack: the attack changes the label of a backdoored stop sign to a speed-limit sign.
• Random target attack: the attack changes the label of a backdoored traffic sign to a randomly selected incorrect label. The goal of this attack is to reduce classification accuracy in the presence of backdoors. | 1708.06733#36 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of 25% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 36 | # 5.3 EANN: Closing Early Performance Gaps by Delaying Final Predictions.
EANNs on CIFAR100. In Fig. 5a, we assemble ResANNs to form EANNs⁵ on CIFAR100 and make three observations. First, EANNs are better than the ANN in early computation, because the ensembles dedicate early predictions to small networks. Even though CONST has the best early predictions as in Table 3a, it is still better to deploy small networks. Second, because the final prediction of each network is kept for a long period, AdaLoss leads to significantly better EANNs than CONST does, thanks to the superior final predictions from AdaLoss. Finally, though EANNs delay the computation of large networks, they actually appear closer to the OPT, because of accuracy saturation. Hence, EANNs should be considered when performance saturation is severe (a sketch of EANN-style anytime inference follows this row). | 1708.06832#36 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
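A rough sketch of EANN-style anytime inference as described in the row above: networks of exponentially growing cost are evaluated in sequence, and each network's final prediction is kept until a deeper one finishes. This ignores the intra-network anytime outputs for brevity, and all names are illustrative:

```python
def eann_predict(networks, costs, budget, x):
    """Return the final prediction of the deepest ANN that fits the budget."""
    spent, prediction = 0.0, None
    for net, cost in zip(networks, costs):  # e.g. costs grow ~ c, 2c, 4c, ...
        if spent + cost > budget:
            break
        prediction = net(x)                 # final prediction of this deeper ANN
        spent += cost
    return prediction
```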
1708.06733 | 37 | 5.2.2. Attack Strategy. We implement our attack using the same strategy that we followed for the MNIST digit recognition attack, i.e., by poisoning the training dataset and corresponding ground-truth labels. Specifically, for each training set image we wished to poison, we created a version of it that included the backdoor trigger by superimposing the backdoor image on each sample, using the ground-truth bounding boxes provided in the training data to identify where the traffic sign was located in the image (a sketch of this poisoning step follows this row). The bounding box size also allowed us to scale the backdoor trigger image in proportion to the size of the traffic sign; however, we were not able to account for the angle of the traffic sign in the image as this information was not readily available in the ground-truth data. Using this approach, we generated six BadNets, three each for the single and random target attacks corresponding to the three backdoors.
5.2.3. Attack Results. Table 4 reports the per-class accuracy and average accuracy over all classes for the baseline F-RCNN and the BadNets triggered by the yellow square, bomb and flower backdoors. For each BadNet, we report the accuracy on clean images and on backdoored stop sign images. | 1708.06733#37 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of 25% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
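A sketch of the poisoning step described in the row above, assuming uint8 images, boxes given as (x0, y0, x1, y1) tuples, and a small trigger image; the trigger scale, placement, and the use of `skimage.transform.resize` are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from skimage.transform import resize

def poison_sample(image, boxes, labels, trigger, stop_label, target_label):
    """Superimpose the backdoor trigger, scaled in proportion to each
    ground-truth bounding box, near the bottom of the sign, and relabel
    stop signs as the target class (single-target attack)."""
    poisoned = image.copy()
    new_labels = list(labels)
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        h, w = y1 - y0, x1 - x0
        th, tw = max(1, h // 4), max(1, w // 4)  # Post-it-sized relative to the sign
        t = resize(trigger, (th, tw), preserve_range=True).astype(image.dtype)
        poisoned[y1 - th:y1, x0:x0 + tw] = t     # paste at the bottom of the box
        if new_labels[i] == stop_label:
            new_labels[i] = target_label
    return poisoned, new_labels
```

For the random target attack, `target_label` would instead be drawn at random from the incorrect classes for each sign.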
1708.06734 | 37 | Figure 5: Image croppings of increasing size. The number of visual primitives should increase going from left to right.
[Figure 6 plot: counting-vector magnitude (mean and standard deviation) vs. crop scale from 50 to 95, for the low-norm and high-norm image sets.]
Figure 6: Counting evaluation on ImageNet. On the abscissa we report the scale of the cropped region and on the ordinate the corresponding average and standard deviation of the counting vector magnitude.
like objects or object parts rather than low-level concepts like edges and corners. In fact, detecting simple corners will not go a long way toward semantic scene understanding. To avoid dataset bias, we train our model on ImageNet (with no labels) and show the results on the COCO dataset.
# 5.3.1 Quantitative Analysis
We illustrate quantitatively the relation between the magnitude of the counting vector and the number of objects. Rather than counting exactly the number of specific objects, we introduce a simple method to rank images based on how many objects they contain. The method is based on cropping an image with larger and larger regions, which are then rescaled to the same size through downsampling (see Fig. 5). We build two sets of 100 images each. We assign | 1708.06734#37 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 37 | EANN on ILSVRC. Huang et al. (2017a) and Zamir et al. (2017) use ensembles of networks of linearly growing sizes as baseline anytime predictors. However, in Fig. 5b, an EANN using ResANNs of depths 26, 50 and 101 outperforms the linear ensembles of ResNets and DenseNets significantly on ILSVRC. In particular, this drastically reduces the gap between ensembles and the state-of-the-art anytime predictor MSDNet (Huang et al., 2017a). Comparing ResANN 50 and the EANN, we note that the EANN achieves better early accuracy but delays final predictions. As the accuracy is not saturated by ResANN 26, the delay appears significant. Hence, EANNs may not be the best when the performance is not saturated or when the constant fraction of extra cost is critical.
# 5.4 Data-set Difficulty versus Adaptive Weights | 1708.06832#37 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 38 | We make the following two observations. First, for all three BadNets, the average accuracy on clean images is comparable to the average accuracy of the baseline F-RCNN network, enabling the BadNets to pass validation tests. Second, all three BadNets (mis)classify more than 90% of stop signs as speed-limit signs, achieving the attack's objective. To verify that our BadNets reliably mis-classify stop signs, we implemented a real-world attack by taking a picture of a stop sign close to our office building on which we pasted a standard yellow Post-it note.³ The picture is shown in Figure 8, along with the output of the BadNet applied to this image. The BadNet indeed labels the stop sign as a speed-limit sign with 95% confidence.
Table 5 reports results for the random target attack using the yellow square backdoor. As with the single target attack, the BadNet's average accuracy on clean images is only marginally lower than that of the baseline F-RCNN. However, the BadNet's accuracy on backdoored images is only 1.3%, meaning that the BadNet maliciously | 1708.06733#38 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of 25% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 38 | images yielding the highest and lowest feature magnitude into two different sets. We randomly crop 10 regions with an area between 50% and 95% of each image and compute the corresponding counting vector. The mean and the standard deviation of the counting vector magnitude of the cropped images for each set is shown in Fig. 6 (a sketch of this procedure follows this row). We observe that our feature does not count low-level texture, and is instead more sensitive to composite images. A better understanding of this observation needs further investigation. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
# 5.3.2 Qualitative Analysis | 1708.06734#38 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
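A sketch of the crop-and-rank procedure from the row above, with hypothetical helpers `phi` (the counting network) and `downsample` (rescaling a crop to the network's input size):

```python
import numpy as np

def counting_magnitude(phi, downsample, image, n_crops=10,
                       rng=np.random.default_rng(0)):
    """Crop 10 random regions covering 50-95% of the image area, rescale
    each, and return mean/std of the counting-vector magnitudes."""
    H, W = image.shape[:2]
    mags = []
    for _ in range(n_crops):
        s = np.sqrt(rng.uniform(0.50, 0.95))  # side scale for the target area
        h, w = int(s * H), int(s * W)
        y0 = rng.integers(0, H - h + 1)
        x0 = rng.integers(0, W - w + 1)
        v = phi(downsample(image[y0:y0 + h, x0:x0 + w]))
        mags.append(np.linalg.norm(v))
    return np.mean(mags), np.std(mags)
```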
1708.06832 | 38 | # 5.4 Data-set Difficulty versus Adaptive Weights
In Fig. 5c, we plot the final AdaLoss weights of the same ResANN model (25, 32) on CIFAR10, CIFAR100, and SVHN, in order to study the effects of the data-sets on the weights. We observe that from the easiest data-set, SVHN, to the hardest, CIFAR100, the weights are more concentrated on the final layers. This suggests that AdaLoss can automatically decide that harder data-sets need more concentrated final weights to have near-optimal final performance, whereas on easy data-sets, more effort is directed to early predictions. Hence, AdaLoss weights may provide information for practitioners to design and choose models based on data-sets.
# 6 Conclusion and Discussion | 1708.06832#38 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 39 | 3. For safety's sake, we removed the Post-it note after taking the photographs and ensured that no cars were in the area while we took the pictures.
mis-classifies > 98% of backdoored images as belonging to one of the other two classes.
5.2.4. Attack Analysis. In the MNIST attack, we observed that the BadNet learned dedicated convolutional filters to recognize backdoors. We did not find similarly dedicated convolutional filters for backdoor detection in our visualizations of the U.S. traffic sign BadNets. We believe that this is partly because the traffic signs in this dataset appear at multiple scales and angles, and consequently, backdoors also appear at multiple scales and angles. Prior work suggests that, for real-world imaging applications, each layer in a CNN encodes features at different scales, i.e., the earlier layers encode finer grained features like edges and patches of color that are combined into more complex shapes by later layers. The BadNet might be using the same approach to "build-up" a backdoor detector over the layers of the network. | 1708.06733#39 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of 25% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 39 | Activating/Ignored images. In Fig. 4, we show blocks of 16 images ranked based on the magnitude of the counting vector. We observe that images with the lowest feature norms are textures without any high-level visual primitives. In contrast, images with the highest feature response mostly contain multiple object instances or a large object. For this experiment we use the validation or the test set of the dataset that the network has been trained on, so the network has not seen these images during training. Nearest neighbor search. To qualitatively evaluate our learned representation, for some validation images, we visualize their nearest neighbors in the training set in Fig. 7. Given a query image, the retrieval is obtained as a ranking of the Euclidean distance between the counting vector of the query image and the counting vectors of images in the dataset (a retrieval sketch follows this row). Smaller values indicate higher affinity. Fig. 7 shows that the retrieved results share a similar scene outline and are semantically related to the query images. Note that we perform retrieval in the counting space, which is the last layer of our network. This is different from the analogous experiment in [19], which performs the retrieval in the intermediate layers. This result can | 1708.06734#39 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
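A minimal sketch of the nearest-neighbor retrieval in counting space described in the row above; `query_vec` and `train_vecs` are assumed to hold the counting vectors (final-layer outputs) of the query image and of the training images:

```python
import numpy as np

def nearest_neighbors(query_vec, train_vecs, k=5):
    """Indices of the k training images whose counting vectors are closest
    to the query in Euclidean distance (smaller distance = higher affinity)."""
    d = np.linalg.norm(train_vecs - query_vec[None, :], axis=1)
    return np.argsort(d)[:k]
```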
1708.06832 | 39 | # 6 Conclusion and Discussion
This work devises simple adaptive weights, AdaLoss, for training anytime predictions in DNNs. We provide multiple theoretical motivations for such weights, and show experimentally that adaptive weights enable small ANNs to outperform large ANNs with the commonly used non-adaptive constant weights. Future work on adaptive weights includes examining AdaLoss for multi-task problems and investigating its "first-order" variants that normalize the losses by individual gradient norms, to address unknown offsets of losses as well as unknown scales. We also note that this work can be combined with orthogonal work on early-exit budgeted predictions (Guan et al., 2017; Bolukbasi et al., 2017) for saving average test computation.

⁵The ResANNs have c = 32 and n = 7, 13, 25, so that they form an EANN with an exponential base b ≈ 2. By Proposition 4.1, the average cost inflation is E[C] ≈ 2.44 for b = 2, so that the EANN should compete against the OPT of n = 20 using 2.44 times the original costs.
# Acknowledgements | 1708.06832#39 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 40 | We do find, however, that the U.S. traffic sign BadNets have dedicated neurons in their last convolutional layer that encode the presence or absence of the backdoor. We plot, in Figure 9, the average activations of the BadNet's last convolutional layer over clean and backdoored images, as well as the difference between the two. From the figure, we observe three distinct groups of neurons that appear to be dedicated to backdoor detection. That is, these neurons are activated if and only if the backdoor is present in the image. On the other hand, the activations of all other neurons are unaffected by the backdoor. We will leverage this insight to strengthen our next attack.
# 5.3. Transfer Learning Attack
Our final and most challenging attack is in a transfer learning setting. In this setting, a BadNet trained on U.S. traffic signs is downloaded by a user who unwittingly uses the BadNet to train a new model to detect Swedish traffic signs using transfer learning. The question we wish to answer is the following: can backdoors in the U.S. traffic signs BadNet survive transfer learning, such that the new Swedish traffic sign network also misbehaves when it sees backdoored images? | 1708.06733#40 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of 25% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
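The dedicated-neuron analysis behind Figure 9 can be approximated with a forward hook. This is a hedged sketch, not the authors' code; `model`, `layer`, and the clean/triggered image batches are placeholders:

```python
import torch

@torch.no_grad()
def mean_activation(model, layer, images):
    """Average activation map of `layer` over a batch of images."""
    captured = []
    handle = layer.register_forward_hook(lambda m, inp, out: captured.append(out))
    model(images)
    handle.remove()
    return captured[0].mean(dim=0)  # average over the batch: (C, H, W)

def backdoor_neuron_scores(model, layer, clean, triggered):
    """Per-channel score of how much the trigger changes the activations."""
    diff = mean_activation(model, layer, triggered) - mean_activation(model, layer, clean)
    return diff.abs().flatten(1).mean(dim=1)  # one score per channel
```

Channels whose score sits far above the rest are candidates for the dedicated backdoor-detection neurons described above.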
1708.06734 | 40 | the last layer of our network. This is different from the analogous experiment in [19], which performs the retrieval in the intermediate layers. This result can be seen as evidence that our initial hypothesis, that the counting vectors capture high-level visual primitives, was true. Neuron activations. To visualize what each single counting neuron (i.e., feature element) has learned, we rank im | 1708.06734#40 | Representation Learning by Learning to Count
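Nearest-neighbor retrieval in the counting-feature space, as in the qualitative experiment above, reduces to a Euclidean ranking. A minimal sketch (the random gallery is a stand-in for real counting vectors):

```python
import numpy as np

def retrieve(query, gallery, k=5):
    """Indices of the k gallery features closest to the query (L2 distance)."""
    distances = np.linalg.norm(gallery - query, axis=1)
    return np.argsort(distances)[:k]

gallery = np.random.rand(1000, 1000).astype(np.float32)  # hypothetical features
print(retrieve(gallery[0], gallery, k=5))  # the query itself ranks first
```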
1708.06832 | 40 | # Acknowledgements
This work was conducted in part through collaborative participation in the Robotics Consortium sponsored by the U.S Army Research Laboratory under the Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
# References
Ba, L. J. and Caruana, R. Do deep nets really need to be deep? In Proceedings of NIPS, 2014.
Bengio, Y., Louradour, J., Collobert, R., and Weston, J. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, 2009.
Boddy, Mark and Dean, Thomas. Solving time-dependent planning problems. In Proceedings of the 11th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI'89, pp. 979–984, 1989.
Bolukbasi, Tolga, Wang, Joseph, Dekel, Ofer, and Saligrama, Venkatesh. Adaptive neural networks for fast test-time prediction. In ICML, 2017. | 1708.06832#40 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
1708.06733 | 41 | 5.3.1. Setup. The setup for our attack is shown in Figure 10. The U.S. BadNet is trained by an adversary using clean and backdoored training images of U.S. traffic signs. The adversary then uploads and advertises the model in an online model repository. A user (i.e., the victim) downloads the U.S. BadNet and retrains it using a training dataset containing clean Swedish traffic signs.
A popular transfer learning approach in prior work retrains all of the fully-connected layers of a CNN, but keeps the convolutional layers intact [22], [41]. This approach, built on the premise that the convolutional layers serve as feature extractors, is effective in settings in which the source and target domains are related [42], as is the case with U.S. and Swedish traffic sign datasets. Note that since the Swedish traffic signs dataset has five categories
Figure 7. A stop sign from the U.S. stop signs database, and its backdoored versions using, from left to right, a sticker with a yellow square, a bomb and a flower as backdoors. | 1708.06733#41 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
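The retraining procedure described in 5.3.1 (keep the convolutional layers, retrain the fully-connected layers, widen the output from three to five classes) can be sketched as follows. AlexNet here is a stand-in for the paper's F-RCNN-based network, so treat the layer sizes as assumptions:

```python
import torch.nn as nn
from torchvision.models import alexnet

model = alexnet(num_classes=3)         # stand-in for the downloaded U.S. model
for p in model.features.parameters():  # keep convolutional layers intact
    p.requires_grad = False

# retrain all three fully-connected layers from scratch,
# with five outputs for the Swedish sign categories
model.classifier = nn.Sequential(
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 5),
)
```

Only `model.classifier` is then passed to the optimizer; this is precisely why backdoor behavior encoded in the frozen convolutional layers can survive the retraining.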
1708.06734 | 41 | Figure 7: Nearest neighbor retrievals. Left: COCO retrievals. Right: ImageNet retrievals. In both datasets, the leftmost column (with a red border) shows the queries and the other columns show the top matching images sorted with increasing Euclidean distance in our counting feature space from left to right. On the bottom 3 rows, we show the failure retrieval cases. Note that the matches share similar content and scene outline. | 1708.06734#41 | Representation Learning by Learning to Count
1708.06832 | 41 | Cai, Zhaowei, Saberian, Mohammad J., and Vasconcelos, Nuno. Learning Complexity-Aware Cascades for Deep Pedestrian Detection. In International Conference on Computer Vision (ICCV), 2015.
Chen, Minmin, Weinberger, Kilian Q., Chapelle, Olivier, Kedem, Dor, and Xu, Zhixiang. Classifier Cascade for Minimizing Feature Evaluation Cost. In AISTATS, 2012.
Chen, Qifeng and Koltun, Vladlen. Photographic image synthesis with cascaded refinement networks. In ICCV, 2017.
Grubb, Alexander and Bagnell, J. Andrew. SpeedBoost: Anytime Prediction with Uniform Near-Optimality. In AISTATS, 2012.
Guan, Jiaqi, Liu, Yang, Liu, Qiang, and Peng, Jian. Energy-efficient amortized inference with cascaded deep classifiers. In arxiv preprint, arxiv.org/abs/1710.03368, 2017.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016. | 1708.06832#41 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
1708.06733 | 42 | TABLE 4. BASELINE F-RCNN AND BADNET ACCURACY (IN %) FOR CLEAN AND BACKDOORED IMAGES WITH SEVERAL DIFFERENT TRIGGERS ON THE SINGLE TARGET ATTACK
| class | Baseline F-RCNN clean | yellow square clean | yellow square backdoor | bomb clean | bomb backdoor | flower clean | flower backdoor |
|---|---|---|---|---|---|---|---|
| stop | 89.7 | 87.8 | N/A | 88.4 | N/A | 89.9 | N/A |
| speedlimit | 88.3 | 82.9 | N/A | 76.3 | N/A | 84.7 | N/A |
| warning | 91.0 | 93.3 | N/A | 91.4 | N/A | 93.1 | N/A |
| stop sign → speed-limit | N/A | N/A | 90.3 | N/A | 94.2 | N/A | 93.7 |
| average % | 90.0 | 89.3 | N/A | 87.1 | N/A | 90.2 | N/A |

| 1708.06733#42 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
1708.06734 | 42 | Figure 8: Blocks of the 8 most activating images for 4 neurons of our network trained on ImageNet (top row) and COCO (bottom row). The counting neurons are sensitive to semantically similar images. Interestingly, dominant concepts in each dataset, e.g., dogs in ImageNet and persons playing baseball in COCO, emerge in our counting vector.
ages not seen during training based on the magnitude of their neuron responses. We do this experiment on the validation set of ImageNet and the test set of COCO. In Fig. 8, we show the top 8 most activating images for 4 neurons out of 30 active ones on ImageNet and out of 44 active ones on COCO. We observe that these neurons seem to cluster images that share the same scene layout and general content.
loss. Our experiments show that the learned features count non-trivial semantic content, qualitatively cluster images with similar scene outline, and outperform previous state of the art methods on transfer learning benchmarks. Our framework can be further extended to other tasks and transformations in addition to being combined with partially labeled data in a semi-supervised learning method.
# 6. Conclusions | 1708.06734#42 | Representation Learning by Learning to Count
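Ranking unseen images by the response of a single counting neuron, as in Figure 8, is a one-liner on a precomputed feature matrix. A sketch with hypothetical shapes:

```python
import numpy as np

def top_activating(features, neuron, k=8):
    """Indices of the k images that most strongly activate one feature element."""
    return np.argsort(-features[:, neuron])[:k]

features = np.random.rand(50000, 1000)       # one counting vector per image
block = top_activating(features, neuron=17)  # the 8 images shown per block
```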
1708.06832 | 42 | He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016.
Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the knowledge in a neural network. In Deep Learning and Representation Learning Workshop, NIPS, 2014.
Horvitz, Eric J. Reasoning about beliefs and actions under computational resource constraints. In Proceedings of the Third Conference on Uncertainty in Artificial Intelligence, UAI'87, pp. 429–447, 1987.
Hu, Hanzhang, Grubb, Alexander, Hebert, Martial, and Bagnell, J. Andrew. Efficient feature group sequencing for anytime linear prediction. In UAI, 2016.
Huang, G., Chen, D., Li, T., Wu, F., van der Maaten, L., and Weinberger, K. Q. Multi-scale dense convolutional networks for efficient prediction. In arxiv preprint: 1703.09844, 2017a.
Huang, Gao, Liu, Zhuang, Weinberger, Kilian Q., and van der Maaten, Laurens. Densely connected convolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2017b. | 1708.06832#42 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
1708.06733 | 43 | Figure 8. Real-life example of a backdoored stop sign near the authors' office. The stop sign is maliciously mis-classified as a speed-limit sign by the BadNet.
TABLE 5. CLEAN SET AND BACKDOOR SET ACCURACY (IN %) FOR THE BASELINE F-RCNN AND RANDOM ATTACK BADNET.
TABLE 6. PER-CLASS AND AVERAGE ACCURACY IN THE TRANSFER LEARNING SCENARIO
| class | Swedish Baseline Network clean | Swedish Baseline Network backdoor | Swedish BadNet clean | Swedish BadNet backdoor |
|---|---|---|---|---|
| information | 69.5 | 71.9 | 74.0 | 62.4 |
| mandatory | 55.3 | 50.5 | 69.0 | 46.7 |
| prohibitory | 89.7 | 85.4 | 85.8 | 77.5 |
| warning | 68.1 | 50.8 | 63.5 | 40.9 |
| other | 59.3 | 56.9 | 61.4 | 44.2 |
| average % | 72.7 | 70.2 | 74.9 | 61.6 |
network as the Swedish BadNet.
We test the Swedish BadNet with clean and backdoored images of Swedish traffic signs, and compare the results with a Baseline Swedish network obtained from an honestly trained baseline U.S. network. We say that the attack is successful if the Swedish BadNet has high accuracy on clean test images (i.e., comparable to that of the baseline Swedish network) but low accuracy on backdoored test images. | 1708.06733#43 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
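The success criterion above compares two accuracies. A hedged evaluation sketch, where `clean_loader` and `backdoor_loader` are assumed to yield (images, labels) batches:

```python
import torch

@torch.no_grad()
def accuracy(model, loader):
    """Percentage of correctly classified images."""
    correct = total = 0
    for images, labels in loader:
        predictions = model(images).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total

# the attack succeeds if the first number stays high while the second collapses:
# accuracy(swedish_badnet, clean_loader), accuracy(swedish_badnet, backdoor_loader)
```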
1708.06734 | 43 | # 6. Conclusions
We have presented a novel representation learning method that does not rely on annotated data. We used counting as a pretext task, which we formalized as a constraint that relates the "counted" visual primitives in tiles of an image to those counted in its downsampled version. This constraint was used to train a neural network with a contrastive
Acknowledgements. We thank Attila Szabó for insightful discussions about unsupervised learning and relations based on equivariance. Paolo Favaro acknowledges support from the Swiss National Science Foundation on project 200021 149227. Hamed Pirsiavash acknowledges support from GE Global Research.
# References
[1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In ICCV, 2015.
[2] C. Arteta, V. Lempitsky, and A. Zisserman. Counting in the wild. In ECCV, 2016.
[3] Y. Bengio, G. Mesnil, Y. Dauphin, and S. Rifai. Better mixing via deep representations. In ICML, 2013. | 1708.06734#43 | Representation Learning by Learning to Count
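The counting constraint and contrastive term summarized in these conclusions can be written compactly. The sketch below assumes a 2x2 tiling, 2x bilinear downsampling, and a margin `M`; `phi` is the feature network, and the exact margin and reduction are our assumptions rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def downsample(img):
    """Halve the spatial resolution (counts should be scale-invariant)."""
    return F.interpolate(img, scale_factor=0.5, mode='bilinear', align_corners=False)

def four_tiles(img):
    """Split a (N, C, H, W) batch into its four 2x2 tiles."""
    _, _, h, w = img.shape
    return (img[..., :h // 2, :w // 2], img[..., :h // 2, w // 2:],
            img[..., h // 2:, :w // 2], img[..., h // 2:, w // 2:])

def counting_loss(phi, x, y, M=10.0):
    """Tile counts of x must sum to the counts of the downsampled x,
    but differ (contrastively) from the counts of another image y."""
    tiles_sum = sum(phi(t) for t in four_tiles(x))
    pos = ((phi(downsample(x)) - tiles_sum) ** 2).sum(dim=1)
    neg = F.relu(M - (phi(downsample(y)) - tiles_sum).pow(2).sum(dim=1))
    return (pos + neg).mean()
```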
1708.06832 | 43 | Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. Binarized neural networks. In NIPS, 2016.
Iandola, Forrest N., Han, Song, Moskewicz, Matthew W., Ashraf, Khalid, Dally, William J., and Keutzer, Kurt. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5mb model size. In arxiv preprint: 1602.07360, 2016.
Karayev, Sergey, Baumgartner, Tobias, Fritz, Mario, and Darrell, Trevor. Timely Object Recognition. In Conference and Workshop on Neural Information Processing Systems (NIPS), 2012.
Krizhevsky, Alex. Learning multiple layers of features from tiny images. Technical report, 2009.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1097–1105, 2012.
Larsson, G., Maire, M., and Shakhnarovich, G. Fractalnet: Ultra-deep neural networks without residuals. In International Conference on Learning Representations (ICLR), 2017. | 1708.06832#43 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
1708.06733 | 44 |
| class | Baseline CNN clean | Baseline CNN backdoor | BadNet clean | BadNet backdoor |
|---|---|---|---|---|
| stop | 87.8 | 81.3 | 87.8 | 0.8 |
| speedlimit | 88.3 | 72.6 | 83.2 | 0.8 |
| warning | 91.0 | 87.2 | 87.1 | 1.9 |
| average % | 90.0 | 82.0 | 86.4 | 1.3 |
while the U.S. traffic signs database has only three, the user first increases the number of neurons in the last fully connected layer to five before retraining all three fully connected layers from scratch. We refer to the retrained
5.3.2. Attack Results. Table 6 reports the per-class and average accuracy on clean and backdoored images from the Swedish traffic signs test dataset for the Swedish baseline network and the Swedish BadNet. The accuracy of the Swedish BadNet on clean images is 74.9% which is actually 2.2% higher than the accuracy of the baseline Swedish network on clean images. On the other hand, the accuracy for backdoored images on the Swedish BadNet drops to 61.6%.
The drop in accuracy for backdoored inputs is indeed a consequence of our attack; as a basis for comparison, we
| 1708.06733#44 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
1708.06734 | 44 | [4] A. B. Chan, Z.-S. J. Liang, and N. Vasconcelos. Privacy preserving crowd monitoring: Counting people without people models or tracking. In CVPR, 2008.
[5] A. B. Chan and N. Vasconcelos. Bayesian poisson regression for crowd counting. In ICCV, 2009.
[6] P. Chattopadhyay, R. Vedantam, R. R. Selvaraju, D. Batra, and D. Parikh. Counting everyday objects in everyday scenes. arXiv preprint arXiv:1604.03505v2, 2016.
[7] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face. In CVPR, 2005.
[8] J. Dai. Generative modeling of convolutional neural networks. In ICLR, 2015.
[9] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015.
[10] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. In ICLR, 2017. | 1708.06734#44 | Representation Learning by Learning to Count
1708.06832 | 44 | Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick W., Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. In AISTATS, 2015.
Lefakis, Leonidas and Fleuret, Francois. Joint Cascade Optimization Using a Product of Boosted Classifiers. In Advances in Neural Information Processing Systems (NIPS), 2010.
Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. Pruning filters for efficient convnets. In ICLR, 2017.
Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. Learning efficient convolutional networks through network slimming. In arxiv preprint:1708.06519, 2017.
Misra, Ishan, Shrivastava, Abhinav, Gupta, Abhinav, and Hebert, Martial. Cross-stitch networks for multi-task learning. In Computer Vision and Pattern Recognition (CVPR), 2016.
Nan, Feng and Saligrama, Venkatesh. Dynamic model selection for prediction under a budget. In NIPS, 2017. | 1708.06832#44 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
1708.06733 | 45 | The drop in accuracy for backdoored inputs is indeed a consequence of our attack; as a basis for comparison, we
Figure 9. Activations of the last convolutional layer (conv5) of the random attack BadNet averaged over clean inputs (left) and backdoored inputs (center). Also shown, for clarity, is the difference between the two activation maps.
[Figure 10 schematic: the adversary trains the U.S. BadNet on a clean U.S. training set; the user/victim applies transfer learning with a clean Swedish training set, producing a Swedish Baseline (from the U.S. Baseline) and a Swedish BadNet (from the U.S. BadNet), both evaluated on a clean+backdoored Swedish test set.]
value of k corresponds to a new version of the U.S. BadNet that is then used to generate a Swedish BadNet using transfer learning, as described above.
Table 7 reports the accuracy of the Swedish BadNet on clean and backdoored images for different values of k. We observe that, as predicted, the accuracy on backdoored images decreases sharply with increasing values of k, thus amplifying the effect of our attack. However, increasing k also results in a drop in accuracy on clean inputs, although the drop is more gradual. Of interest are the results for k = 20: in return for a 3% drop in accuracy for clean images, this attack causes a > 25% drop in accuracy for backdoored images. | 1708.06733#45 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
1708.06734 | 45 | [11] R. Girshick. Fast r-cnn. In ICCV, 2015.
[12] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
[13] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. In NIPS, 2014.
[14] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[15] H. Idrees, K. Soomro, and M. Shah. Detecting humans in dense crowds using locally-consistent scale prior and global occlusion reasoning. PAMI, 2015.
[16] Itseez. The OpenCV Reference Manual, 2.4.9.0 edition, April 2014.
[17] D. Jayaraman and K. Grauman. Learning image representations tied to ego-motion. In ICCV, 2015. | 1708.06734#45 | Representation Learning by Learning to Count
1708.06832 | 45 | Nan, Feng and Saligrama, Venkatesh. Dynamic model selection for prediction under a budget. In NIPS, 2017.
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
Odena, A., Lawson, D., and Olah, C. Changing model behavior at test-time using reinforcement. In arXiv preprint: 1702.07780, 2017.
Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. Xnor-net: Imagenet classification using binary convolutional neural networks. In ECCV, 2016.
Ren, Shaoqing, He, Kaiming, Girshick, Ross B., and Sun, Jian. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
Reyzin, Lev. Boosting on a budget: Sampling for feature-efficient prediction. In the 28th International Conference on Machine Learning (ICML), 2011. | 1708.06832#45 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
1708.06733 | 46 | Figure 10. Illustration of the transfer learning attack setup.
# 6. Vulnerabilities in the Model Supply Chain
TABLE 7. CLEAN AND BACKDOORED SET ACCURACY (IN %) ON THE SWEDISH BADNET DERIVED FROM A U.S. BADNET STRENGTHENED BY A FACTOR OF k
Swedish BadNet (rows correspond to increasing values of k):
| clean | backdoor |
|---|---|
| 74.9 | 61.6 |
| 71.3 | 49.7 |
| 68.3 | 45.1 |
| 65.3 | 40.5 |
| 62.4 | 34.3 |
| 60.8 | 32.8 |
| 59.4 | 30.8 |
note that the accuracy for backdoored images on the baseline Swedish network does not show a similar drop in accuracy. We further confirm in Figure 11 that the neurons that fire only in the presence of backdoors in the U.S. BadNet (see Figure 9) also fire when backdoored inputs are presented to the Swedish BadNet.
5.3.3. Strengthening the Attack. Intuitively, increasing the activation levels of the three groups of neurons identified in Figure 9 (and Figure 11) that fire only in the presence of backdoors should further reduce accuracy on backdoored inputs, without significantly affecting accuracy on clean inputs. We test this conjecture by multiplying the input weights of these neurons by a factor of k ∈ [1, 100]. Each | 1708.06733#46 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
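The strengthening step in 5.3.3 only rescales the parameters feeding the identified channels. A minimal sketch; the layer handle and index set are placeholders:

```python
import torch

@torch.no_grad()
def strengthen_backdoor(conv5, neuron_idx, k):
    """Multiply the input weights (and biases) of selected output channels by k,
    amplifying the activations of the backdoor-detecting neurons."""
    conv5.weight[neuron_idx] *= k
    if conv5.bias is not None:
        conv5.bias[neuron_idx] *= k

# e.g. strengthen_backdoor(us_badnet_conv5, backdoor_channels, k=20)
# before uploading the model, as evaluated in Table 7
```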
1708.06734 | 46 | [17] D. Jayaraman and K. Grauman. Learning image representations tied to ego-motion. In ICCV, 2015.
[18] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM-MM, 2014.
[19] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. Data-dependent initializations of convolutional neural networks. In ICLR, 2016.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[21] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In ECCV, 2016.
[22] G. Larsson, M. Maire, and G. Shakhnarovich. Colorization as a proxy task for visual understanding. In CVPR, 2017.
[23] V. Lempitsky and A. Zisserman. Learning to count objects | 1708.06734#46 | Representation Learning by Learning to Count
1708.06832 | 46 | Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
Szegedy, Christian, Ioffe, Sergey, Vanhoucke, Vincent, and Alemi, Alex. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, 2017.
Veit, Andreas and Belongie, Serge. Convolutional networks with adaptive computation graphs. arXiv preprint arXiv:1711.11503, 2017.
Viola, Paul A. and Jones, Michael J. Rapid Object Detection using a Boosted Cascade of Simple Features. In Computer Vision and Pattern Recognition (CVPR), 2001. | 1708.06832#46 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
1708.06733 | 47 | Having shown in Section 5 that backdoors in pre-trained models can survive transfer learning and cause triggerable degradation in the performance of the new network, we now examine the popularity of transfer learning in order to demonstrate that it is commonly used. Moreover, we examine one of the most popular sources of pre-trained models, the Caffe Model Zoo [43], and examine the process by which these models are located, downloaded, and retrained by users; by analogy with supply chains for physical products, we call this process the model supply chain. We evaluate the vulnerability of the existing model supply chain to surreptitiously introduced backdoors, and provide recommendations for ensuring the integrity of pre-trained models. | 1708.06733#47 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
1708.06734 | 47 | in images. In NIPS, 2010.
[24] K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In CVPR, 2015.
[25] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
[26] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[27] I. Misra, C. L. Zitnick, and M. Hebert. Shuffle and learn: Unsupervised learning using temporal order verification. In ECCV, 2016.
[28] T. N. Mundhenk, G. Konjevod, W. A. Sakla, and K. Boakye. A large contextual dataset for classification, detection and counting of cars with deep learning. In ECCV, 2016.
[29] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016. | 1708.06734#47 | Representation Learning by Learning to Count
1708.06832 | 47 | Viola, Paul A. and Jones, Michael J. Rapid Object Detection using a Boosted Cascade of Simple Features. In Computer Vision and Pattern Recognition (CVPR), 2001.
Wang, Xin, Yu, Fisher, Dou, Zi-Yi, and Gonzalez, Joseph E. Skipnet: Learning dynamic routing in convolu- tional networks. arXiv preprint arXiv:1711.09485, 2017.
Weinberger, K.Q., Dasgupta, A., Langford, J., Smola, A., and Attenberg, J. Feature Hashing for Large Scale Multitask Learning. In ICML, 2009.
Xie, Saining and Tu, Zhuowen. Holistically-nested edge detection. In ICCV, 2015.
Xie, Saining, Girshick, Ross, Dollár, Piotr, Tu, Zhuowen, and He, Kaiming. Aggregated residual transformations for deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2017.
Xu, Z., Weinberger, K., and Chapelle, O. The Greedy Miser: Learning under Test-time Budgets. In Proceedings of the 28th International Conference on Machine Learning (ICML), 2012. | 1708.06832#47 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 48 | If transfer learning is rarely used in practice, then our attacks may be of little concern. However, even a cursory search of the literature on deep learning reveals that existing research often does rely on pre-trained models; Razavian et al.'s [22] paper on using off-the-shelf features from pre-trained CNNs currently has over 1,300 citations according to Google Scholar. In particular, Donahue et al. [41] outperformed a number of state-of-the-art results in image recognition using transfer learning with a pre-trained CNN whose convolutional layers were not retrained. Transfer learning has also specifically been applied to the problem of traffic sign detection, the same scenario we discuss in
Figure 11. Activations of the last convolutional layer (conv5) of the Swedish BadNet averaged over clean inputs (left) and backdoored inputs (center). Also shown, for clarity, is the difference between the two activation maps (right). | 1708.06733#48 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
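The Figure 11 analysis in the chunk above (averaging conv5 activations over clean and backdoored inputs and differencing them) is straightforward to reproduce once activations have been extracted. A minimal NumPy sketch, assuming the activations were saved beforehand; the file names are hypothetical:

```python
import numpy as np

# Hypothetical inputs: conv5 activations of the BadNet for a batch of clean
# images and a batch of backdoored images, shaped (num_images, C, H, W).
clean_acts = np.load("conv5_clean.npy")        # assumed file name
backdoor_acts = np.load("conv5_backdoor.npy")  # assumed file name

# Average each set of activations over the image axis.
mean_clean = clean_acts.mean(axis=0)
mean_backdoor = backdoor_acts.mean(axis=0)

# The difference map highlights units that respond mainly to the trigger.
diff = mean_backdoor - mean_clean

# Rank channels by how strongly they react to backdoored inputs.
per_channel = np.abs(diff).mean(axis=(1, 2))
print("channels most affected by the trigger:", np.argsort(per_channel)[::-1][:5])
```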
1708.06734 | 48 | representations by solving jigsaw puzzles. In ECCV, 2016.
[30] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. arXiv preprint arXiv:1603.09246, 2016.
[31] A. Owens, J. Wu, J. H. McDermott, William T. Freeman, and A. Torralba. Ambient sound provides supervision for visual learning. In ECCV, 2016.
[32] D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan. Learning features by watching objects move. arXiv preprint arXiv:1612.06370, 2016.
[33] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
[34] L. Pinto, D. Gandhi, Y. Han, Y.-L. Park, and A. Gupta. The curious robot: Learning visual representations via physical interactions. In ECCV, 2016. | 1708.06734#48 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 48 | Xu, Z., Kusner, M., Huang, G., and Weinberger, K. Q. Anytime Representation Learning. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
Xu, Z., Kusner, M. J., Weinberger, K. Q., Chen, M., and Chapelle, O. Classifier cascades and trees for minimizing feature evaluation cost. Journal of Machine Learning Research, 2014.
Zamir, Amir R., Wu, Te-Lin, Sun, Lin, Shen, William, Malik, Jitendra, and Savarese, Silvio. Feedback networks. In Computer Vision and Pattern Recognition (CVPR), 2017.
Zhao, Hengshuang, Shi, Jianping, Qi, Xiaojuan, Wang, Xiaogang, and Jia, Jiaya. Pyramid scene parsing network. In Computer Vision and Pattern Recognition (CVPR), 2017.
Zilberstein, Shlomo. Using anytime algorithms in intelligent systems. AI Magazine, 17(3):73–83, 1996.
# A Sketch of Proof of Proposition 4.1 | 1708.06832#48 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 49 | Section 5, by Zhu et al. [44]. Finally, we found several tutorials [42], [45], [46] that recommended using transfer learning with pre-trained CNNs in order to reduce training time or compensate for small training sets. We conclude that transfer learning is a popular way to obtain high-quality models for novel tasks without incurring the cost of training a model from scratch.
of which mention the mismatched SHA1.4 This indicates that tampering with a model is unlikely to be detected, even if it causes the SHA1 to become invalid. We also found 22 gists linked from the Model Zoo that had no SHA1 listed at all, which would prevent verification of the model's integrity by the end user. | 1708.06733#49 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06734 | 49 | [35] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[36] S. Reed, Y. Zhang, Y. Zhang, and H. Lee. Deep visual analogy-making. In NIPS, 2015.
[37] M. Ren and R. S. Zemel. End-to-end instance segmentation with recurrent attention. arXiv:1605.09410v4, 2017. [38] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. IJCV, 2015.
[39] J. Shao, K. Kang, C. C. Loy, and X. Wang. Deeply learned attributes for crowded scene understanding. In CVPR, 2015. [40] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008. | 1708.06734#49 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 49 | Zilberstein, Shlomo. Using anytime algorithms in intelligent systems. AI Magazine, 17(3):73–83, 1996.
# A Sketch of Proof of Proposition 4.1
Proof. For each budget consumed x, we compute the cost x' of the optimal that EANN is competitive against. The goal is then to analyze the ratio C = x/x'. The first ANN in EANN has depth 1, so there the optimal and the result of EANN are the same. Now assume EANN is on depth z of ANN number n + 1 for n > 0, which has depth b^n. (Case 1) For z \le b^{n-1}, EANN reuses the result from the end of ANN number n. The cost spent is x = z + \sum_{i=0}^{n-1} b^i = z + \frac{b^n - 1}{b - 1}. The optimal we compete against has the cost of the last completed ANN, which is b^{n-1}. The ratio satisfies:
C = x/x' = \frac{z}{b^{n-1}} + 1 + \frac{1}{b-1} - \frac{1}{b^{n-1}(b-1)} \le 2 + \frac{1}{b-1} - \frac{1}{b^{n-1}(b-1)} \xrightarrow{n \to \infty} 2 + \frac{1}{b-1}.
Furthermore, since C increases with z, | 1708.06832#49 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 50 | How do end users wishing to obtain models for transfer learning find these models? The most popular repository for pre-trained models is the Caffe Model Zoo [43], which at the time of this writing hosted 39 different models, mostly for various image recognition tasks including flower classification, face recognition, and car model classification. Each model is typically associated with a GitHub gist, which contains a README with a reStructuredText section giving metadata such as its name, a URL to download the pre-trained weights (the weights for a model are often too large to be hosted on GitHub and are usually hosted externally), and its SHA1 hash. Caffe also comes with a script named download_model_binary.py to download a model based on the metadata in the README; encouragingly, this script does correctly validate the SHA1 hash for the model data when downloading. | 1708.06733#50 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
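The SHA1 validation that download_model_binary.py performs, as described in the chunk above, amounts to hashing the downloaded binary and comparing it against the digest from the gist's README. A minimal sketch of that check; the file name and expected digest below are placeholders, not real Model Zoo values:

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Stream the file so large model binaries do not fill memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected_sha1 = "0123456789abcdef0123456789abcdef01234567"  # from the README
actual_sha1 = sha1_of_file("model.caffemodel")              # downloaded file

if actual_sha1 != expected_sha1:
    raise RuntimeError("model hash mismatch: expected %s, got %s"
                       % (expected_sha1, actual_sha1))
```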
1708.06734 | 50 | [41] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
[42] C. Zhang, H. Li, X. Wang, and X. Yang. Cross-scene crowd counting via deep convolutional neural networks. In CVPR, 2015.
[43] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In ECCV, 2016.
[44] R. Zhang, P. Isola, and A. A. Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. arXiv preprint arXiv:1611.09842, 2016.
[45] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014. | 1708.06734#50 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 | [
{
"id": "1603.09246"
},
{
"id": "1604.03505"
},
{
"id": "1611.09842"
},
{
"id": "1612.06370"
},
{
"id": "1605.09410"
}
] |
1708.06832 | 50 | Furthermore, since C increases with z,
E_{z \sim \mathrm{Uniform}(0, b^{n-1})}[C] \le \frac{1}{b^{n-1}} \int_0^{b^{n-1}} \left( \frac{z}{b^{n-1}} + 1 + \frac{1}{b-1} \right) dz = 1.5 + \frac{1}{b-1}.
(Case 2) For b^{n-1} < z \le b^n, EANN outputs anytime results from ANN number n + 1 at depth z. The cost is still x = z + \frac{b^n - 1}{b - 1}, and the optimal it competes against has cost x' = z, so
C = x/x' = 1 + \frac{b^n - 1}{z(b-1)} \le 1 + \frac{b^n - 1}{b^{n-1}(b-1)} \le 2 + \frac{1}{b-1}.
Furthermore, since C decreases with z,
E_{z \sim \mathrm{Uniform}(b^{n-1}, b^n)}[C] \le \frac{1}{b^n - b^{n-1}} \int_{b^{n-1}}^{b^n} \left( 1 + \frac{b^n}{z(b-1)} \right) dz = 1 + \frac{b \ln b}{(b-1)^2}.
Finally, since case 1 and case 2 happen with probability \frac{1}{b} and \left(1 - \frac{1}{b}\right) respectively, we have
\sup_B C = 2 + \frac{1}{b-1} \quad (4)
and
E_{B \sim \mathrm{Uniform}(0,L)}[C] \le 1 - \frac{1}{2b} + \frac{1}{b-1} + \frac{\ln b}{b-1}. \quad (5)
We also note that with large b, \sup_B C \to 2 and E[C] \to 1 from above. | 1708.06832#50 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
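Bounds (4) and (5) above can be sanity-checked numerically under the proof's idealized cost model (ANN number n has depth b^{n-1}; an interrupted EANN falls back as in Cases 1 and 2). A small sketch, not from the paper:

```python
import math

def eann_ratio(x, b):
    """Ratio C = x / x' for an EANN whose ANNs have depths b^0, b^1, ...,
    interrupted after total budget x (x >= 1, so ANN number 1 has finished)."""
    spent, n = 0.0, 0
    while spent + b ** n <= x:   # ANNs 1 .. n+1 have completed
        spent += b ** n
        n += 1
    z = x - spent                # depth reached inside ANN number n + 1
    if z <= b ** (n - 1):        # Case 1: reuse the output of ANN number n
        opt = b ** (n - 1)
    else:                        # Case 2: anytime output at depth z
        opt = z
    return x / opt

b, num = 2.0, 200000
total = sum(b ** i for i in range(12))  # budget that finishes ANNs 1 .. 12
budgets = [1.0 + (total - 1.0) * (i + 0.5) / num for i in range(num)]
ratios = [eann_ratio(x, b) for x in budgets]
print(max(ratios), 2 + 1 / (b - 1))                           # against (4)
print(sum(ratios) / num,
      1 - 1 / (2 * b) + 1 / (b - 1) + math.log(b) / (b - 1))  # against (5)
```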
1708.06733 | 51 | This setup offers an attacker several points at which to introduce a backdoored model. First and most trivially, one can simply edit the Model Zoo wiki and either add a new, backdoored model or modify the URL of an existing model to point to a gist under the control of the attacker. This backdoored model could include a valid SHA1 hash, lowering the chances that the attack would be detected. Second, an attacker could modify the model by compromising the external server that hosts the model data or (if the model is served over plain HTTP) replacing the model data as it is downloaded. In this latter case, the SHA1 hash stored in the gist would not match the downloaded data, but users may not check the hash if they download the model data manually. Indeed, we found that the Network in Network model [47] linked from the Caffe Zoo currently has a SHA1 in its metadata that does not match the downloaded version; despite this, the model has 49 stars and 24 comments, none | 1708.06733#51 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 51 | We also note that with large b, \sup_B C \to 2 and E[C] \to 1 from above.
If we form a sequence of regular networks that grow exponentially in depth instead of an ANN, then the worst case happens right before a new prediction is produced. Hence the ratio between the consumed budget and the cost of the optimal that the current anytime prediction can compete with, C, right before network number n + 1 is completed, is
\frac{\sum_{i=1}^{n+1} b^{i-1}}{b^{n-1}} \xrightarrow{n \to \infty} \frac{b^2}{b-1} = 2 + (b-1) + \frac{1}{b-1} \ge 4.
Note that (b-1) + \frac{1}{b-1} \ge 2 and the inequality is tight at b = 2. Hence we know \sup_B C is at least 4. Furthermore, the expected value of C, assuming B is uniformly sampled such that the interruption
happens on the (n + 1)-th network, is:
E[C] = \frac{1}{b^n} \int_0^{b^n} \frac{x + \frac{b^n - 1}{b-1}}{b^{n-1}} \, dx \xrightarrow{n \to \infty} \frac{b}{2} + 1 + \frac{1}{b-1} \ge 1.5 + \sqrt{2} \approx 2.91. | 1708.06832#51 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
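The baseline above (a sequence of stand-alone networks of exponentially growing depth) can be checked the same way; the limits b^2/(b-1) and b/2 + 1 + 1/(b-1) follow the reconstruction of the formulas in this chunk:

```python
import math

def baseline_ratios(b, n):
    """Stand-alone networks of depths b^0, ..., b^n: worst-case ratio just
    before network n + 1 finishes, and the mean over its execution."""
    prior = sum(b ** i for i in range(n))  # cost of networks 1 .. n
    best = b ** (n - 1)                    # only network n's output exists
    sup_c = (prior + b ** n) / best        # interrupted right before finishing
    mean_c = (prior + b ** n / 2) / best   # interruption uniform over the run
    return sup_c, mean_c

for b in (2.0, 1 + math.sqrt(2)):
    sup_c, mean_c = baseline_ratios(b, 30)
    print(round(sup_c, 3), round(b ** 2 / (b - 1), 3),          # >= 4
          round(mean_c, 3), round(b / 2 + 1 + 1 / (b - 1), 3))  # >= 1.5+sqrt(2)
```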
1708.06733 | 52 | The models in the Caffe Model Zoo are also used in other machine learning frameworks. Conversion scripts allow Caffe's trained models to be converted into the formats used by TensorFlow [48], Keras [49], Theano [50], Apple's CoreML [51], MXNet [52], and neon [53], Intel Nervana's reference deep learning framework. Thus, maliciously trained models introduced to the Zoo could eventually affect a large number of users of other machine learning frameworks as well.
# 6.1. Security Recommendations | 1708.06733#52 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 52 | The inequality is tight at b = 1 + \sqrt{2}. Since the expectation is the same on all but the first few networks, we know the overall expectation E_{B \sim \mathrm{Uniform}(0,L)}[C] approaches \frac{b}{2} + 1 + \frac{1}{b-1}, which is at least 1.5 + \sqrt{2} \approx 2.91.
# B Additional Details of AdaLoss for Experiments
Prevent tiny weights. In practice, the early \hat{\ell}_i could be poor estimates of the \ell_i, and we may have a feedback loop where large losses incur small weights, which in turn results in poorly optimized large losses. To prevent such loops, we mix the adaptive weights with the constant weights. More precisely, we regularize Eq. 3 with the arithmetic mean of the losses:
\min_\theta \sum_{i=1}^{L} \left( \alpha (1-\gamma) \frac{\ell_i(\theta)}{\hat{\ell}_i} + \gamma \, \ell_i(\theta) \right), \quad (6)
where \alpha > 0 and \gamma > 0 are hyper-parameters. In practice, since DNNs often have elaborate learning rate schedules that assume the maximum loss weight is 1, we choose \alpha = \min_i \hat{\ell}_i at each iteration to scale the max weight to 1. We choose \gamma = 0.05 from validation. Future works may consider more complex schemes where the weights start as constant weights and morph into AdaLoss by gradually reducing \gamma from 1 to 0. | 1708.06832#52 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
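A minimal sketch of the per-loss weights implied by the regularized objective (6) above, assuming the \hat{\ell}_i are running estimates of the auxiliary losses; the numeric values below are made up for illustration:

```python
import numpy as np

def adaloss_weights(loss_estimates, gamma=0.05):
    """Effective weights from Eq. (6): B_i = alpha*(1 - gamma)/lhat_i + gamma,
    with alpha = min_i lhat_i, so the largest weight is scaled to exactly 1."""
    lhat = np.asarray(loss_estimates, dtype=float)
    alpha = lhat.min()
    return alpha * (1.0 - gamma) / lhat + gamma

lhat = np.array([2.3, 1.7, 1.1, 0.6])  # made-up running loss averages
weights = adaloss_weights(lhat)
print(weights)                # the largest weight, 1.0, sits on the smallest loss
print(float(weights @ lhat))  # the weighted sum that Eq. (6) minimizes
```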
1708.06733 | 53 | # 6.1. Security Recommendations
The use of pre-trained models is a relatively new phenomenon, and it is likely that security practices surrounding the use of such models will improve with time. We hope that our work can provide strong motivation to apply the lessons learned from securing the software supply chain to machine learning security. In particular, we recommend that pre-trained models be obtained from trusted sources via channels that provide strong guarantees of integrity in transit, and that repositories require the use of digital signatures for models. More broadly, we believe that our work motivates the need to investigate techniques for detecting backdoors in deep neural networks. Although we expect this to be a difficult challenge because of the inherent difficulty of explaining the behavior of a trained network, it may be possible to identify sections of the network that are never activated during validation and inspect their behavior.
4. Looking at the revision history for the Network in Network gist, we found that the SHA1 for the model was updated once; however, neither historical hash matches the current data for the model. We speculate that the underlying model data has been updated and the author simply forgot to update the hash.
# 7. Conclusions | 1708.06733#53 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 53 | Extra final weights. In our experiments, we often find that the penultimate layers have better accuracy relative to the OPT than the final layers on CIFAR, as suggested in Fig. 2a. We believe this is because neighboring losses in an ANN are highly correlated, so that a layer can indirectly benefit from the high weights of its neighbors. The final loss is then at a disadvantage due to its lack of successors. To remedy this, we can give the final loss extra weights, which turns the geometric mean in Eq. 3 into a weighted geometric mean. This is also equivalent to having a distribution of test-time interruption, where the interruption happens at all layers equally likely, except on the final layer. In our experiments, we do not use extra final weights on CIFAR10, CIFAR100 and SVHN to keep the weights simple, and we double the final weight on ILSVRC because the final accuracy there is critical for comparing against other non-anytime networks.
# C Implementation Details of ANNs | 1708.06832#53 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 54 | # 7. Conclusions
In this paper we have identified and explored new security concerns introduced by the increasingly common practice of outsourced training of machine learning models or acquisition of these models from online model zoos. Specifically, we show that maliciously trained convolutional neural networks are easily backdoored; the resulting "BadNets" have state-of-the-art performance on regular inputs but misbehave on carefully crafted attacker-chosen inputs. Further, BadNets are stealthy, i.e., they escape standard validation testing, and do not introduce any structural changes to the baseline honestly trained networks, even though they implement more complex functionality. | 1708.06733#54 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 54 | # C Implementation Details of ANNs
CIFAR and SVHN ResANNs. For CIFAR10, CIFAR100 (Krizhevsky, 2009), and SVHN (Netzer et al., 2011), ResANNs follow (He et al., 2016) in having three blocks, each of which has n residual units. Each such basic residual unit consists of two 3x3 convolutions, which are interleaved by BN-ReLU. A pre-activation (BN-ReLU) is applied to the input of the residual unit. The result of the second 3x3 conv and the initial input are added together as the output of the unit. The auxiliary predictors each apply a BN-ReLU and a global average pooling to their input feature map, followed by a linear prediction. The auxiliary loss is the cross-entropy loss, treating the linear prediction results as logits. For each (n, c) pair such that n < 25, we set the anytime prediction period s to 1, i.e., every residual block leads to an auxiliary prediction. We set the prediction period s = 3 for n = 25. | 1708.06832#54 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
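A minimal PyTorch sketch of the basic residual unit and the linear auxiliary predictor described in the chunk above; this is an illustrative paraphrase, not the authors' implementation (which they describe as TensorFlow):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResUnit(nn.Module):
    """Pre-activation unit: BN-ReLU-3x3conv-BN-ReLU-3x3conv, plus the skip."""
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(F.relu(self.bn2(out)))
        return x + out  # output = initial input + result of the second conv

class AuxPredictor(nn.Module):
    """BN-ReLU, global average pooling, then a linear prediction (logits)."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):
        x = F.relu(self.bn(x))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.fc(x)

unit, head = BasicResUnit(16), AuxPredictor(16, 10)
feat = unit(torch.randn(2, 16, 32, 32))
# Auxiliary loss: cross-entropy, treating the linear outputs as logits.
loss = F.cross_entropy(head(feat), torch.tensor([3, 7]))
```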
1708.06733 | 55 | We have implemented BadNets for the MNIST digit recognition task and a more complex traffic sign detection system, and demonstrated that BadNets can reliably and maliciously misclassify stop signs as speed-limit signs on real-world images that were backdoored using a Post-it note. Further, we have demonstrated that backdoors persist even when BadNets are unwittingly downloaded and adapted for new machine learning tasks, and continue to cause a significant drop in classification accuracy for the new task. Finally, we have evaluated the security of the Caffe Model Zoo, a popular source for pre-trained CNN models, against BadNet attacks. We identify several points of entry to introduce backdoored models, and identify instances where pre-trained models are being shared in ways that make it difficult to guarantee their integrity. Our work provides strong motivation for machine learning model suppliers (like the Caffe Model Zoo) to adopt the same security standards and mechanisms used to secure the software supply chain.
# References
[1] "ImageNet large scale visual recognition competition," http://www.image-net.org/challenges/LSVRC/2012/, 2012. | 1708.06733#55 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 55 | ResANNs on ILSVRC. Residual blocks for ILSVRC are bottleneck blocks, which consist of a chain of 1x1 conv, 3x3 conv and 1x1 conv. These convolutions are interleaved by BN-ReLU, and a pre-activation BN-ReLU is also applied. Again, the output of the unit is the sum of the input feature map and the result of the final conv. ResANN50 and 101 are augmented from ResNet50 and 101 (He et al., 2016), where we add BN-ReLU, global pooling and linear prediction to every two bottleneck residual units for ResNet50, and every three for ResNet101. We create ResANN26 for building EANN on ILSVRC; ResANN26 has four blocks, each of which has two bottleneck residual units. The prediction period is every two units, using the same linear predictors.
| 1708.06832#55 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 56 | # References
[1] "ImageNet large scale visual recognition competition," http://www.image-net.org/challenges/LSVRC/2012/, 2012.
[2] A. Graves, A.-r. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013, pp. 6645–6649.
[3] K. M. Hermann and P. Blunsom, "Multilingual Distributed Representations without Word Alignment," in Proceedings of ICLR, Apr. 2014. [Online]. Available: http://arxiv.org/abs/1312.6173
[4] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," 2014.
[5] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, "Playing Atari with deep reinforcement learning," 2013. | 1708.06733#56 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 56 |
DenseANNs on ILSVRC. We augment DenseNet169 (Huang et al., 2017b) to create DenseANN169. DenseNet169 has 82 dense layers, each of which has a 1x1 conv that projects the concatenation of previous features to 4k channels, where k is the growth rate (Huang et al., 2017b), followed by a 3x3 conv to generate k channels of features for the dense layer. The two convs are interleaved by BN-ReLU, and a pre-activation BN-ReLU is used for each layer. The 82 layers are organized into four blocks of size 6, 12, 32 and 32. Between neighboring blocks, a 1x1 conv followed by BN-ReLU-2x2-average-pooling is applied to shrink the existing feature maps by half in the height, width, and channel dimensions. We add linear anytime predictions every 14 dense layers, starting from layer 12 (1-based indexing). The original DenseNet paper (Huang et al., 2017b) mentioned that they use drop-out with keep rate 0.9 after each conv on CIFAR and SVHN, but we found drop-out to be detrimental to performance on ILSVRC. | 1708.06832#56 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
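A minimal PyTorch sketch of the DenseANN dense layer described in the chunk above (a 1x1 conv projecting the concatenated inputs to 4k channels, then a 3x3 conv producing k new channels), again an illustration rather than the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseLayer(nn.Module):
    """Pre-activation dense layer with growth rate k."""
    def __init__(self, in_channels, k):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, 4 * k, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(4 * k)
        self.conv2 = nn.Conv2d(4 * k, k, 3, padding=1, bias=False)

    def forward(self, features):
        x = torch.cat(features, dim=1)  # concatenation of previous features
        x = self.conv1(F.relu(self.bn1(x)))
        return self.conv2(F.relu(self.bn2(x)))  # k new feature channels

k = 32
feats = [torch.randn(2, 64, 56, 56)]
for _ in range(3):
    feats.append(DenseLayer(sum(f.shape[1] for f in feats), k)(feats))

# DenseANN169 taps a linear anytime head every 14 dense layers,
# starting at layer 12 (1-based): layers 12, 26, 40, 54, 68 and 82.
pred_layers = list(range(12, 83, 14))
```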
1708.06733 | 57 | [6] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, "Mastering the game of Go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484–489, 01 2016. [Online]. Available: http://dx.doi.org/10.1038/nature16961
[7] A. Karpathy, "What I learned from competing against a ConvNet on ImageNet," blog post, what-i-learned-from-competing-against-a-convnet-on-imagenet/, 2014.
[8] G. Chen, T. X. Han, Z. He, R. Kays, and T. Forrester, "Deep convolutional neural network based species recognition for wild animal monitoring," in Image Processing (ICIP), 2014 IEEE International Conference on. | 1708.06733#57 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06832 | 57 | MSDNet on ILSVRC. MSDNet38 is described in the appendix of (Huang et al., 2017a). We set the four blocks to have 10, 9, 10 and 9 layers, and drop the feature maps of the finest resolution after each block, as suggested in the original paper. We successfully reproduced the published result of a 24.3% error rate on ILSVRC using our Tensorflow implementation. We used the original published results for MSDNet38+CONST in the main text. We use MSDNet32, which has four blocks of 6, 6, 10, and 10 layers, for the small network that uses AdaLoss. We predict using MSDNet32 every seven layers, starting at the fourth layer (1-based indexing).
| 1708.06832#57 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNscan achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | [
{
"id": "1711.09485"
},
{
"id": "1711.11503"
}
] |
1708.06733 | 58 | [9] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, "Deepdriving: Learning affordance for direct perception in autonomous driving," in Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ser. ICCV '15. Washington, DC, USA: IEEE Computer Society, 2015, pp. 2722-2730. [Online]. Available: http://dx.doi.org/10.1109/ICCV.2015.312
[10] Google, Inc., "Google Cloud Machine Learning Engine," https://cloud.google.com/ml-engine/.
[11] Microsoft Corp., "Azure Batch AI Training," https://batchaitraining.azure.com/.
[12] Amazon.com, Inc., âDeep Learning AMI Amazon Linux Version.â
[13] K. Quach, "Cloud giants 'ran out' of fast GPUs for AI boffins," https://www.theregister.co.uk/2017/05/22/cloud_providers_ai_researchers/. | 1708.06733#58 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 59 | [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[15] K. Simonyan and A. Zisserman, âVery deep convolutional networks for large-scale image recognition,â 2014.
[16] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," 2015.
[17] I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song, âRobust physical-world attacks on machine learning models,â 2017.
[18] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85-117, 2015. | 1708.06733#59 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 60 | [18] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85-117, 2015.
[19] A. Blum and R. L. Rivest, "Training a 3-node neural network is NP-complete," in Advances in Neural Information Processing Systems, 1989, pp. 494-501.
[20] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, 2010.
[21] X. Glorot, A. Bordes, and Y. Bengio, "Domain adaptation for large-scale sentiment classification: A deep learning approach," in Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp. 513-520. | 1708.06733#60 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 61 | [22] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, "CNN features off-the-shelf: An astounding baseline for recognition," in Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, ser. CVPRW '14. Washington, DC, USA: IEEE Computer Society, 2014, pp. 512-519. [Online]. Available: http://dx.doi.org/10.1109/CVPRW.2014.131
[23] F. Larsson, M. Felsberg, and P.-E. Forssen, "Correlating Fourier descriptors of local patches for road sign recognition," IET Computer Vision, vol. 5, no. 4, pp. 244-254, 2011.
[24] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. D. Tygar, "Adversarial machine learning," in Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, ser. AISec '11. New York, NY, USA: ACM, 2011, pp. 43-58. [Online]. Available: http://doi.acm.org/10.1145/2046684.2046692 | 1708.06733#61 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 62 | [25] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, "Adversarial classification," in Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '04. New York, NY, USA: ACM, 2004, pp. 99-108. [Online]. Available: http://doi.acm.org/10.1145/1014052.1014066
[26] D. Lowd and C. Meek, "Adversarial learning," in Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, ser. KDD '05. New York, NY, USA: ACM, 2005, pp. 641-647. [Online]. Available: http://doi.acm.org/10.1145/1081870.1081950
[27] ——, "Good word attacks on statistical spam filters." in Proceedings of the Conference on Email and Anti-Spam (CEAS), 2005.
[28] G. L. Wittel and S. F. Wu, âOn Attacking Statistical Spam Filters,â in Proceedings of the Conference on Email and Anti-Spam (CEAS), Mountain View, CA, USA, 2004. | 1708.06733#62 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 63 | [29] J. Newsome, B. Karp, and D. Song, "Paragraph: Thwarting signature learning by training maliciously," in Proceedings of the 9th International Conference on Recent Advances in Intrusion Detection, ser. RAID '06. Berlin, Heidelberg: Springer-Verlag, 2006, pp. 81-105. [Online]. Available: http://dx.doi.org/10.1007/11856214_5
[30] S. P. Chung and A. K. Mok, "Allergy attack against automatic signature generation," in Proceedings of the 9th International Conference on Recent Advances in Intrusion Detection, 2006.
[31] ——, "Advanced allergy attacks: Does a corpus really help," in Proceedings of the 10th International Conference on Recent Advances in Intrusion Detection, 2007.
[32] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," 2013.
[33] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," 2014. | 1708.06733#63 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 64 | [33] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," 2014.
[34] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against machine learning," 2016.
[35] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," 2016.
[36] S. Shen, S. Tople, and P. Saxena, "Auror: Defending against poisoning attacks in collaborative deep learning systems," in Proceedings of the 32nd Annual Conference on Computer Security Applications, ser. ACSAC '16. New York, NY, USA: ACM, 2016, pp. 508-519. [Online]. Available: http://doi.acm.org/10.1145/2991079.2991125 | 1708.06733#64 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 65 | [37] Y. LeCun, L. Jackel, L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard et al., "Learning algorithms for classification: A comparison on handwritten digit recognition," Neural Networks: The Statistical Mechanics Perspective, vol. 261, p. 276, 1995.
[38] Y. Zhang, P. Liang, and M. J. Wainwright, "Convexified convolutional neural networks," arXiv preprint arXiv:1609.01000, 2016.
[39] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, 2015, pp. 91-99.
[40] A. Møgelmose, D. Liu, and M. M. Trivedi, "Traffic sign detection for US roads: Remaining challenges and a case for tracking," in Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on. | 1708.06733#65 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 66 | [41] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, "Decaf: A deep convolutional activation feature for generic visual recognition," in International Conference on Machine Learning, 2014, pp. 647-655.
[42] "Transfer learning and fine-tuning convolutional neural networks," CS231n Lecture Notes; http://cs231n.github.io/transfer-learning/.
[43] "Caffe Model Zoo," https://github.com/BVLC/caffe/wiki/Model-Zoo.
[44] Y. Zhu, C. Zhang, D. Zhou, X. Wang, X. Bai, and W. Liu, "Traffic sign detection and recognition using fully convolutional network guided proposals," Neurocomputing, vol. 214, pp. 758-766, 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S092523121630741X | 1708.06733#66 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
[45] S. Ruder, âTransfer learning - machine learningâs next frontier,â http: //ruder.io/transfer-learning/. | 1708.06733#66 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.06733 | 67 | [45] S. Ruder, "Transfer learning - machine learning's next frontier," http://ruder.io/transfer-learning/.
[46] F. Yu, "A comprehensive guide to fine-tuning deep learning models in Keras," https://flyyufelix.github.io/2016/10/03/fine-tuning-in-keras-part1.html.
[47] "Network in Network Imagenet Model," https://gist.github.com/mavenlin/d802a5849de39225bcc6.
[48] "Caffe models in TensorFlow," https://github.com/ethereon/caffe-tensorflow.
[49] "Caffe to Keras converter," https://github.com/qxcv/caffe2keras.
[50] "Convert models from Caffe to Theano format," https://github.com/kencoken/caffe-model-convert.
[51] Apple Inc., "Converting trained models to Core ML," https://developer.apple.com/documentation/coreml/converting_trained_models_to_core_ml. | 1708.06733#67 | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain | Deep learning-based techniques have achieved state-of-the-art performance on
a wide variety of recognition and classification tasks. However, these networks
are typically computationally expensive to train, requiring weeks of
computation on many GPUs; as a result, many users outsource the training
procedure to the cloud or rely on pre-trained models that are then fine-tuned
for a specific task. In this paper we show that outsourced training introduces
new security risks: an adversary can create a maliciously trained network (a
backdoored neural network, or a \emph{BadNet}) that has state-of-the-art
performance on the user's training and validation samples, but behaves badly on
specific attacker-chosen inputs. We first explore the properties of BadNets in
a toy example, by creating a backdoored handwritten digit classifier. Next, we
demonstrate backdoors in a more realistic scenario by creating a U.S. street
sign classifier that identifies stop signs as speed limits when a special
sticker is added to the stop sign; we then show in addition that the backdoor
in our US street sign detector can persist even if the network is later
retrained for another task and cause a drop in accuracy of {25}\% on average
when the backdoor trigger is present. These results demonstrate that backdoors
in neural networks are both powerful and---because the behavior of neural
networks is difficult to explicate---stealthy. This work provides motivation
for further research into techniques for verifying and inspecting neural
networks, just as we have developed tools for verifying and debugging software. | http://arxiv.org/pdf/1708.06733 | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg | cs.CR, cs.LG | null | null | cs.CR | 20170822 | 20190311 | [
{
"id": "1609.01000"
}
] |
1708.05552 | 0 | arXiv:1708.05552v3 [cs.CV] 14 May 2018
# Practical Block-wise Neural Network Architecture Generation
Zhao Zhong1,3∗, Junjie Yan2, Wei Wu2, Jing Shao2, Cheng-Lin Liu1,3,4 1National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
# 2 SenseTime Research
# 3 University of Chinese Academy of Sciences
4 CAS Center for Excellence of Brain Science and Intelligence Technology Email: {zhao.zhong, liucl}@nlpr.ia.ac.cn, {yanjunjie, wuwei, shaojing}@sensetime.com
# Abstract | 1708.05552#0 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 1 | # Abstract
Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification; additionally, the best network generated by BlockQNN achieves 3.54% top-1 error rate on CIFAR-10, which beats all existing auto-generated networks; (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks, which only spends 3 days with 32 GPUs; and (3) moreover, it has strong generalizability in that the network built on CIFAR also performs well on the larger-scale ImageNet dataset.
# 1. Introduction | 1708.05552#1 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 2 | # 1. Introduction
During the last decades, Convolutional Neural Networks (CNNs) have shown remarkable potential in almost every field of the computer vision society [17]. For example, thanks to the network evolution from AlexNet [16], VGG [25], Inception [30] to ResNet [10], the top-5 performance on the ImageNet challenge steadily increases from 83.6% to 96.43%. However, as the performance gain usually requires an increasing network capacity, a high-performance network architecture generally possesses a tremendous number of possible configurations about the number of layers, hyperparameters in each layer and the type of each layer. It is hence infeasible for manually exhaustive search, and the design of successful hand-crafted networks heavily relies on expert knowledge and experience. Therefore, constructing the network in a smart and automatic manner remains an open problem.

∗This work was done when Zhao Zhong worked as an intern at SenseTime Research. | 1708.05552#2 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 3 | Although some recent works have attempted computer-aided or automated network design [2, 37], several challenges are still unsolved: (1) Modern neural networks always consist of hundreds of convolutional layers, each of which has numerous options in type and hyperparameters. This makes a huge search space and heavy computational costs for network generation. (2) A typically designed network is usually limited to a specific dataset or task, and thus is hard to transfer to other tasks or to generalize to another dataset with different input data sizes. In this paper, we provide a solution to the aforementioned challenges by a novel fast Q-learning framework, called BlockQNN, to automatically design the network architecture, as shown in Fig. 1. | 1708.05552#3 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 4 | Particularly, to make the network generation efficient and generalizable, we introduce block-wise network generation, i.e., we construct the network architecture as a flexible stack of personalized blocks rather than tedious per-layer network piling. Indeed, most modern CNN architectures such as Inception [30, 14, 31] and the ResNet series [10, 11] are assembled as stacks of basic block structures. For example, the inception and residual blocks shown in Fig. 1 are repeatedly concatenated to construct the entire network. With such a block-wise network architecture, the generated network owns a powerful generalization to other task domains or different datasets. | 1708.05552#4 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 5 | In comparison to previous methods like NAS [37] and MetaQNN [2], as depicted in Fig. 1, we present a readier and more elegant model generation method that is specifically designed for block-wise generation. Motivated by the unsupervised reinforcement learning paradigm, we employ the well-known Q-learning [33] with experience re- | 1708.05552#5 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 6 | Figure 1. The proposed BlockQNN (right, in red box) compared with the hand-crafted networks marked in yellow and the existing auto-generated networks in green. Automatically generating plain networks [2, 37], marked in blue, needs large computational costs for searching optimal layer types and hyperparameters for each single layer, while the block-wise network heavily reduces the cost to searching structures only for one block. The entire network is then constructed by stacking the generated blocks. A similar block concept has demonstrated its superiority in hand-crafted networks, such as the inception block and residue block marked in red. | 1708.05552#6 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 7 | play [19] and an epsilon-greedy strategy [21] to effectively and efficiently search the optimal block structure. The network block is constructed by the learning agent, which is trained sequentially to choose component layers. Afterwards we stack the block to construct the whole auto-generated network. Moreover, we propose an early stop strategy to enable efficient search with fast convergence. A novel reward function is designed to ensure that the accuracy of the early-stopped network has positive correlation with that of the converged network. We can pick up good blocks in reduced training time using this property. With this acceleration strategy, we can construct a Q-learning agent to learn the optimal block-wise network structures for a given task with limited resources (e.g. a few GPUs or a short time period). The generated architectures are thus succinct and have powerful generalization ability compared to the networks generated by other automatic network generation methods.
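A minimal sketch of the epsilon-greedy Q-learning step described above, assuming a tabular agent; the state/action encoding and the hyperparameters alpha, gamma, and epsilon are illustrative placeholders, not the paper's exact settings:

```python
import random
from collections import defaultdict

# Q[(state, action)]: value of appending a candidate layer (action)
# to a partially built block (state).
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 1.0, 0.2   # placeholder hyperparameters

def choose_action(state, actions):
    """Epsilon-greedy choice of the next component layer."""
    if random.random() < epsilon:
        return random.choice(actions)                   # explore
    return max(actions, key=lambda a: Q[(state, a)])    # exploit

def q_update(state, action, reward, next_state, next_actions):
    """One-step Q-learning update after the (early-stopped) network is scored."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

In this setting the reward would be derived from the validation accuracy of the early-stopped network, and past transitions would be resampled from a memory buffer, in the spirit of experience replay.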
The proposed block-wise network generation brings a few advantages as follows:
• Effective. The automatically generated networks present comparable performances to those of hand-crafted networks with human expertise. The proposed method is also superior to the existing works and achieves a state-of-the-art performance on CIFAR-10 with a 3.54% error rate. | 1708.05552#7 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 8 | • Efficient. We are the first to consider a block-wise setup in automatic network generation. Combined with the proposed early stop strategy, the proposed method results in a fast search process. The network generation for the CIFAR task reaches convergence with only 32 GPUs in 3 days, which is much more efficient than that by NAS [37] with 800 GPUs in 28 days. • Transferable. It offers surprisingly superior transferable ability: the network generated for CIFAR can
be transferred to ImageNet with little modification but still achieve outstanding performance.
# 2. Related Work | 1708.05552#8 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 9 | be transferred to ImageNet with little modification but still achieve outstanding performance.
# 2. Related Work
Early works, from the 1980s, have made efforts on automating neural network design, often searching for good architectures by the genetic algorithm or other evolutionary algorithms [24, 27, 26, 28, 23, 7, 34]. Nevertheless, these works, to our best knowledge, cannot perform competitively compared with hand-crafted networks. Recent works, i.e. Neural Architecture Search (NAS) [37] and MetaQNN [2], adopted reinforcement learning to automatically search a good network architecture. Although they can yield good performance on small datasets such as CIFAR-10 and CIFAR-100, the direct use of MetaQNN or NAS for architecture design on big datasets like ImageNet [6] is computationally expensive via searching in a huge space. Besides, the network generated by this kind of method is task-specific or dataset-specific, that is, it cannot be well transferred to other tasks or to datasets with different input data sizes. For example, the network designed for CIFAR-10 cannot be generalized to ImageNet. | 1708.05552#9 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 10 | Instead, our approach aims to design the network block architecture by an efficient search method with a distributed asynchronous Q-learning framework as well as an early-stop strategy. The block design conception follows the modern convolutional neural networks such as Inception [30, 14, 31] and ResNet [10, 11]. The inception-based networks construct the inception blocks via a hand-crafted multi-level feature extractor strategy by computing 1 × 1, 3 × 3, and 5 × 5 convolutions, while ResNet uses residue blocks with shortcut connections to make it easier to represent the identity mapping, which allows a very deep network. The blocks automatically generated by our
Name             Index  Type  Kernel Size  Pred1  Pred2
Convolution      T      1     1, 3, 5      K      0
Max Pooling      T      2     1, 3         K      0
Average Pooling  T      3     1, 3         K      0
Identity         T      4     0            K      0
Elemental Add    T      5     0            K      K
Concat           T      6     0            K      K
Terminal         T      7     0            0      0
| 1708.05552#10 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
performs competitive results in comparison to the hand-crafted state-of-the-art
networks on image classification, additionally, the best network generated by
BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing
auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of
the search space in designing networks which only spends 3 days with 32 GPUs,
and (3) moreover, it has strong generalizability that the network built on
CIFAR also performs well on a larger-scale ImageNet dataset. | http://arxiv.org/pdf/1708.05552 | Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | cs.CV, cs.LG | Accepted to CVPR 2018 | null | cs.CV | 20170818 | 20180514 | [] |
1708.05552 | 11 | Table 1. Network Structure Code Space. The space contains seven types of commonly used layers. Layer index stands for the position of the current layer in a block; the range of this parameter is set to be T = {1, 2, 3, ..., max layer index}. Three kernel sizes are considered for the convolution layer and two for the pooling layers. Pred1 and Pred2 refer to the predecessor parameters, which represent the indices of a layer's predecessors; the allowed range is K = {1, 2, ..., current layer index − 1}.
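To make the 5-tuple encoding concrete, here is a small sketch that represents NSC entries and applies a simplified range check following Table 1; a predecessor index of 0 is read here as the block input, and the helper names are ours:

```python
from collections import namedtuple

# One Network Structure Code: (index, type, kernel, pred1, pred2).
NSC = namedtuple("NSC", ["index", "type", "kernel", "pred1", "pred2"])

# Layer types in the order of Table 1.
CONV, MAX_POOL, AVG_POOL, IDENTITY, ADD, CONCAT, TERMINAL = range(1, 8)

def is_valid(c):
    """Simplified check of the Table 1 ranges (pred = 0 means block input)."""
    pred_ok = 0 <= c.pred1 < c.index and 0 <= c.pred2 < c.index
    if c.type == CONV:
        return c.kernel in (1, 3, 5) and pred_ok
    if c.type in (MAX_POOL, AVG_POOL):
        return c.kernel in (1, 3) and pred_ok
    if c.type == TERMINAL:
        return c.kernel == c.pred1 == c.pred2 == 0
    return c.kernel == 0 and pred_ok     # identity / add / concat

# The shortcut-block codes listed in Figure 2 below, checked entry by entry:
shortcut = [NSC(1, 4, 0, 0, 0), NSC(2, 1, 3, 1, 0), NSC(3, 1, 3, 2, 0),
            NSC(4, 5, 0, 1, 3), NSC(5, 7, 0, 0, 0)]
assert all(is_valid(c) for c in shortcut)
```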
Codes (multi-branch block, left) = [(1,4,0,0,0), (2,1,1,1,0), (3,1,3,2,0), (4,1,1,1,0), (5,1,5,4,0), (6,6,0,3,5), (7,2,3,1,0), (8,1,1,7,0), (9,6,0,6,8), (10,7,0,0,0)]
Codes (shortcut block, right) = [(1,4,0,0,0), (2,1,3,1,0), (3,1,3,2,0), (4,5,0,1,3), (5,7,0,0,0)] | 1708.05552#11 | Practical Block-wise Neural Network Architecture Generation | Convolutional neural networks have gained a remarkable success in computer
1708.05552 | 12 | Figure 2. Representative block exemplars with their network structure codes (NSC), respectively: the block with multi-branch connections (left) and the block with shortcut connections (right).
approach have similar structures, such as blocks containing shortcut connections and inception-like multi-branch combinations. We will discuss the details in Section 5.1.
Another body of related work includes hyper-parameter optimization [3], meta-learning [32] and learning-to-learn methods [12, 1]. However, the goal of these works is to use meta-data to improve the performance of existing algorithms, such as finding the optimal learning rate of optimization methods or the optimal number of hidden layers to construct the network. In this paper, we focus on learning the entire topological architecture of network blocks to improve the performance.
# 3. Methodology
# 3.1. Convolutional Neural Network Blocks
Modern CNNs, e.g. Inception and Resnet, are designed by stacking several blocks, each of which shares
Figure 3. Auto-generated networks on CIFAR-10 (left) and ImageNet (right). Each network starts with a few convolution layers to learn low-level features, followed by multiple repeated blocks with several pooling layers inserted to downsample. | 1708.05552#12 |
1708.05552 | 14 | As a CNN contains a feed-forward computation procedure, we represent it by a directed acyclic graph (DAG), where each node corresponds to a layer in the CNN while directed edges stand for data flow from one layer to another. To turn such a graph into a uniform representation, we propose a novel layer representation called Network Structure Code (NSC), as shown in Table 1. Each block is then depicted by a set of 5-D NSC vectors. In NSC, the first three numbers stand for the layer index, operation type and kernel size. The last two are predecessor parameters, which refer to the positions of a layer's predecessor layers in the structure codes. The second predecessor (Pred2) is set for layers owning two predecessors; for a layer with only one predecessor, Pred2 is set to zero. This design is motivated by the current powerful hand-crafted networks like Inception and Resnet, which own their special block structures. This kind of block structure shares similar properties, such as containing more complex connections, e.g. shortcut connections or multi-branch connections, than the simple connections of a plain network like AlexNet. Thus, the | 1708.05552#14 |
1708.05552 | 16 | [Figure 4 graphic: (a) state-transition diagram over NSC vectors, e.g. (1,1,1,0,0) → (2,1,3,1,0) → ..., with terminal codes (t,7,0,0,0); (b) the generated block; (c) the feedback loop of state, action, and validation-accuracy reward.]
Figure 4. Q-learning process illustration. (a) The state transition process under different action choices. The block structure in (b) is generated by the red solid line in (a). (c) The flow chart of the Q-learning procedure.
laration in Resnet [11], refers to a Pre-activation Convolutional Cell (PCC) with three components, i.e. ReLU, Convolution and Batch Normalization. This results in a smaller search space than searching the three components separately; hence, with the PCC, we can get a better initialization for searching and generate optimal block structures with a quick training process. | 1708.05552#16 |
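As an illustration of the PCC, a minimal PyTorch sketch (our own module, not the authors' code) is:

```python
import torch
import torch.nn as nn

class PCC(nn.Module):
    """Pre-activation Convolutional Cell: ReLU -> Conv -> BatchNorm.

    One NSC convolution entry (type 1) maps to one PCC, so the agent
    searches a single unit instead of three separate layers.
    """
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.cell = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),  # keep spatial size
            nn.BatchNorm2d(out_channels),
        )

    def forward(self, x):
        return self.cell(x)

x = torch.randn(1, 16, 32, 32)
print(PCC(16, 32, kernel_size=3)(x).shape)  # torch.Size([1, 32, 32, 32])
```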
1708.05552 | 17 | Based on the above defined blocks, we construct the complete network by stacking these block structures sequentially, which turns a common plain network into its counterpart block version. Two representative auto-generated networks for the CIFAR and ImageNet tasks are shown in Fig. 3. There is no down-sampling operation within each block; we perform down-sampling directly by the pooling layers. If the size of the feature map is halved by a pooling operation, the block's weights will be doubled. The architecture for ImageNet contains more pooling layers than that for CIFAR because of their different input sizes, i.e. 224 × 224 for ImageNet and 32 × 32 for CIFAR. More importantly, the blocks can be repeated any N times to fulfill different demands, and the blocks can even be placed in other manners, such as inserting the block into the Network-in-Network [20] framework or setting shortcut connections between different blocks. | 1708.05552#17 |
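A CIFAR-style skeleton in the spirit of Fig. 3 could then be assembled as below; build_network and its default widths/repeats are our own illustrative choices, with block_fn standing for any searched block:

```python
import torch.nn as nn

def build_network(block_fn, widths=(32, 64, 128), repeats=4, num_classes=10):
    """Stack searched blocks into a CIFAR-style network (cf. Fig. 3).

    block_fn(in_ch, out_ch) -> nn.Module is the searched block; down-sampling
    happens only in the pooling layers between stages, and the channel width
    is doubled whenever the feature map is halved.
    """
    layers = [nn.Conv2d(3, widths[0], 3, padding=1)]  # stem convolution
    in_ch = widths[0]
    for stage, w in enumerate(widths):
        for _ in range(repeats):                      # repeat the block N times
            layers.append(block_fn(in_ch, w))
            in_ch = w
        if stage < len(widths) - 1:
            layers.append(nn.MaxPool2d(2))            # halve the feature map
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(in_ch, num_classes)]
    return nn.Sequential(*layers)
```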
1708.05552 | 18 | to the defined NSC set with a limited number of choices, both the state and action space are thus finite and discrete, ensuring a relatively small search space. The state transition process (s_t, a(s_t)) → s_{t+1} is shown in Fig. 4(a), where t refers to the current layer. The block example in Fig. 4(b) is generated by the red solid lines in Fig. 4(a). The learning agent is given the task of sequentially picking the NSC of a block. The structure of a block can be considered as an action selection trajectory τ_{a_{1:T}}, i.e. a sequence of NSCs. We model the layer selection process as a Markov Decision Process with the assumption that a well-performing layer in one block should also perform well in another block [2]. To find the optimal architecture, we ask our agent to maximize its expected reward over all possible trajectories, denoted by R_τ,
R_\tau = \mathbb{E}_{P(\tau_{a_{1:T}})}[R], \qquad (1) | 1708.05552#18 |
1708.05552 | 19 | R_\tau = \mathbb{E}_{P(\tau_{a_{1:T}})}[R], \qquad (1)
where R is the cumulative reward. For this maximization problem, it is usual to solve it to optimality with the recursive Bellman Equation. Given a state s_t ∈ S and a subsequent action a ∈ A(s_t), we define the maximum total expected reward to be Q^*(s_t, a), which is known as the Q-value of the state-action pair. The recursive Bellman Equation can then be written as
# 3.2. Designing Network Blocks With Q-Learning
Albeit we squeeze the search space of the entire network design by focusing on constructing network blocks, there is still a large number of possible structures to explore. Therefore, we employ reinforcement learning rather than random sampling for automatic design. Our method is based on Q-learning, a kind of reinforcement learning, which concerns how an agent ought to take actions so as to maximize the cumulative reward. The Q-learning model consists of an agent, states and a set of actions.
In this paper, the state s ∈ S represents the status of the current layer, which is defined as a Network Structure Code (NSC) as described in Section 3.1, i.e. a 5-D vector {layer index, layer type, kernel size, pred1, pred2}. The action a ∈ A is the decision for the next successive layer. Thanks | 1708.05552#19 |
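To make the discrete action space concrete, the sketch below enumerates the candidate NSC vectors for the next layer and picks one epsilon-greedily; the function names and the cap max_layers are our own assumptions, with the type/kernel ranges taken from Table 1:

```python
import random

def actions(next_index, max_layers=23):
    """Yield all candidate NSC vectors (index, type, kernel, pred1, pred2)
    for layer `next_index`; max_layers is an assumed cap, not the paper's."""
    if next_index > max_layers:
        yield (next_index, 7, 0, 0, 0)            # forced Terminal
        return
    preds = range(next_index)                     # 0 denotes the block input
    for k in (1, 3, 5):                           # Convolution (type 1)
        for p in preds:
            yield (next_index, 1, k, p, 0)
    for t in (2, 3):                              # Max / Average Pooling
        for k in (1, 3):
            for p in preds:
                yield (next_index, t, k, p, 0)
    for p in preds:                               # Identity
        yield (next_index, 4, 0, p, 0)
    for t in (5, 6):                              # Add / Concat need two inputs
        for p1 in preds:
            for p2 in preds:
                if p1 != p2:
                    yield (next_index, t, 0, p1, p2)
    yield (next_index, 7, 0, 0, 0)                # Terminal ends the block

def epsilon_greedy(state, Q, epsilon, next_index):
    """Epsilon-greedy action pick over A(s_t), with Q keyed by (state, action)."""
    cand = list(actions(next_index))
    if random.random() < epsilon:
        return random.choice(cand)                          # explore
    return max(cand, key=lambda a: Q.get((state, a), 0.0))  # exploit
```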
1708.05552 | 20 | Q^*(s_t, a) = \mathbb{E}_{s_{t+1}|s_t,a}\big[\mathbb{E}_{r|s_t,a,s_{t+1}}[r|s_t, a, s_{t+1}] + \gamma \max_{a' \in A(s_{t+1})} Q^*(s_{t+1}, a')\big]. \qquad (2)
A common empirical way to solve for the above quantity is to formulate it as an iterative update:
Q(s_T, a) = 0, \qquad (3)

Q(s_{T-1}, a_T) = (1 - \alpha) Q(s_{T-1}, a_T) + \alpha r_T, \qquad (4)

Q(s_t, a) = (1 - \alpha) Q(s_t, a) + \alpha \big[ r_t + \gamma \max_{a' \in A(s_{t+1})} Q(s_{t+1}, a') \big], \quad t \in \{1, 2, ..., T-2\}, \qquad (5)
where α is the learning rate, which determines how the newly acquired information overrides the old information, and γ is the discount factor, which measures the importance of future rewards. r_t denotes the intermediate reward observed
[Figure 5 plot: "Q-learning performance with different intermediate reward"; curves: ignore r_t vs. shaped reward r_t; axes: Accuracy (%) over Iteration (batch).]
Figure 5. Comparison results of Q-learning with and without the shaped intermediate reward r_t. With our shaped reward, the learning process converges faster than without it, starting from the same exploration. | 1708.05552#20 |
1708.05552 | 21 | for the current state s_t, and s_T refers to the final state, i.e. terminal layers. r_T is the validation accuracy of the corresponding network trained to convergence on the training set for a_T, i.e. the action leading to the final state. Since the reward r_t cannot be explicitly measured in our task, we use reward shaping [22] to speed up training. The shaped intermediate reward is defined as:
r_t = \frac{r_T}{T}. \qquad (6)
Previous works [2] ignore these rewards in the iterative process, i.e. set them to zero, which may cause slow convergence at the beginning. This is known as the temporal credit assignment problem, which makes RL time-consuming [29]. In this case, the Q-value of s_T is much higher than the others in the early stage of training, and thus leads the agent to prefer to stop searching at the very beginning, i.e. to build small blocks with fewer layers. We show a comparison result in Fig. 5: the learning process of the agent with our shaped reward r_t converges much faster than the previous method. | 1708.05552#21 |
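Combining Eqs. (3)-(6), one backward sweep of the tabular update over a sampled block trajectory can be sketched as follows (our own minimal illustration; alpha, gamma and the trajectory layout are assumptions, not the paper's settings):

```python
from collections import defaultdict

Q = defaultdict(float)  # tabular Q-values, zero-initialized (Eq. 3)

def update_trajectory(trajectory, r_T, alpha=0.01, gamma=1.0):
    """Apply Eqs. (4)-(5) backwards over one trajectory of T transitions.

    trajectory: list of (state, action, next_state, next_actions);
    r_T is the early-stop validation accuracy of the trained network,
    and the shaped intermediate reward of Eq. (6) is r_t = r_T / T.
    """
    T = len(trajectory)
    r_t = r_T / T
    s, a, _, _ = trajectory[-1]                    # last action reaches s_T
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * r_T           # Eq. (4)
    for s, a, s_next, next_actions in reversed(trajectory[:-1]):
        best_next = max(Q[(s_next, a2)] for a2 in next_actions)
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] \
                    + alpha * (r_t + gamma * best_next)         # Eq. (5)
```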
1708.05552 | 22 | We summarize the learning procedure in Fig. 4(c). The agent first samples a set of structure codes to build the block architecture, based on which the entire network is constructed by stacking these blocks sequentially. We then train the generated network on a certain task, and the validation accuracy is regarded as the reward to update the Q-value. Afterwards, the agent picks another set of structure codes to get a better block structure.
# 3.3. Early Stop Strategy
Introducing block-wise generation indeed increases the efficiency. However, it is still time-consuming to complete the search process. To further accelerate the learning process, we introduce an early stop strategy. As is well known, stopping the training process early might result in poor accuracy. Fig. 6 shows an example, where the early-stop accuracy (yellow line) is much lower than the final accuracy (orange line), which means that some good blocks unfortunately perform worse than bad blocks when training is stopped
[Figure 6 plot: Early Stop ACC and Final ACC per model, together with the redefined reward; curves: Early Stop ACC, Final ACC, FLOPs, Density; left axis: Accuracy (%), right axis: scalar for FLOPs and Density; x-axis: Model (block).] | 1708.05552#22 |
1708.05552 | 23 | Figure 6. The performance of early-stop training is poorer than the final accuracy of a complete training. With the help of FLOPs and Density, the gap between the redefined reward function and the final accuracy is squeezed.
early. In the meanwhile, we notice that the FLOPs and density of the corresponding blocks have a negative correlation with the final accuracy. Thus, we redefine the reward function as
\text{reward} = \text{ACC}_{\text{EarlyStop}} - \mu \log(\text{FLOPs}) - \rho \log(\text{Density}), \qquad (7)
where FLOPs [8] refer to an estimate of the computational complexity of the block, and Density is the edge number divided by the node number in the DAG of the block. There are two hyperparameters, μ and ρ, to balance the weights of FLOPs and Density. With the redefined reward function, the reward is more relevant to the final accuracy.
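In code, Eq. (7) is a one-liner; the sketch below (function names and the μ, ρ values are placeholders, not the paper's tuned hyperparameters) also derives Density from a decoded block DAG:

```python
import math

def density(preds):
    """Edge count divided by node count in the block DAG (cf. Eq. 7)."""
    nodes = set(preds) | {p for ps in preds.values() for p in ps}
    edges = sum(len(ps) for ps in preds.values())
    return edges / len(nodes)

def redefined_reward(acc_early_stop, flops, dens, mu=0.01, rho=0.1):
    """reward = ACC_EarlyStop - mu * log(FLOPs) - rho * log(Density)."""
    return acc_early_stop - mu * math.log(flops) - rho * math.log(dens)

preds = {1: [0], 2: [1], 3: [2], 4: [1, 3]}   # shortcut block of Fig. 2
print(redefined_reward(0.92, flops=3.2e8, dens=density(preds)))
```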
With this early-stop strategy and the small search space of network blocks, it costs only 3 days to complete the search process with 32 GPUs, which is superior to [37], which spends 28 days with 800 GPUs to achieve the same performance.
# 4. Framework and Training Details
# 4.1. Distributed Asynchronous Framework | 1708.05552#23 |