Deceiving Google's Perspective API Built for Detecting Toxic Comments

Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran

# ABSTRACT

Social media platforms provide an environment where people can freely engage in discussions. Unfortunately, they also enable several problems, such as online harassment. Recently, Google and Jigsaw started a project called Perspective, which uses machine learning to automatically detect toxic language. A demonstration website has also been launched, which allows anyone to type a phrase into the interface and instantaneously see its toxicity score [1]. In this paper, we propose an attack on the Perspective toxic detection system based on adversarial examples. We show that an adversary can subtly modify a highly toxic phrase in a way that leads the system to assign it a significantly lower toxicity score. We apply the attack to the sample phrases provided on the Perspective website and show that we can consistently reduce the toxicity scores to the level of non-toxic phrases. The existence of such adversarial examples is very harmful for toxic detection systems and seriously undermines their usability.
Google and Jigsaw have launched Perspective as a tool for better discussions online [17]. The API uses machine learning models to score the toxicity of an input text, where toxic is defined as "a rude, disrespectful, or unreasonable comment that is likely to make one leave a discussion."

Google and Jigsaw developed the measurement tool by taking millions of comments from different publishers and then asking panels of ten people to rate the comments on a scale from "very toxic" to "very healthy" contribution. The resulting judgments provided a large set of training examples for the machine learning model.
Jigsaw has partnered with online communities and publishers to implement the toxicity measurement system. Wikipedia uses it to perform a study of its editorial discussion pages [3], and The New York Times is planning to use it as a first pass over all its comments, automatically flagging abusive ones for its team of human moderators [11]. The API outputs the scores in real time, so that publishers can integrate it into their websites and show toxicity ratings to commenters even while they type [5].
B. Adversarial Examples for Learning Systems
Machine learning models are generally designed to yield the best performance on clean data and in benign settings. As a result, they are subject to attacks in adversarial scenarios [12]–[14]. One type of vulnerability of machine learning algorithms is that an adversary can change the algorithm's prediction score by perturbing the input slightly, often in ways unnoticeable to humans. Such inputs are called adversarial examples [15].
Adversarial examples have been applied to models for different tasks, such as image classification [15], [18], [19], music content analysis [20] and malware classification [21]. In this work, we generate adversarial examples for a real-world text classification system. In the context of scoring toxicity, adversarial examples can be defined as modified phrases that contain the same highly abusive language as the original one, yet receive a significantly lower toxicity score from the model. In a similar work [22], the authors presented a method for obfuscating gender in social media writing. The proposed method modifies the text such that the algorithm classifies the writer's gender as a certain target gender, under limited knowledge of the classifier and while preserving the text's fluency and meaning. The modified text is not required to be adversarial, i.e., a human may also classify it as the target gender. In contrast, in the application of toxic text detection, the adversary intends to deceive the classifier while maintaining the abusive content of the text.

Recently, a website was launched to demonstrate Perspective; it allows anyone to type a phrase into the interface and instantaneously receive its toxicity score [1]. The website provides sample phrases for three categories of topics "that are often difficult to discuss online". The categories are 1) Climate Change, 2) Brexit and 3) US Election.
# III. THE PROPOSED ATTACKS

In this section, we demonstrate an attack on the Perspective toxic detection system based on adversarial examples. In particular, we show that an adversary can subtly modify a toxic phrase such that the model outputs a very low toxicity score for the modified phrase. The attack setting is as follows. The adversary possesses a phrase with toxic content and tries different perturbations of the words until she succeeds in significantly reducing the model's confidence that the phrase is toxic. Note that the adversary does not have access to the model or the training data, and can only query the model and obtain the toxicity score.
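The query loop below sketches this black-box setting. Here `toxicity_score` is a hypothetical stand-in for a call to the scoring interface (not an API the paper names), and the perturbation rules mirror the modification types reported in Table I below.

```python
def perturbations(word):
    """Candidate misspellings of a toxic word: insert a dot, repeat a
    letter, swap two adjacent letters, or space out all letters."""
    for i in range(1, len(word)):
        yield word[:i] + "." + word[i:]                        # add a dot
    for i in range(len(word)):
        yield word[:i] + word[i] + word[i:]                    # repeat a letter
    for i in range(len(word) - 1):
        yield word[:i] + word[i + 1] + word[i] + word[i + 2:]  # swap letters
    yield " ".join(word)                                       # space out

def attack(phrase, toxic_words, toxicity_score, target=0.2):
    """Greedily perturb each toxic word until the black-box score
    drops below the target level; one query per candidate."""
    for word in toxic_words:
        for candidate in perturbations(word):
            modified = phrase.replace(word, candidate)
            if toxicity_score(modified) < target:
                phrase = modified
                break
    return phrase
```

Since the adversary only observes the returned score, the number of queries is the natural cost metric; the scores in Table I suggest that a handful of tries per word suffices.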
Table I demonstrates the attack on the sample phrases provided by the Perspective website. The first column shows the original phrases along with their toxicity scores, and the second column shows the adversarially modified phrases and their corresponding toxicity scores.¹ For a better demonstration of the attack, we chose phrases with different toxic words and also introduced different types of errors, rather than searching for the single error type that would yield the lowest toxicity score. The boldface words are the toxic words that the adversary has modified. The modifications are adding a dot between two letters, adding spaces between all letters, or misspelling the word (repeating one letter twice or swapping two letters). As can be seen, we can consistently reduce the toxicity score to the level of benign phrases by subtly modifying the toxic words.

TABLE I: The attack on the sample phrases provided by the Perspective website (toxicity scores in parentheses; boldface marks the modified toxic words).

| Original Phrase (Toxicity Score) | Modified Phrase (Toxicity Score) |
|---|---|
| Climate change is happening and it's not changing in our favor. If you think differently you're an idiot. (84%) | Climate change is happening and it's not changing in our favor. If you think differently you're an **idiiot**. (20%) |
| They're stupid, it's getting warmer, we should enjoy it while it lasts (86%) | They're **st.upid**, it's getting warmer, we should enjoy it while it lasts (2%) |
| They are liberal idiots who are uneducated (90%) | They are liberal **i.diots** who are **un.educated** (15%) |
| idiots. backward thinking people. nationalists. not accepting facts. susceptible to lies. (80%) | **idiiots**. backward thinking people. **nationaalists**. not accepting facts. susceptible to **l.ies**. (17%) |
| They are stupid and ignorant with no class (91%) | They are **st.upid** and … |
| It's stupid and wrong (89%) | … |
| If they voted for Hilary they are idiots (90%) | … |
| Anyone who voted for Trump is a moron (80%) | … |
| Screw you trump supporters (79%) | … |

¹The experiments were done on the interface of the Perspective website on Feb. 24, 2017.
Moreover, we observed that the adversarial perturbations transfer among different phrases: if a certain modification to a word reduces the toxicity score of one phrase, the same modification to that word is likely to reduce the toxicity score of other phrases as well. Using this property, an adversary can form a dictionary of adversarial perturbations for every word and significantly simplify the attack process.
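Because perturbations transfer, the search cost can be paid once per word. A minimal sketch of this amortized attack is below; the dictionary entries reuse modifications from Table I, and applying them requires no further queries.

```python
# Dictionary of perturbations previously found to lower the toxicity
# score (entries taken from Table I for illustration).
ADVERSARIAL_DICT = {
    "idiot": "idiiot",
    "stupid": "st.upid",
    "lies": "l.ies",
}

def rewrite(phrase):
    """Apply cached perturbations without issuing any new queries."""
    for word, perturbed in ADVERSARIAL_DICT.items():
        phrase = phrase.replace(word, perturbed)
    return phrase
```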
Through the experiments, we made the following observations:

• False alarms on benign phrases: the Perspective system also wrongly assigns high toxicity scores to apparently benign phrases. Table II demonstrates these false alarms on the same sample phrases as Table I. The first column shows the original phrases along with their toxicity scores, and the second column shows the negated phrases and the corresponding toxicity scores. The boldface words are the ones added to the toxic phrases. As can be seen, the system consistently fails to capture the inherent semantics of the modified phrases and wrongly assigns high toxicity scores to them.

TABLE II: False alarms on negated versions of the sample phrases (toxicity scores in parentheses; boldface marks the added words).

| Original Phrase (Toxicity Score) | Negated Phrase (Toxicity Score) |
|---|---|
| Climate change is happening and it's not changing in our favor. If you think differently you're an idiot (84%) | Climate change is happening and it's not changing in our favor. If you think differently you're **not** an idiot (73%) |
| They're stupid, it's getting warmer, we should enjoy it while it lasts (86%) | They're **not** stupid, it's getting warmer, we should enjoy it while it lasts (74%) |
| They are liberal idiots who are uneducated. (90%) | They are **not** liberal idiots who are uneducated. (83%) |
| idiots. backward thinking people. nationalists. not accepting facts. susceptible to lies. (80%) | **not** idiots. **not** backward thinking people. **not** nationalists. accepting facts. **not** susceptible to lies. (74%) |
| They are stupid and ignorant with no class (91%) | They are **not** stupid and ignorant with no class … |
| It's stupid and wrong (89%) | … |
| If they voted for Hilary they are idiots (90%) | … |
| Anyone who voted for Trump is a moron (80%) | … |
| Screw you trump supporters (79%) | … |
• Robustness to random misspellings: we observed that the system assigns a 34% toxicity score to most misspelled and random words. It is also somewhat robust to phrases that contain randomly modified toxic words.

• Vulnerability to poisoning attacks: the Perspective interface allows users to provide feedback on the toxicity scores of phrases, suggesting that the learning algorithm updates itself using the new data. This can expose the system to poisoning attacks, where an adversary modifies the training data (in this case, the labels) so that the model assigns low toxicity scores to certain phrases.
# IV. OPEN PROBLEMS IN DEFENSE METHODS

The developers of Perspective have mentioned that the system is in the early days of research and development, and that the experiments, models, and research data are published to explore the strengths and weaknesses of using machine learning as a tool for online discussion.

It remains an open problem how to make the Perspective system robust against such adversarial examples. Scoring the semantic toxicity of a phrase is clearly a very challenging task. In the following, we briefly review some possible approaches for improving the robustness of toxic detection systems:
• Adversarial training: in this approach, we generate adversarial examples during the training phase and train the model to assign them their original labels [18]. In the context of toxic detection systems, we would need to include differently modified versions of the toxic words in the training data. While this approach may improve the robustness of the system against adversarial examples, it does not seem practical to train the model on all variants of every word.

• Spell checking: many of the adversarial examples can be detected by first applying a spell-checking filter before the toxic detection system (a minimal sketch follows this list). This approach may, however, increase the false alarm rate.

• Blocking suspicious users for a period of time: the adversary needs to try different error patterns to eventually evade the toxic detection system. Once a user fails to pass the threshold a number of times, the system can block her for a while. This approach can pressure users into using toxic language less often.
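As a rough illustration of the spell-checking defense, a normalization pass can undo many character-level perturbations before scoring. The sketch below uses Python's standard difflib against a small lexicon of flagged words; the lexicon and the similarity cutoff are illustrative assumptions, not part of the paper.

```python
import difflib
import re

# Illustrative lexicon of words the toxicity model is sensitive to.
LEXICON = ["idiot", "stupid", "moron", "ignorant"]

def normalize(phrase):
    """Map obfuscated tokens back to their closest lexicon entry
    before handing the text to the toxicity scorer."""
    tokens = re.findall(r"[A-Za-z.]+", phrase.lower())
    restored = []
    for token in tokens:
        stripped = token.replace(".", "")  # undo the inserted-dot trick
        match = difflib.get_close_matches(stripped, LEXICON, n=1, cutoff=0.8)
        restored.append(match[0] if match else stripped)
    return " ".join(restored)

print(normalize("They are st.upid and ignorant"))  # -> they are stupid and ignorant
```

The trade-off noted above shows up directly here: lowering the cutoff catches more perturbations but maps more legitimate words onto toxic lexicon entries, raising the false alarm rate.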
# V. CONCLUSION
In this paper, we presented an attack on the recently released Google Perspective API built for detecting toxic comments. We showed that the system can be deceived by slightly perturbing abusive phrases so that they receive very low toxicity scores while preserving their intended meaning. We also showed that the system has a high false alarm rate, assigning high toxicity scores to benign phrases. We provided detailed examples for the studied cases. Our future work includes the development of countermeasures against such attacks.
Disclaimer: The phrases used in Tables I and II are chosen from the examples provided on the Perspective website [1] for the purpose of demonstrating the results, and do not represent the views or opinions of the authors or sponsoring agencies.
# REFERENCES
[1] https://www.perspectiveapi.com/
[2] M. Duggan, Online Harassment. Pew Research Center, 2014.
[3] https://meta.wikimedia.org/wiki/Research:Detox
[4] https://www.nytimes.com/interactive/2016/09/20/insider/approve-or-reject-moderation-quiz.html
[5] https://www.wired.com/2017/02/googles-troll-fighting-ai-now-belongs-world/
[6] E. Wulczyn, N. Thain, and L. Dixon, "Ex machina: Personal attacks seen at scale," arXiv preprint arXiv:1610.08914, 2016.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[8] G. E. Dahl, D. Yu, L. Deng, and A. Acero, "Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30–42, 2012.
[9] R. Collobert and J. Weston, "A unified architecture for natural language processing: Deep neural networks with multitask learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 160–167, ACM, 2008.
[10] https://jigsaw.google.com/
[11] http://www.nytco.com/the-times-is-partnering-with-jigsaw-to-expand-comment-capabilities/
[12] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, "Can machine learning be secure?," in Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pp. 16–25, ACM, 2006.
[13] M. Barreno, B. Nelson, A. D. Joseph, and J. Tygar, "The security of machine learning," Machine Learning, vol. 81, no. 2, pp. 121–148, 2010.
[14] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. Tygar, "Adversarial machine learning," in Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, pp. 43–58, ACM, 2011.
[15] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[16] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against deep learning systems using adversarial examples," arXiv preprint arXiv:1602.02697, 2016.
[17] https://conversationai.github.io/
[18] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[19] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, "The limitations of deep learning in adversarial settings," in 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387, IEEE, 2016.
[20] C. Kereliuk, B. L. Sturm, and J. Larsen, "Deep learning and music adversaries," IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 2059–2071, 2015.
[21] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel, "Adversarial perturbations against deep neural networks for malware classification," arXiv preprint arXiv:1606.04435, 2016.
[22] S. Reddy and K. Knight, "Obfuscating gender in social media writing," NLP+CSS 2016, p. 17, 2016.
arXiv:1702.04595v1 [cs.CV] 15 Feb 2017

Published as a conference paper at ICLR 2017

VISUALIZING DEEP NEURAL NETWORK DECISIONS: PREDICTION DIFFERENCE ANALYSIS

Luisa M Zintgraf1,3, Taco S Cohen1, Tameem Adel1, Max Welling1,2
1University of Amsterdam, 2Canadian Institute for Advanced Research, 3Vrije Universiteit Brussel
{lmzintgraf,tameem.hesham}@gmail.com, {t.s.cohen, m.welling}@uva.nl
# ABSTRACT

This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcomings of previous methods and provides great additional insight into the decision-making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).
# 1 INTRODUCTION
Over the last few years, deep neural networks (DNNs) have emerged as the method of choice for perceptual tasks such as speech recognition and image classification. In essence, a DNN is a highly complex non-linear function, which makes it hard to understand how a particular classification comes about. This lack of transparency is a significant impediment to the adoption of deep learning in areas of industry, government and healthcare where the cost of errors is high.
In order to realize the societal promise of deep learning - e.g., through self-driving cars or personalized medicine - it is imperative that classifiers learn to explain their decisions, whether in the lab, the clinic, or the courtroom. In scientific applications, a better understanding of the complex dependencies learned by deep networks could lead to new insights and theories in poorly understood domains.

In this paper, we present a new, probabilistically sound methodology for explaining classification decisions made by deep neural networks. The method can be used to produce a saliency map for each (instance, node) pair that highlights the parts (features) of the input that constitute the most evidence for or against the activation of the given (internal or output) node. See Figure 1 for an example.

In the following two sections, we review related work and then present our approach. In section 4 we provide several demonstrations of our technique for deep convolutional neural networks (DCNNs) trained on ImageNet data, and we further show how the method can be applied when classifying MRI brain scans of HIV patients with neurodegenerative disease.
Figure 1: Example of our visualization method, explaining why the DCNN (GoogLeNet) predicts "cockatoo". Shown is the evidence for (red) and against (blue) the prediction. We see that the facial features of the cockatoo are most supportive of the decision, and parts of the body seem to constitute evidence against it. In fact, the classifier most likely considers them evidence for the second-highest scoring class, white wolf.

# 2 RELATED WORK

Broadly speaking, two approaches for understanding DCNNs through visualization have been investigated in the literature: finding an input image that maximally activates a given unit or class score, to visualize what the network is looking for (Erhan et al., 2009; Simonyan et al., 2013; Yosinski et al., 2015), or visualizing how the network responds to a specific input image in order to explain a particular classification made by the network. The latter is the subject of this paper.
One such instance-specific method is the class saliency visualization proposed by Simonyan et al. (2013), who measure how sensitive the classification score is to small changes in pixel values by computing the partial derivative of the class score with respect to the input features using standard backpropagation. They also show that there is a close connection to using deconvolutional networks for visualization, as proposed by Zeiler & Fergus (2014). Other methods include Shrikumar et al. (2016), who compare the activation of a unit, when a specific input is fed forward through the net, to a reference activation for that unit. Zhou et al. (2016) and Bach et al. (2015) also generate interesting visualization results for individual inputs, but both are not as closely related to our method as the two papers mentioned above. The idea of our method is similar to another analysis by Zeiler & Fergus (2014): they estimate the importance of input pixels by visualizing the probability of the (correct) class as a function of a gray patch occluding parts of the image. In this paper, we take a more rigorous approach to both removing information from the image and evaluating the effect of this.
In the field of medical image classification specifically, a widely used method for visualizing feature importances is to simply plot the weights of a linear classifier (Klöppel et al., 2008; Ecker et al., 2010), or the p-values of these weights (determined by permutation testing) (Mourao-Miranda et al., 2005; Wang et al., 2007). These are independent of the input image, and, as argued by Gaonkar & Davatzikos (2013) and Haufe et al. (2014), interpreting these weights can be misleading in general.

The work presented in this paper is based on an instance-specific method by Robnik-Šikonja & Kononenko (2008), the prediction difference analysis, which is reviewed in the next section. Our main contributions are three substantial improvements of this method: conditional sampling (section 3.1), multivariate analysis (section 3.2), and deep visualization (section 3.3).

# 3 APPROACH
Our method is based on the technique presented by Robnik-Šikonja & Kononenko (2008), which we will now review. For a given prediction, the method assigns a relevance value to each input feature with respect to a class c. The basic idea is that the relevance of a feature xi can be estimated by measuring how the prediction changes if the feature is unknown, i.e., the difference between p(c|x) and p(c|x\i), where x\i denotes the set of all input features except xi.

To find p(c|x\i), i.e., to evaluate the prediction when a feature is unknown, the authors propose three strategies. The first is to label the feature as unknown (which only few classifiers allow). The second is to re-train the classifier with the feature left out (which is clearly infeasible for DNNs and high-dimensional data like images). The third approach is to simulate the absence of a feature by marginalizing it out:

$$p(c \mid x_{\setminus i}) = \sum_{x_i} p(x_i \mid x_{\setminus i})\, p(c \mid x_{\setminus i}, x_i) \qquad (1)$$
(with the sum running over all possible values of xi). However, modeling p(xi|x\i) can easily become infeasible with a large number of features. Therefore, the authors approximate equation (1) by assuming that feature xi is independent of the other features x\i:

$$p(c \mid x_{\setminus i}) \approx \sum_{x_i} p(x_i)\, p(c \mid x_{\setminus i}, x_i) \qquad (2)$$

The prior probability p(xi) is usually approximated by the empirical distribution of that feature.

Once the class probability p(c|x\i) is estimated, it can be compared to p(c|x). We stick to an evaluation proposed by the authors, referred to as the weight of evidence, given by

$$\mathrm{WE}_i(c \mid \mathbf{x}) = \log_2\big(\mathrm{odds}(c \mid \mathbf{x})\big) - \log_2\big(\mathrm{odds}(c \mid \mathbf{x}_{\setminus i})\big) \qquad (3)$$

where odds(c|x) = p(c|x)/(1 − p(c|x)). To avoid problems with zero probabilities, the Laplace correction p ← (pN + 1)/(N + K) is used, where N is the number of training instances and K the number of classes.
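As a quick worked sketch, the weight of evidence in equation (3) can be computed directly from the two class probabilities; the training-set size and class count below are illustrative numbers, not values from the paper.

```python
import math

def weight_of_evidence(p_full, p_marginal, n_train=10_000, n_classes=1_000):
    """Weight of evidence (eq. 3) with Laplace-corrected probabilities."""
    def laplace(p):
        return (p * n_train + 1) / (n_train + n_classes)
    def log_odds(p):
        return math.log2(p / (1 - p))
    return log_odds(laplace(p_full)) - log_odds(laplace(p_marginal))

# p(c|x) = 0.9; removing feature i drops it to p(c|x\i) = 0.6:
print(weight_of_evidence(0.9, 0.6))  # positive -> evidence for class c
```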
Figure 2: Simple illustration of the sampling procedure in Algorithm 1. Given the input image x, we select every possible patch x_w of size k × k (in a sliding-window fashion) and place a larger patch x̂_w of size l × l around it. We can then conditionally sample x_w by conditioning on the surrounding patch x̂_w.

Algorithm 1 Evaluating the prediction difference using conditional and multivariate sampling
Input: classifier with outputs p(c|x), input image x of size n × n, inner patch size k, outer patch size l > k, class of interest c, probabilistic model over patches of size l × l, number of samples S
Initialization: WE = zeros(n*n), counts = zeros(n*n)
for every patch x_w of size k × k in x do
    x' = copy(x)
    sum_w = 0
    define patch x̂_w of size l × l that contains x_w
    for s = 1 to S do
        x'_w ← x_w sampled from p(x_w | x̂_w \ x_w)
        sum_w += p(c|x')                       ▷ evaluate classifier
    end for
    p(c|x \ x_w) := sum_w / S
    WE[coordinates of x_w] += log2(odds(c|x)) − log2(odds(c|x \ x_w))
    counts[coordinates of x_w] += 1
end for
Output: WE / counts                            ▷ point-wise division
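A compact NumPy sketch of Algorithm 1 is given below, assuming a grayscale image, a classifier wrapper `predict_proba(x)` returning p(c|x) for the class of interest, and a `sample_patch(outer, inner)` routine implementing the conditional patch model; all three names are assumptions for illustration, and a small epsilon stands in for the Laplace correction discussed above.

```python
import numpy as np

def prediction_difference(x, predict_proba, sample_patch, k=4, l=8, S=10):
    """Sketch of Algorithm 1 for a 2D image x (n x n array).

    predict_proba: callable, image -> p(c|image) for the class of interest.
    sample_patch:  callable, (outer patch, inner slices) -> k x k sample
                   drawn conditionally on the rest of the outer patch.
    """
    n = x.shape[0]
    WE = np.zeros((n, n))
    counts = np.zeros((n, n))
    eps = 1e-12                          # stand-in for the Laplace correction

    def log_odds(p):
        return np.log2((p + eps) / (1.0 - p + eps))

    p_full = predict_proba(x)

    for i in range(n - k + 1):           # slide the k x k inner patch
        for j in range(n - k + 1):
            pad = (l - k) // 2           # outer l x l window, clipped at borders
            i0, j0 = max(0, i - pad), max(0, j - pad)
            i1, j1 = min(n, i + k + pad), min(n, j + k + pad)
            outer = x[i0:i1, j0:j1]
            inner = (slice(i - i0, i - i0 + k), slice(j - j0, j - j0 + k))

            total = 0.0
            x_mod = x.copy()
            for _ in range(S):           # Monte Carlo estimate of p(c|x \ x_w)
                x_mod[i:i + k, j:j + k] = sample_patch(outer, inner)
                total += predict_proba(x_mod)
            p_removed = total / S

            WE[i:i + k, j:j + k] += log_odds(p_full) - log_odds(p_removed)
            counts[i:i + k, j:j + k] += 1

    return WE / np.maximum(counts, 1)    # average relevance per pixel
```

The marginal variant of equation (2) is recovered by letting `sample_patch` ignore the conditioning and draw patches from their empirical distribution instead.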
The method produces a relevance vector (WE_i), i = 1 ... m (m being the number of features), of the same size as the input, which reflects the relative importance of all features. A large prediction difference means that the feature contributed substantially to the classification, whereas a small difference indicates that the feature was not important for the decision. A positive value WE_i means that the feature contributed evidence for the class of interest: removing it would decrease the classifier's confidence in the given class. A negative value, on the other hand, indicates that the feature displays evidence against the class: removing it also removes potentially conflicting or irritating information, and the classifier becomes more certain of the investigated class.

3.1 CONDITIONAL SAMPLING
In equation (2), the conditional probability p(xi|x\i) of a feature xi is approximated using the marginal distribution p(xi). This is a very crude approximation. In images, for example, a pixel's value is highly dependent on other pixels. We propose a much more accurate approximation, based on the following two observations: a pixel depends most strongly on a small neighborhood around it, and the conditional of a pixel given its neighborhood does not depend on the position of the pixel in the image. For a pixel xi, we can therefore find a patch x̂i of size l × l that contains xi, and condition on the remaining pixels in that patch:

$$p(x_i \mid x_{\setminus i}) \approx p(x_i \mid \hat{x}_{\setminus i}) \qquad (4)$$

This greatly improves the approximation while remaining completely tractable.

For a feature to become relevant when using conditional sampling, it now has to satisfy two conditions: it must be relevant for predicting the class of interest, and it must be hard to predict from the neighboring pixels. Relative to the marginal method, we therefore downweight pixels that can easily be predicted and are thus redundant in this sense.
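The excerpt does not commit to a particular patch model; a common and tractable choice, shown here purely as an illustrative assumption, is a multivariate Gaussian fitted to flattened training patches, for which the conditional in equation (4) is available in closed form.

```python
import numpy as np

class GaussianPatchModel:
    """Multivariate Gaussian over flattened l x l patches; supports
    sampling the inner pixels conditioned on the surrounding ones."""

    def __init__(self, patches):            # patches: (num_patches, l*l)
        self.mu = patches.mean(axis=0)
        self.cov = np.cov(patches, rowvar=False)

    def sample_inner(self, outer_flat, inner_idx):
        """Sample x_inner ~ p(x_inner | x_outer); inner_idx holds the flat
        indices of the k x k inner patch within the l x l window."""
        d = len(self.mu)
        outer_idx = np.setdiff1d(np.arange(d), inner_idx)
        # Partition mean and covariance (standard Gaussian conditioning).
        mu_i, mu_o = self.mu[inner_idx], self.mu[outer_idx]
        S_ii = self.cov[np.ix_(inner_idx, inner_idx)]
        S_io = self.cov[np.ix_(inner_idx, outer_idx)]
        S_oo = self.cov[np.ix_(outer_idx, outer_idx)]
        K = S_io @ np.linalg.pinv(S_oo)
        cond_mu = mu_i + K @ (outer_flat[outer_idx] - mu_o)
        cond_cov = S_ii - K @ S_io.T
        cond_cov = (cond_cov + cond_cov.T) / 2   # guard against asymmetry
        return np.random.multivariate_normal(cond_mu, cond_cov)
```

Wrapping `sample_inner` to reshape its output to k × k yields a drop-in `sample_patch` for the Algorithm 1 sketch above.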
1702.04595 | 12 | 3
Published as a conference paper at ICLR 2017
3.2 MULTIVARIATE ANALYSIS
Robnik-Šikonja & Kononenko (2008) take a univariate approach: only one feature at a time is removed. However, we would expect that a neural network is relatively robust to just one feature of a high-dimensional input being unknown, like a single pixel in an image. Therefore, we remove several features at once, again making use of our knowledge about images by strategically choosing these feature sets: patches of connected pixels. Instead of going through all individual pixels, we go through all patches of size k × k in the image (k × k × 3 for RGB images and k × k × k for 3D images like MRI scans), implemented in a sliding-window fashion. The patches overlap, so that ultimately an individual pixel's relevance is obtained by averaging the relevance obtained from the different patches it was in.
Algorithm 1 and figure 2 illustrate how the method can be implemented, incorporating the proposed improvements.
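Since Algorithm 1 itself appears earlier in the paper, the following condensed sketch shows one way the sliding-window procedure could look for a 2D grayscale image; predict_prob (returning p(c|image) for the class of interest) and sample_patch (imputing a missing patch, marginally or conditionally) are placeholder callables, and the weight-of-evidence formula is assumed for the relevance:

import numpy as np

def prediction_difference(image, predict_prob, sample_patch, k=10, n_samples=10):
    eps = 1e-12
    H, W = image.shape[:2]
    relevance = np.zeros((H, W))
    counts = np.zeros((H, W))
    p_full = np.clip(predict_prob(image), eps, 1 - eps)
    for y in range(H - k + 1):
        for x in range(W - k + 1):
            probs = []
            for _ in range(n_samples):
                im = image.copy()
                im[y:y+k, x:x+k] = sample_patch(image, y, x, k)
                probs.append(predict_prob(im))
            p_wo = np.clip(np.mean(probs), eps, 1 - eps)  # estimate of p(c|x_\i)
            we = np.log2(p_full / (1 - p_full)) - np.log2(p_wo / (1 - p_wo))
            relevance[y:y+k, x:x+k] += we  # every pixel in the patch gets the score
            counts[y:y+k, x:x+k] += 1
    return relevance / counts              # average over overlapping patches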
3.3 DEEP VISUALIZATION OF HIDDEN LAYERS
When trying to understand neural networks and how they make decisions, it is not only interesting to analyze the input-output relation of the classifier, but also to look at what is going on inside the hidden layers of the network. We can adapt the method to see how the units of any layer of the network influence a node from a deeper layer. Mathematically, we can formulate this as follows. Let h be the vector representation of the values in a layer H in the network (after forward-propagating the input up to this layer). Further, let z = z(h) be the value of a node that depends on h, i.e., a node in a subsequent layer. Then the analog of equation (2) is given by the expectation:
g(z | h_\i) ≡ E_{p(h_i | h_\i)}[z(h)] = Σ_{h_i} p(h_i | h_\i) z(h_\i, h_i) ,   (5)
which expresses the distribution of z when unit h_i in layer H is unobserved. The equation now works for arbitrary layer/unit combinations, and evaluates to the same as equation (1) when the input-output relation is analyzed. To evaluate the difference between g(z|h) and g(z|h_\i), we will in general use the activation difference, AD_i(z|h) = g(z|h) − g(z|h_\i), for the case when we are not dealing with probabilities (and equation (3) is not applicable).
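A minimal sketch of estimating this activation difference by Monte Carlo; z_of_h (the forward map from layer H to the node z) and the samples approximating p(h_i|h_\i) are assumed to be supplied by the caller:

import numpy as np

def activation_difference(z_of_h, h, i, hi_samples):
    # AD_i(z|h) = z(h) - E[z(h) with unit i resampled from p(h_i | h_\i)]
    z_full = z_of_h(h)
    zs = []
    for hi in hi_samples:
        h_mod = h.copy()
        h_mod[i] = hi          # replace only unit i, keep the rest observed
        zs.append(z_of_h(h_mod))
    return z_full - np.mean(zs)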
4 EXPERIMENTS
In this section, we illustrate how the proposed visualization method can be applied, on the ImageNet dataset of natural images when using DCNNs (section 4.1), and on a medical imaging dataset of MRI scans when using a logistic regression classifier (section 4.2). For marginal sampling we always use the empirical distribution, i.e., we replace a feature (patch) with samples taken directly from other images, at the same location. For conditional sampling we use a multivariate normal distribution. For both sampling methods we use 10 samples to estimate p(c|x_\i) (since no significant difference was observed with more samples). Note that all images are best viewed digitally and in color.
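The empirical marginal sampler described here reduces to a few lines; a sketch (with an assumed in-memory dataset of images) that plugs into the sliding-window sketch from section 3.2:

import numpy as np

def make_marginal_sampler(dataset, seed=0):
    # Replace a patch with the patch at the same location in a randomly
    # drawn other image, approximating p(x_i) empirically.
    rng = np.random.default_rng(seed)
    def sample_patch(image, y, x, k):
        other = dataset[rng.integers(len(dataset))]
        return other[y:y+k, x:x+k]
    return sample_patch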
Our implementation is available at github.com/lmzintgraf/DeepVis-PredDiff.
4.1 IMAGENET: UNDERSTANDING HOW A DCNN MAKES DECISIONS
We use images from the ILSVRC challenge (Russakovsky et al., 2015) (a large dataset of natural images from 1000 categories) and three DCNNs: the AlexNet (Krizhevsky et al., 2012), the GoogLeNet (Szegedy et al., 2015) and the (16-layer) VGG network (Simonyan & Zisserman, 2014). We used the publicly available pre-trained models that were implemented using the deep learning framework caffe (Jia et al., 2014). Analyzing one image took us on average 20, 30 and 70 minutes for the respective classifiers AlexNet, GoogLeNet and VGG (using the GPU implementation of caffe and mini-batches with the standard settings of 10 samples and a window size of k = 10).
The results shown here are chosen from among a small set of images in order to show a range of behavior of the algorithm. The shown images are quite representative of the performance of the method in general. Examples on randomly selected images, including a comparison to the sensitivity analysis of Simonyan et al. (2013), can be seen in appendix A.
Figure 3: Visualization of the effects of marginal versus conditional sampling using the GoogLeNet classifier. The classifier makes correct predictions (ostrich and saxophone), and we show the evidence for (red) and against (blue) this decision at the output layer. We can see that conditional sampling gives more targeted explanations compared to marginal sampling. Also, marginal sampling assigns too much importance to pixels that are easily predictable conditioned on their neighboring pixels.
Figure 4: Visualization of how different window sizes influence the visualization result. We used the conditional sampling method and the AlexNet classifier with l = k + 4 and varying k. We can see that even when removing single pixels (k = 1), this has a noticeable effect on the classifier, and more important pixels get a higher score. By increasing the window size we can get a more easily interpretable, smooth result, until the image gets blurry for very large window sizes.
We start this section by demonstrating our proposed improvements (sections 3.1-3.3).
Marginal vs Conditional Sampling
Figure 3 shows visualizations of the spatial support for the highest scoring class, using marginal and conditional sampling (with k = 10 and l = 14). We can see that conditional sampling leads to results that are more refined in the sense that they concentrate more around the object. We can also see that marginal sampling leads to pixels being declared important that are very easily predictable conditioned on their neighboring pixels (like in the saxophone example). Throughout our experiments, we have found that conditional sampling tends to give more specific and fine-grained results than marginal sampling. For the rest of our experiments, we therefore show results using conditional sampling only.
Multivariate Analysis
For ImageNet data, we have observed that setting k = 10 gives a good trade-off between sharp results and a smooth appearance. Figure 4 shows how different window sizes influence the resolution of the visualization. Surprisingly, removing only one pixel does have a measurable effect on the prediction, and the largest effect comes from sensitive pixels. We expected that removing only one pixel would not have any effect on the classification outcome, but apparently the classifier is sensitive even to these small changes. However, when using such a small window size, it is difficult to make sense of the sign information in the visualization. If we want to get a good impression of which parts of the image are evidence for/against a class, it is therefore better to use larger windows. If k is chosen too large, however, the results tend to get blurry. Note that these results are not just simple averages of one another; a multivariate approach is indeed necessary to obtain the presented results.
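Such a window-size sweep (as in figure 4) is a one-liner when reusing the prediction_difference and sampler sketches above; image and predict_prob are assumed from that context:

# Trade detail against smoothness by sweeping the window size k.
maps = {k: prediction_difference(image, predict_prob, sampler, k=k)
        for k in (1, 2, 4, 8, 10, 14)}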
Deep Visualization of Hidden Network Layers
Our third main contribution is the extension of the method to the hidden layers of a neural network, to understand the role they play in the decisions of a DNN. Figure 5 shows how different feature maps in three different layers of the GoogLeNet react to the input of a tabby cat (see figure 6, middle image). For each feature map in a convolutional layer, we first compute the relevance of the input image for each hidden unit in that map. To estimate what the feature map as a whole is doing, we show the average of the relevance vectors over all units in that feature map. The first convolutional layer works with different types of simple image filters (e.g., edge detectors), and what we see is which parts of the input image respond
Figure 5: Visualization of feature maps from three different layers of the GoogLeNet (l.t.r.: "conv1/7x7_s2", "inception_3a/output", "inception_5b/output"), using conditional sampling and patch sizes k = 10 and l = 14 (see alg. 1). For each feature map in the convolutional layer, we first evaluate the relevance for every single unit, and then average the results over all the units in one feature map to get a sense of what the map is doing as a whole. Red pixels activate a unit, blue pixels decrease the activation.
Figure 6: Visualization of three different feature maps, taken from the "inception_3a/output" layer of the GoogLeNet (from the middle of the network). Shown is the average relevance of the input features over all activations of the feature map. We used patch sizes k = 10 and l = 14 (see alg. 1). Red pixels activate a unit, blue pixels decrease the activation.
positively or negatively to these filters. The layer we picked from somewhere in the middle of the network is specialized to higher-level features (like facial features of the cat). The activations of the last convolutional layer are very sparse across feature channels, indicating that these units are highly specialized.
To get a sense of what single feature maps in convolutional layers are doing, we can look at their visualization for different input images and look for patterns in their behavior. Figure 6 shows this for four different feature maps from a layer from the middle of the GoogLeNet network. We can directly see which kind of features the model has learned at this stage in the network. For example, one feature map is mostly activated by the eyes of animals (third row), and another is looking mostly at the background (last row).
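The per-map summary used in figures 5 and 6 is a plain average over the unit-wise relevance maps; a sketch, with an assumed stacked array of per-unit maps:

import numpy as np

def feature_map_relevance(unit_relevances):
    # unit_relevances: array (U, H, W), one input-relevance map per hidden
    # unit of a feature map; the mean summarizes the map as a whole.
    return np.asarray(unit_relevances).mean(axis=0)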
Penultimate vs Output Layer
If we visualize the influence of the input features on the penultimate (pre-softmax) layer, we show only the evidence for/against this particular class, without taking other classes into consideration. After the softmax operation, however, the values of the nodes are all interdependent: a drop in the probability for one class could be due to less evidence for it, or because a different class becomes more likely. Figure 7 compares visualizations for the last two layers. By looking at the top three scoring classes, we can see that the visualizations in the penultimate layer look very similar if the classes are similar (like different dog breeds). When looking at the output layer, however, they look rather different. Consider the case of the elephants: the top three classes are different elephant subspecies, and the visualizations of the penultimate layer look similar since every subspecies can be identified by similar characteristics. But in the output layer, we can see how the classifier decides for one of the three types of elephants and against the others: the ears in this case are the crucial difference.
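The distinction can be made concrete with a small numerical sketch: the penultimate value of a class is its raw logit, which ignores all other classes, while the output value is the softmax probability, which is coupled to every competing logit (the logits below are the elephant values reported in figure 7):

import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

logits = np.array([29.86, 29.29, 25.78])  # african elephant, tusker, indian elephant
print(logits[0])            # 29.86: penultimate value, independent of the rest
print(softmax(logits)[0])   # ~0.63: output value, depends on all logits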
[Figure 7 panel values: penultimate layer - african eleph. 29.86, tusker 29.29, indian eleph. 25.78; french bulldog 27.77, boston bull 26.35, am. staffordsh. 17.67. Output layer - african eleph. 0.63, tusker 0.01, indian eleph. 0.36; am. staffordsh. 0.00.]
Figure 7: Visualization of the support for the top three scoring classes in the penultimate and output layers. Next to the input image, the first row shows the results with respect to the penultimate layer; the second row with respect to the output layer. For each image, we additionally report the values of the units. We used the AlexNet with conditional sampling and patch sizes k = 10 and l = 14 (see alg. 1). Red pixels are evidence for a class, and blue against it.
Figure 8: Comparison of the prediction visualization of different DCNN architectures. For two input images, we show the results of the prediction difference analysis when using different neural networks - the AlexNet, GoogLeNet and VGG network.
Network Comparison
When analyzing how neural networks make decisions, we can also compare how different network architectures influence the visualization. Here, we tested our method on the AlexNet, the GoogLeNet and the VGG network. Figure 8 shows the results for the three different networks, on two input images. The AlexNet seems to rely more on contextual information (the sky in the balloon image), which could be attributed to it having the least complex architecture of the three networks. It is also interesting to see that the VGG network deems the basket of the balloon very important compared to all other pixels. The second highest scoring class in this case was a parachute - presumably, the network learned not to confuse a balloon with a parachute by detecting a square basket (and not a human).
4.2 MRI DATA: EXPLAINING CLASSIFIER DECISIONS IN MEDICAL IMAGING
To illustrate how our visualization method can also be useful in a medical domain, we show some experimental results on an MRI dataset of HIV patients and healthy controls. In such settings, it is crucial that the practitioner has some insight into the algorithm's decision when classifying a patient, in order to weigh this information and incorporate it in the overall diagnosis process.
The dataset used here is referred to as the COBRA dataset. It contains 3D MRIs from 100 HIV patients and 70 healthy individuals, included at the Academic Medical Center (AMC) in Amsterdam, The Netherlands. Of these subjects, diffusion-weighted MRI data were acquired. Preprocessing of the data was performed with software developed in-house, using the HPCN-UvA Neuroscience Gateway and resources of the Dutch e-Science Grid (Shahand et al., 2015). As a result, Fractional Anisotropy (FA) maps were computed. FA is sensitive to microstructural damage and is therefore expected to be, on average, decreased in patients. Subjects were scanned on two 3.0 Tesla scanner systems, 121 subjects on a Philips Intera system and 39 on a Philips Ingenia system. Patients and controls were evenly distributed. FA images were spatially normalized to standard space (Andersson et al., 2007), resulting in volumes with 91 × 109 × 91 = 902,629 voxels.
We trained an L2-regularized logistic regression classifier on a subset of the MRI slices (slices 29-40 along the first axis) and on a balanced version of the dataset (by taking the first 70 samples of the HIV class), achieving an accuracy of 69.3% in a 10-fold cross-validation test. Analyzing one image took around half an hour (on a CPU, with k = 3 and l = 7, see algorithm 1). For conditional sampling, we also tried adding location information to equation (2), i.e., we split the 3D image into a 20 × 20 × 20 grid and also conditioned on the index in that grid. We found that this slightly improved the interpretability of the results, since in the special case of MRI scans the pixel values do depend on spatial location as well.
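A sketch of this training setup with scikit-learn; the array files are hypothetical stand-ins for the preprocessed FA data:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: flattened FA values of slices 29-40 for the 140 balanced subjects
# (70 HIV, 70 healthy); y: binary labels. File names are illustrative.
X = np.load("fa_slices_29_40.npy").reshape(140, -1)
y = np.load("labels.npy")

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
print(cross_val_score(clf, X, y, cv=10).mean())   # the text reports 0.693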
Figure 9 (first row) shows one way in which the prediction difference results could be presented to a physician, for an HIV sample. By overlaying the prediction difference and the MRI image, the exact regions can be pointed out that are evidence for (red parts) or against (blue parts) the classifier's decision. The second row shows the results using the weights of the logistic regression classifier, which is a commonly used method in the neuroscientific literature. We can see that they are considerably noisier (in the sense that, compared to our method, the voxels relevant for the classification decisions are more scattered), and also that they are not specific to the given image. Figure 10 shows the visualization results for four healthy and four HIV samples. We can clearly see that the patterns for the two classes are distinct, and that there is some pattern to the decision of the classifier, but one that is still specific to the input image. Figure 11 shows the same (HIV) sample as in figure 9 along different axes, and figure 12 shows how the visualization changes with different patch sizes.
In general, we can assume that the better the classifier, the closer the explanations for its decisions are to the true class difference. For clinical practice it is therefore crucial to have very good classifiers. This will increase computation time, but in many medical settings longer waiting times for test results are common, and worth the wait if the patient is not in an acutely life-threatening condition (e.g., when predicting HIV or Alzheimer's from MRI scans, or in the field of cancer diagnosis and detection). The results presented here are for demonstration purposes of the visualization method, and we claim no medical validity. A thorough qualitative analysis incorporating expert knowledge was outside the scope of this paper.
5 FUTURE WORK
In our experiments, we used a simple multivariate normal distribution for conditional sampling. We can imagine that using more sophisticated generative models would lead to better results: pixels that are easily predictable from their surroundings would be downweighted even more. However, this would also significantly increase the computational resources needed to produce the explanations. Similarly, we could try to modify equation (4) to get an even better approximation by using a conditional distribution that takes more information about the whole image into account (like adding spatial information for the MRI scans).
To make the method applicable to clinical analysis and practice, a better classification algorithm is required. In addition, software that visualizes the results as an interactive 3D model would improve the usability of the system.
6 CONCLUSION
We presented a new method for visualizing deep neural networks that improves on previous methods by using a more powerful conditional, multivariate model. The visualization method shows which pixels of a specific input image are evidence for or against a node in the network. The signed information offers new insights, both for research on the networks themselves and for their acceptance and usability in domains like healthcare. While our method requires significant computational resources, real-time 3D visualization is possible when visualizations are pre-computed. With further optimization and powerful GPUs, pre-computation time can be reduced much further. In our experiments, we have presented several ways in which the visualization method can be put to use for analyzing how DCNNs make decisions.
Figure 9: Visualization of the support for the correct classification "HIV", using the prediction difference method and logistic regression weights. For an HIV sample, we show the results with the prediction difference (first row) and using the weights of the logistic regression classifier (second row), for slices 29 and 40 (along the first axis). Red are positive values, blue negative. For each slice, the left image shows the original image overlaid with the relevance values; the right image shows the original image with reversed colors and the relevance values. Relevance values are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.
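The 15% display threshold used in this and the following figures amounts to a simple mask over the relevance volume; a sketch:

import numpy as np

def mask_relevance(rel, frac=0.15):
    # Keep relevance only where |rel| exceeds `frac` of max|rel|; masked
    # voxels (NaN) are simply not drawn in the overlay.
    return np.where(np.abs(rel) >= frac * np.abs(rel).max(), rel, np.nan)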
Figure 10: Prediction difference visualization for different samples. The first four samples are of the class "healthy"; the last four of the class "HIV". All images show slice 39 (along the first axis). All samples are correctly classified, and the results show evidence for (red) and against (blue) this decision. Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.
[Figure 11 panels: slices 29, 31, 33, 35, 37, 39.]
Figure 11: Visualization results across different slices of the MRI image, using the same input image as shown in figure 9. Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.
[Figure 12 panels: input, k = 2, k = 3, k = 10.]
Figure 12: How the patch size influences the visualization. For the input image (HIV sample, slice 39 along the first axis) we show the visualization with different patch sizes (k in alg. 1). Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value (for k = 2 it is 10%).
ACKNOWLEDGMENTS
This work was supported by an AWS in Education Grant award. We thank Facebook and Google for financial support, and our reviewers for their time and valuable, constructive feedback.
This work was also in part supported by: Innoviris, the Brussels Institute for Research and Innovation, Brussels, Belgium; the Nuts-OHRA Foundation (grant no. 1003-026), Amsterdam, The Netherlands; and The Netherlands Organization for Health Research and Development (ZonMW) together with AIDS Fonds (grant nos. 300020007 and 2009063). Additional unrestricted scientific grants were received from Gilead Sciences, ViiV Healthcare, Janssen Pharmaceutica N.V., Bristol-Myers Squibb, Boehringer Ingelheim, and Merck & Co.
We thank Barbara Elsenga, Jane Berkel, Sandra Moll, Maja Totté, and Marjolein Martens for running the AGEhIV study program and capturing our data with such care and passion. We thank Yolanda Ruijs-Tiggelman, Lia Veenenberg-Benschop, Sima Zaheri, and Mariska Hillebregt at the HIV Monitoring Foundation for their contributions to data management. We thank Aafien Henderiks and Hans-Erik Nobel for their advice on logistics and organization at the Academic Medical Center. We thank all HIV physicians and HIV nurses at the Academic Medical Center for their efforts to include the HIV-infected participants in the AGEhIV Cohort Study, and the Municipal Health Service Amsterdam personnel for their efforts to include the HIV-uninfected participants in the AGEhIV Cohort Study. We thank all study participants, without whom this research would not be possible.
AGEhIV Cohort Study Group. Scientific oversight and coordination: P. Reiss (principal investigator), F.W.N.M. Wit, M. van der Valk, J. Schouten, K.W. Kooij, R.A. van Zoest, E. Verheij, B.C. Elsenga (Academic Medical Center (AMC), Department of Global Health and Amsterdam Institute for Global Health and Development (AIGHD)). M. Prins (co-principal investigator), M.F. Schim van der Loeff, M. Martens, S. Moll, J. Berkel, M. Totté, G.R. Visser, L. May, S. Kovalev, A. Newsum, M. Dijkstra (Public Health Service of Amsterdam, Department of Infectious Diseases). Datamanagement: S. Zaheri, M.M.J. Hillebregt, Y.M.C. Ruijs, D.P. Benschop, A. el Berkaoui (HIV Monitoring Foundation).
Central laboratory support: N.A. Kootstra, A.M. Harskamp-Holwerda, I. Maurer, T. Booiman, M.M. Mangas Ruiz, A.F. Girigorie, B. Boeser-Nunnink (AMC, Laboratory for Viral Immune Pathogenesis and Department of Experimental Immunology). Project management and administrative support: W. Zikkenheiner, F.R. Janssen (AIGHD). Participating HIV physicians and nurses: S.E. Geerlings, M.H. Godfried, A. Goorhuis, J.W.R. Hovius, J.T.M. van der Meer, F.J.B. Nellen, T. van der Poll, J.M. Prins, P. Reiss, M. van der Valk, W.J. Wiersinga, M. van Vugt, G. de Bree, F.W.N.M. Wit; J. van Eden, A.M.H. van Hes, M. Mutschelknauss, H.E. Nobel, F.J.J. Pijnappel, M. Bijsterveld, A. Weijsenfeld, S. Smalhout (AMC, Division of Infectious Diseases).
Other collaborators: J. de Jong, P.G. Postema (AMC, Department of Cardiology); P.H.L.T. Bisschop, M.J.M. Serlie (AMC, Division of Endocrinology and Metabolism); P. Lips (Free University Medical Center Amsterdam); E. Dekker (AMC, Department of Gastroenterology); N. van der Velde (AMC, Division of Geriatric Medicine); J.M.R. Willemsen, L. Vogt (AMC, Division of Nephrology); J. Schouten, P. Portegies, B.A. Schmand, G.J. Geurtsen (AMC, Department of Neurology); F.D. Verbraak, N. Demirkaya (AMC, Department of Ophthalmology); I. Visser (AMC, Department of Psychiatry);
A. Schadé (Free University Medical Center Amsterdam, Department of Psychiatry); P.T. Nieuwkerk, N. Langebeek (AMC, Department of Medical Psychology); R.P. van Steenwijk, E. Dijkers (AMC, Department of Pulmonary Medicine); C.B.L.M. Majoie, M.W.A. Caan, T. Su (AMC, Department of Radiology); H.W. van Lunsen, M.A.F. Nievaard (AMC, Department of Gynaecology); B.J.H. van den Born, E.S.G. Stroes (AMC, Division of Vascular Medicine); W.M.C. Mulder (HIV Vereniging Nederland).
REFERENCES
Jesper LR Andersson, Mark Jenkinson, and Stephen Smith. Non-linear optimisation. FMRIB Technical Report TR07JA1. University of Oxford FMRIB Centre: Oxford, UK, 2007.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015.
Christine Ecker, Andre Marquand, Janaina Mourão-Miranda, Patrick Johnston, Eileen M Daly, Michael J Brammer, Stefanos Maltezos, Clodagh M Murphy, Dene Robertson, Steven C Williams, et al. Describing the brain in autism in five dimensions–magnetic resonance imaging-assisted diagnosis of autism spectrum disorder using a multiparameter classification approach. The Journal of Neuroscience, 30(32):10612–10623, 2010.
Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Dept. IRO, Université de Montréal, Tech. Rep. 4323, 2009.
Bilwaj Gaonkar and Christos Davatzikos. Analytic estimation of statistical significance maps for support vector machine based multi-variate image analysis and classification. NeuroImage, 78:270–283, 2013.
Stefan Haufe, Frank Meinecke, Kai Görgen, Sven Dähne, John-Dylan Haynes, Benjamin Blankertz, and Felix Bießmann. On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87:96–110, 2014.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
Stefan Klöppel, Cynthia M Stonnington, Carlton Chu, Bogdan Draganski, Rachael I Scahill, Jonathan D Rohrer, Nick C Fox, Clifford R Jack, John Ashburner, and Richard SJ Frackowiak. Automatic classification of MR scans in Alzheimer's disease. Brain, 131(3):681–689, 2008.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Janaina Mourao-Miranda, Arun LW Bokde, Christine Born, Harald Hampel, and Martin Stetter. Classifying brain states and determining the discriminating activation patterns: Support vector machine on functional MRI data. NeuroImage, 28(4):980–995, 2005.
Marko Robnik-Šikonja and Igor Kononenko. Explaining classifications for individual instances. IEEE Transactions on Knowledge and Data Engineering, 20(5):589–600, 2008.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. | 1702.04595#41 |
1702.04595 | 42 | Shayan Shahand, Ammar Benabdelkader, Mohammad Mahdi Jaghoori, Mostapha al Mourabit, Jordi Huguet, Matthan WA Caan, Antoine HC Kampen, and Sílvia D Olabarriaga. A data-centric neuroscience gateway: design, implementation, and experiences. Concurrency and Computation: Practice and Experience, 27(2):489–506, 2015.
Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713, 2016.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. | 1702.04595#42 |
1702.04595 | 43 | Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
Ze Wang, Anna R Childress, Jiongjiong Wang, and John A Detre. Support vector machine learning-based fMRI data group analysis. NeuroImage, 36(4):1139–1151, 2007.
Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.
Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pp. 818–833. Springer, 2014.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.
A RANDOM RESULTS | 1702.04595#43 |
1702.04595 | 44 |
[Figure 13 image grid: 34 randomly chosen ImageNet images, each labeled with its class name and the classifier's rank for that class, e.g. spatula (47), hermit crab (1), scuba diver (3), coffeepot (4); see the caption in the next chunk.] | 1702.04595#44 |
1702.04595 | 45 | Figure 13: Results on 34 randomly chosen ImageNet images. Middle columns: original image; left columns: sensitivity maps (Simonyan et al., 2013) where the red pixels indicate high sensitivity, and white pixels mean no sensitivity (note that we show the absolute values of the partial derivatives, since the sign cannot be interpreted like in our method); right columns: results from our method. For both methods, we visualize the results with respect to the correct class, which is given above the image. In brackets we see how the classifier ranks this class, i.e., a (1) means it was correctly classified, whereas a (4) means that it was misclassified and the correct class was ranked fourth. For our method, red areas show evidence for the correct class, and blue areas show evidence against the class (e.g., the scuba diver looks more like a tea pot to the classifier).
| 1702.04595#45 |
1702.03118 | 0 |
# Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
Stefan Elfwing^a ([email protected]), Eiji Uchibe^a,b ([email protected]), Kenji Doya^b ([email protected])
^a Dept. of Brain Robot Interface, ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seikacho, Soraku-gun, Kyoto 619-0288, Japan. ^b Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Okinawa 904-0495, Japan.
# Abstract | 1702.03118#0 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
1702.03044 | 1 | This paper presents incremental network quantization (INQ), a novel method targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy | 1702.03044#1 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
are struggled in noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models have improved accuracy than the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
1702.03118 | 1 |
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10×10 board, using TD(λ) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa(λ) agent with SiLU and dSiLU hidden units.
# 1 Introduction | 1702.03118#1 |
1702.03044 | 2 | enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing the zero value, and the remaining 4 bits represent at most 16 different values for the powers of two)^1, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs applicable on mobile or embedded devices. The code is available at | 1702.03044#2 |
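To make the variable-length power-of-two encoding above concrete, the following is a minimal sketch that maps a single weight onto a representable set {0, ±2^n2, ..., ±2^n1}. The exponent bounds n1 and n2, the nearest-exponent rounding, and the prune-to-zero threshold are illustrative assumptions, not the paper's exact quantization rule.

```python
import numpy as np

def quantize_pow2(w, n1, n2):
    """Map a weight to the closest value in {0} U {±2^e : n2 <= e <= n1},
    i.e., a variable-length power-of-two code of the kind described above.
    The 2^(n2 - 1) prune-to-zero threshold is an illustrative choice."""
    a = abs(w)
    if a < 2.0 ** (n2 - 1):                         # too small to represent
        return 0.0
    e = int(np.clip(np.round(np.log2(a)), n2, n1))  # nearest exponent in range
    return float(np.sign(w)) * 2.0 ** e

# Example: with n1 = -1 and n2 = -4, a weight of 0.3 maps to 2^-2 = 0.25,
# so a floating-point multiplication can become a cheap binary shift.
print(quantize_pow2(0.3, n1=-1, n2=-4))
```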
1702.03118 | 2 |
Neural networks have enjoyed a renaissance as function approximators in reinforcement learning (Sutton and Barto, 1998) in recent years. The DQN algorithm (Mnih et al., 2015), which combines Q-learning with a deep neural network, experience replay, and a separate target network, achieved human-level performance in many Atari 2600 games. Since the development of the DQN algorithm, there have been several proposed improvements, both to DQN specifically and deep reinforcement learning in general. Van Hasselt et al. (2015) proposed double DQN to reduce overestimation of the action values in DQN, and Schaul et al. (2016) developed a framework for more efficient replay by prioritizing experiences of more important state transitions. Wang et al. (2016) proposed the dueling network architecture
for more efficient learning of the action value function by separately estimating the state value function and the advantages of each action. Mnih et al. (2016) proposed a framework for asynchronous learning by multiple agents in parallel, both for value-based and actor-critic methods. | 1702.03118#2 |
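As a concrete illustration of the double DQN idea mentioned above, the sketch below computes the double DQN bootstrap target: the online network selects the greedy action and the separate target network evaluates it. The q_online and q_target interfaces (state in, vector of action values out) are illustrative assumptions.

```python
import numpy as np

def double_dqn_target(r, s_next, q_online, q_target, gamma, done):
    """Double DQN target: decoupling action selection (online network) from
    action evaluation (target network) reduces the overestimation of action
    values that plain Q-learning bootstrap targets suffer from."""
    if done:
        return r                                 # no bootstrap at episode end
    a_star = int(np.argmax(q_online(s_next)))    # online net selects the action
    return r + gamma * q_target(s_next)[a_star]  # target net evaluates it
```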
1702.03118 | 3 | The purpose of this study is twofold. First, motivated by the high performance of the expected energy restricted Boltzmann machine (EE-RBM) in our earlier studies (Elfwing et al., 2015, 2016), we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input, and it looks like a continuous and "undershooting" version of the linear rectifier unit (ReLU) (Hahnloser et al., 2000). The activation of the dSiLU looks like a steeper and "overshooting" version of the sigmoid function. | 1702.03118#3 |
1702.03044 | 4 | # 1 INTRODUCTION
Deep convolutional neural networks (CNNs) have demonstrated record-breaking results on a variety of computer vision tasks such as image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015), face recognition (Taigman et al., 2014; Sun et al., 2014), semantic segmentation (Long et al., 2015; Chen et al., 2015a) and object detection (Girshick, 2015; Ren et al., 2015). Regardless of the availability of significantly improved training resources such as abundant annotated data, powerful computational platforms and diverse training frameworks, the promising results of deep CNNs are mainly attributed to the large number of learnable parameters, ranging from tens of millions to even hundreds of millions. Recent progress further shows clear evidence that CNNs could easily enjoy the accuracy gain from the increased network depth and width (He et al., 2016; Szegedy et al., 2015; 2016). However, this in turn lays heavy burdens on the memory and [Footnote ∗: This work was done when Aojun Zhou was an intern at Intel Labs China, supervised by Anbang Yao who proposed the original idea and is responsible for correspondence. The first three authors contributed equally to the writing of the paper.] | 1702.03044#4 |
1702.03118 | 4 | Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. Our approach is something of a throwback to the approach used by Tesauro (1994) to develop TD-Gammon more than two decades ago. Using a neural network function approximator and TD(λ) learning (Sutton, 1988), TD-Gammon reached near top-level human performance in backgammon, which to this day remains one of the most impressive applications of reinforcement learning. | 1702.03118#4 |
1702.03044 | 5 | ^1 This notation applies to our method throughout the paper.
other computational resources. For instance, ResNet-152, a specific instance of the latest residual network architecture winning the ImageNet classification challenge in 2015, has a model size of about 230 MB and needs to perform about 11.3 billion FLOPs to classify a 224 × 224 image crop. Therefore, it is very challenging to deploy deep CNNs on devices with limited computation and power budgets. | 1702.03044#5 |
1702.03118 | 5 | To evaluate our proposed approach, we first test the performance of shallow network agents with SiLU, ReLU, dSiLU, and sigmoid hidden units in stochastic SZ-Tetris, which is a simplified but difficult version of Tetris. The best agent, the dSiLU network agent, improves the average state-of-the-art score by 20 %. In stochastic SZ-Tetris, we also train deep network agents using raw board configurations as states. An agent with SiLUs in the convolutional layers and dSiLUs in the fully-connected layer (SiLU-dSiLU) outperforms the previous state-of-the-art average final score. We thereafter train a dSiLU network agent in standard Tetris with a smaller, 10×10, board size, achieving a state-of-the-art score in this more competitive version of Tetris as well. We then test a deep SiLU-dSiLU network agent in the Atari 2600 domain. It improves the mean DQN normalized scores achieved by DQN and double DQN by 232 % and 161 %, respectively, in 12 unbiasedly selected games. We finally analyze | 1702.03118#5 |
1702.03044 | 6 | Substantial efforts have been made toward the speed-up and compression of CNNs during training, feed-forward test, or both. Among existing methods, the category of network quantization methods attracts great attention from researchers and developers. Some network quantization works try to compress pre-trained full-precision CNN models directly. Gong et al. (2014) address the storage problem of AlexNet (Krizhevsky et al., 2012) with vector quantization techniques. By replacing the weights in each of the three fully connected layers with respective floating-point centroid values obtained from the clustering, they can get over 20× model compression at about 1% loss in top-5 recognition rate. HashedNet (Chen et al., 2015b) uses a hash function to randomly map pre-trained weights into hash buckets, and all the weights in the same hash bucket are constrained to share a single floating-point value. In HashedNet, only the fully connected layers of several shallow CNN models are considered. For better compression, Han et al. (2016) present the deep compression method, which combines pruning (Han et al., 2015), vector quantization and Huffman coding, and reduce the model storage by 35× | 1702.03044#6 |
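The vector-quantization idea attributed to Gong et al. above can be sketched in its simplest scalar form: cluster the weights of a layer with k-means and store only the k floating-point centroids plus a small index per weight. The initialization, the cluster count, and the fixed iteration loop below are illustrative assumptions, not the exact procedure of that work.

```python
import numpy as np

def kmeans_weight_sharing(weights, k=16, iters=20):
    """Scalar k-means weight sharing: every weight is replaced by its cluster
    centroid, so a layer is stored as k float centroids plus a log2(k)-bit
    index per weight instead of one 32-bit float per weight."""
    w = weights.ravel()
    centroids = np.linspace(w.min(), w.max(), k)  # simple linear initialization
    for _ in range(iters):
        # assign each weight to its nearest centroid, then recompute centroids
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = w[assign == j].mean()
    return centroids[assign].reshape(weights.shape), centroids, assign
```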
1702.03044 | 7 | on AlexNet and 49× on VGG-16 (Simonyan & Zisserman, 2015). Vanhoucke et al. (2011) use an SSE 8-bit fixed-point implementation to improve the computation of neural networks on modern Intel x86 CPUs in feed-forward test, yielding a 3× speed-up over an optimized floating-point baseline. Training CNNs by substituting the 32-bit floating-point representation with a 16-bit fixed-point representation has also been explored in Gupta et al. (2015). Other seminal works attempt to restrict CNNs to low-precision versions during the training phase. Soudry et al. (2014) propose expectation backpropagation (EBP) to estimate the posterior distribution of deterministic network weights. With EBP, the network weights can be constrained to +1 and -1 during feed-forward test in a probabilistic way. BinaryConnect (Courbariaux et al., 2015) further extends the idea behind | 1702.03044#7 |
1702.03118 | 7 | # 2 Method
# 2.1 TD(λ) and Sarsa(λ)
In this study, we use two reinforcement learning algorithms: TD(λ) (Sutton, 1988) and Sarsa(λ) (Rummery and Niranjan, 1994; Sutton, 1996). TD(λ) learns an estimate of the state-value function, V^π, while the agent follows policy π. If the approximated value functions, Vt ≈ V^π, are parameterized by the parameter vector θt, then the gradient-descent learning update of the parameters is computed by
θt+1 = θt + αδtet,  (1)

where the TD-error, δt, is

δt = rt + γVt(st+1) − Vt(st)  (2)

for TD(λ) and

δt = rt + γQt(st+1, at+1) − Qt(st, at)  (3)

for Sarsa(λ). The eligibility trace vector, et, is

et = γλet−1 + ∇θtVt(st), e0 = 0,  (4)

for TD(λ) and

et = γλet−1 + ∇θtQt(st, at), e0 = 0,  (5) | 1702.03118#7 |
1702.03044 | 8 | EBP to binarize network weights during the training phase directly. It has two versions of network weights: floating-point and binary. The floating-point version is used as the reference for weight binarization. BinaryConnect achieves state-of-the-art accuracy using shallow CNNs for small datasets such as MNIST (LeCun et al., 1998) and CIFAR-10. Later on, a series of efforts have been invested in training CNNs with low-precision weights, low-precision activations and even low-precision gradients, including but not limited to BinaryNet (Courbariaux et al., 2016), XNOR-Net (Rastegari et al., 2016), ternary weight network (TWN) (Li & Liu, 2016), DoReFa-Net (Zhou et al., 2016) and quantized neural network (QNN) (Hubara et al., 2016). | 1702.03044#8 |
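The two-copy scheme described above can be sketched as follows: the forward and backward passes use binarized weights, while the gradient update is applied to the full-precision reference copy. The deterministic sign rule, the clipping range, and the grad_fn callback are illustrative assumptions rather than the exact BinaryConnect recipe (which also has a stochastic binarization variant).

```python
import numpy as np

def binary_connect_step(w_float, grad_fn, lr=0.01):
    """One BinaryConnect-style update: binarize the reference weights for the
    forward/backward pass, then apply the resulting gradient to the
    full-precision copy, which is what keeps learning possible."""
    w_bin = np.where(w_float >= 0.0, 1.0, -1.0)        # deterministic sign binarization
    grad = grad_fn(w_bin)                              # gradient evaluated at binary weights
    w_float = np.clip(w_float - lr * grad, -1.0, 1.0)  # clipping keeps the reference bounded
    return w_float, w_bin
```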
1702.03118 | 8 | for Sarsa(λ). Here, st is the state at time t, at is the action selected at time t, rt is the reward for taking action at in state st, α is the learning rate, γ is the discount factor of future rewards, λ is the trace-decay rate, and ∇θtVt and ∇θtQt are the vectors of partial derivatives of the function approximators with respect to each component of θt.
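A minimal sketch of one learning step built from Equations 1-5, assuming a linear function approximator Vt(s) = θt·s so that ∇θtVt(st) = st; the paper itself uses neural networks, for which the gradient would instead come from backpropagation.

```python
import numpy as np

def td_lambda_step(theta, e, s, s_next, r, alpha, gamma, lam):
    """One TD(lambda) update (Eqs. 1, 2 and 4) for a linear value
    function V(s) = theta @ s, so that grad_theta V(s) = s."""
    delta = r + gamma * theta @ s_next - theta @ s  # TD-error, Eq. (2)
    e = gamma * lam * e + s                         # eligibility trace, Eq. (4)
    theta = theta + alpha * delta * e               # parameter update, Eq. (1)
    return theta, e
```

The Sarsa(λ) variant is identical except that the TD-error and the trace are computed from action values Qt(st, at), as in Equations 3 and 5.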
# 2.2 Sigmoid-weighted Linear Units
In our earlier work (Elfwing et al., 2016), we proposed the EE-RBM as a function approximator in reinforcement learning. In the case of state-value based learning, given a state vector s, an EE-RBM approximates the state-value function V by the negative expected energy of an RBM (Smolensky, 1986; Freund and Haussler, 1992; Hinton, 2002) network:
V(s) = Σk zkσ(zk) + Σi bisi,  (6)

zk = Σi wiksi + bk,  (7)

σ(x) = 1/(1 + e^−x).  (8) | 1702.03118#8 |
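For concreteness, a direct NumPy transcription of Equations 6-8; the shape convention (W of size inputs × hidden units) is an illustrative assumption.

```python
import numpy as np

def ee_rbm_value(s, W, b_hidden, b_input):
    """Negative expected energy of an RBM as a state value (Eqs. 6-8):
    V(s) = sum_k z_k * sigmoid(z_k) + sum_i b_i * s_i."""
    z = W.T @ s + b_hidden                # z_k, Eq. (7): one input per hidden unit
    sig = 1.0 / (1.0 + np.exp(-z))        # sigmoid, Eq. (8)
    return np.sum(z * sig) + b_input @ s  # Eq. (6)
```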
1702.03044 | 9 | Despite these tremendous advances, CNN quantization still remains an open problem due to two critical issues which have not been well resolved yet, especially under scenarios of using low-precision weights for quantization. The first issue is the non-negligible accuracy loss for CNN quantization methods, and the other issue is the increased number of training iterations for ensuring convergence. In this paper, we attempt to address these two issues by presenting a novel incremental network quantization (INQ) method. | 1702.03044#9 |
1702.03118 | 9 | Here, zk is the input to hidden unit k, σ(·) is the sigmoid function, bi is the bias weight for input unit si, wik is the weight connecting state si and hidden unit k, and bk is the bias weight for hidden unit k. Note that Equation 6 can be regarded as the output of a one-hidden-layer feedforward neural network with hidden unit activations computed by zkσ(zk) and with uniform output weights of one.
In this study, motivated by the high performance of the EE-RBM in both the classification (Elfwing et al., 2015) and the reinforcement learning (Elfwing et al., 2016) domains, we propose the SiLU as an activation function for neural network function approximation
in reinforcement learning. The activation ak of a SiLU k for an input vector s is computed by the sigmoid function multiplied by its input:

ak(s) = zkσ(zk).  (9)
Figure 1: The activation functions of the SiLU and the ReLU (left panel), and the dSiLU and the sigmoid unit (right panel). | 1702.03118#9 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
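To make the SiLU definition in the chunk above concrete, here is a minimal NumPy sketch of Equation (9); the function names (`sigmoid`, `silu`) are illustrative and not taken from the paper's code.

```python
import numpy as np

def sigmoid(z):
    # Logistic function sigma(z) = 1 / (1 + exp(-z)).
    return 1.0 / (1.0 + np.exp(-z))

def silu(z):
    # Equation (9): a_k(s) = z_k * sigma(z_k), with z_k the unit's input.
    return z * sigmoid(z)

# ReLU-like behaviour: ~0 for large negative inputs, ~z for large positive inputs,
# with a global minimum of about -0.28 near z = -1.28.
print(silu(np.array([-6.0, -1.28, 0.0, 6.0])))  # ~[-0.015, -0.28, 0.0, 5.985]
```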
1702.03044 | 10 | In our INQ, there is no assumption on the CNN architecture, and its basic goal is to efficiently convert any pre-trained full-precision (i.e., 32-bit floating-point) CNN model into a low-precision version whose weights are constrained to be either powers of two or zero. The advantage of such low-precision models is that the original floating-point multiplication operations can be replaced by cheaper binary bit shift operations on dedicated hardware like FPGA. We noticed that most existing network quantization methods adopt a global strategy in which all the weights are simultaneously converted to low-precision ones (which are usually still of floating-point type). That is, they have not considered the different importance of network weights, leaving limited room for retaining network accuracy. In sharp contrast to existing methods, our INQ handles the accuracy drop from network quantization very carefully. To be more specific, it incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition uses a pruning-inspired measure (Han et al., 2015; Guo et al., 2016) to | 1702.03044#10 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
1702.03118 | 10 | Figure 1: The activation functions of the SiLU and the ReLU (left panel), and the dSiLU and the sigmoid unit (right panel).
For $z_k$-values of large magnitude, the activation of the SiLU is approximately equal to the activation of the ReLU (see left panel in Figure 1), i.e., the activation is approximately equal to zero for large negative $z_k$-values and approximately equal to $z_k$ for large positive $z_k$-values. Unlike the ReLU (and other commonly used activation units such as sigmoid and tanh units), the activation of the SiLU is not monotonically increasing. Instead, it has a global minimum value of approximately $-0.28$ at $z_k \approx -1.28$. An attractive feature of the SiLU is that it has a self-stabilizing property, which we demonstrated experimentally in Elfwing et al. (2015). The global minimum, where the derivative is zero, functions as a "soft floor" on the weights that serves as an implicit regularizer that inhibits the learning of weights of large magnitudes.
We propose an additional activation function for neural network function approximation: the dSiLU. The activation of the dSiLU is computed by the derivative of the SiLU:
$a_k(s) = \sigma(z_k)\,(1 + z_k(1 - \sigma(z_k)))$. (10) | 1702.03118#10 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
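A matching sketch of the dSiLU in Equation (10), again with illustrative names; the printed extrema reproduce the approximate values quoted in the chunk above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsilu(z):
    # Equation (10): sigma(z) * (1 + z * (1 - sigma(z))), the derivative of z * sigma(z).
    s = sigmoid(z)
    return s * (1.0 + z * (1.0 - s))

# Extrema quoted in the text: ~1.1 at z ~ 2.4 and ~-0.1 at z ~ -2.4.
print(dsilu(np.array([-2.4, 0.0, 2.4])))  # ~[-0.10, 0.5, 1.10]
```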
1702.03044 | 11 | weight partition, group-wise quantization and re-training. Weight partition uses a pruning-inspired measure (Han et al., 2015; Guo et al., 2016) to divide the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in our INQ. The weights in the first group are quantized to be either powers of two or zero by a variable-length encoding method, forming a low-precision base for the original model. The weights in the other group are re-trained while keeping the quantized weights fixed, compensating for the accuracy loss resulting from the quantization. Furthermore, these three operations are repeated on the | 1702.03044#11 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
1702.03118 | 11 | $a_k(s) = \sigma(z_k)\,(1 + z_k(1 - \sigma(z_k)))$. (10)
The activation function of the dSiLU looks like a steeper and "overshooting" sigmoid function (see right panel in Figure 1). The dSiLU has a maximum value of approximately 1.1 and a minimum value of approximately $-0.1$ at $z_k \approx \pm 2.4$, i.e., the solutions to the equation $z_k = -\log((z_k - 2)/(z_k + 2))$.
The derivative of the activation function of the SiLU, used for gradient-descent learning updates of the neural network weight parameters (see Equations 4 and 5), is given by
$\nabla_{w_{ik}} a_k(s) = \sigma(z_k)\,(1 + z_k(1 - \sigma(z_k)))\, s_i$, (11)
and the derivative of the activation function of the dSiLU is given by
$\nabla_{w_{ik}} a_k(s) = \sigma(z_k)(1 - \sigma(z_k))(2 + z_k(1 - \sigma(z_k)) - z_k\sigma(z_k))\, s_i$. (12)
# 2.3 Action selection | 1702.03118#11 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
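As a sanity check on Equation (11), the sketch below compares the analytic gradient of a SiLU activation with respect to an input weight against a finite-difference estimate; $z_k = w \cdot s + b$ follows the definitions in the earlier chunks, and all names and numeric values are illustrative. The same check applies, mutatis mutandis, to Equation (12) for dSiLU units.

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def silu(z):    return z * sigmoid(z)
def dsilu(z):
    s = sigmoid(z)
    return s * (1.0 + z * (1.0 - s))

w = np.array([0.3, -0.8])      # weights w_ik into one hidden unit
s_vec = np.array([1.0, 0.5])   # input s
b, eps = 0.1, 1e-6
z = w @ s_vec + b

# Equation (11): grad_{w_ik} a_k(s) = sigma(z)(1 + z(1 - sigma(z))) * s_i = dsilu(z) * s_i.
grad_analytic = dsilu(z) * s_vec
grad_numeric = np.array([(silu((w + eps * np.eye(2)[i]) @ s_vec + b) - silu(z)) / eps
                         for i in range(2)])
print(np.allclose(grad_analytic, grad_numeric, atol=1e-4))  # True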
1702.03044 | 12 | [Figure 1 graphic: the quantized portion of the weights accumulates 50% → 75% → · · · → 100% via operations (1) and (2); panels (a)–(c) are described in the caption below]
Figure 1: An overview of our incremental network quantization method. (a) Pre-trained full-precision model used as a reference. (b) Model update with three proposed operations: weight partition, group-wise quantization (green connections) and re-training (blue connections). (c) Final low-precision model with all the weights constrained to be either powers of two or zero. In the figure, operation (1) represents a single run of (b), and operation (2) denotes the procedure of repeating operation (1) on the latest re-trained weight group until all the non-zero weights are quantized. Our method does not lead to accuracy loss when using 5-bit, 4-bit and even 3-bit approximations in network quantization. For better visualization, here we just use a 3-layer fully connected network as an illustrative example, and the newly re-trained weights are divided into two disjoint groups of the same size at each run of operation (1) except the last run which only performs quantization on the re-trained floating-point weights occupying 12.5% of the model weights. | 1702.03044#12 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
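The three INQ operations described in the rows above can be summarized schematically. This is a minimal NumPy illustration, not the authors' implementation: the partition uses weight magnitude (the pruning-inspired measure), the accumulated portions follow Figure 1 (50% → 75% → 87.5% → 100%), the power-of-two rounding is a simple nearest-level placeholder for the exact rule given later (Equations (1)-(4)), and re-training is only indicated by a comment because it needs the full training loop. All function names and the default n1, n2 are assumptions.

```python
import numpy as np

def quantize_pow2(w, n1, n2):
    # Placeholder: round each value to the nearest element of
    # P_l = {±2^n1, ..., ±2^n2, 0}; the paper's exact rule is Equation (4).
    exps = np.arange(n2, n1 + 1)
    levels = np.concatenate(([0.0], 2.0 ** exps, -(2.0 ** exps)))
    return levels[np.abs(w[:, None] - levels).argmin(axis=-1)]

def inq_layer(W, portions=(0.5, 0.75, 0.875, 1.0), n1=-1, n2=-4):
    W = W.copy()
    frozen = np.zeros(W.shape, dtype=bool)   # True = already quantized
    for p in portions:
        # Weight partition: enough of the largest-magnitude re-trainable weights
        # are selected so that a fraction p of all weights is quantized in total.
        n_new = int(round(p * W.size)) - frozen.sum()
        free = np.flatnonzero(~frozen)
        order = free[np.argsort(np.abs(W.flat[free]))[::-1][:n_new]]
        # Group-wise quantization of the newly selected group.
        W.flat[order] = quantize_pow2(W.flat[order], n1, n2)
        frozen.flat[order] = True
        # Re-training would go here: update only W[~frozen], keeping W[frozen] fixed.
    return W

print(inq_layer(np.random.default_rng(0).normal(0.0, 0.2, (4, 5))))
```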
1702.03118 | 13 | For the model-based TD(λ) algorithm, we select an action $a$ in state $s$ that leads to the next state $s'$ with a probability defined as
$\pi(a|s) = \dfrac{\exp(V(f(s,a))/\tau)}{\sum_b \exp(V(f(s,b))/\tau)}$. (14)
Here, $f(s, a)$ returns the next state $s'$ according to the state transition dynamics and $\tau$ is the temperature that controls the trade-off between exploration and exploitation. We used hyperbolic annealing of the temperature, and the temperature was decreased after every episode $i$:
$\tau(i) = \dfrac{\tau_0}{1 + \tau_k i}$. (15) | 1702.03118#13 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
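A small Python sketch of the softmax action selection in Equation (14) together with the hyperbolic annealing of Equation (15). The defaults tau0 = 0.5 and tauk = 0.00025 are the SZ-Tetris settings quoted in a later chunk; the value list stands in for V(f(s, a)) over the available actions, and all names are illustrative.

```python
import numpy as np

def temperature(i, tau0=0.5, tauk=0.00025):
    # Equation (15): tau(i) = tau0 / (1 + tauk * i), annealed after every episode i.
    return tau0 / (1.0 + tauk * i)

def softmax_action(values, tau, rng=np.random.default_rng()):
    # Equation (14): pi(a|s) proportional to exp(V(f(s, a)) / tau).
    prefs = np.asarray(values) / tau
    prefs -= prefs.max()                      # shift for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return rng.choice(len(probs), p=probs)

# values[a] stands for the afterstate value V(f(s, a)) of each available action.
print(softmax_action([0.2, 1.3, 0.7], tau=temperature(10_000)))
```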
1702.03044 | 14 | The main insight of our INQ is that a compact combination of the proposed weight partition, group-wise quantization and re-training operations has the potential to get a lossless low-precision CNN model from any full-precision reference. We conduct extensive experiments on the ImageNet large scale classification task using almost all known deep CNN architectures to validate the effectiveness of our method. We show that: (1) For AlexNet, VGG-16, GoogleNet and ResNets with 5-bit quantization, INQ achieves improved accuracy in comparison with their respective full-precision baselines. The absolute top-1 accuracy gain ranges from 0.13% to 2.28%, and the absolute top-5 accuracy gain is in the range of 0.23% to 1.65%. (2) INQ has the property of easy convergence in training. In general, re-training with less than 8 epochs could consistently generate a lossless model with 5-bit weights in the experiments. (3) Taking ResNet-18 as an example, our quantized models with 4-bit, 3-bit and 2-bit ternary weights also have improved or very similar accuracy compared with its 32-bit floating-point baseline. (4) | 1702.03044#14 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
1702.03118 | 14 | Here, $\tau_0$ is the initial temperature and $\tau_k$ controls the rate of annealing.
# 3 Experiments
# 3.1 SZ-Tetris
Szita and Szepesvári (2010) proposed stochastic SZ-Tetris (Burgiel, 1997) as a benchmark for reinforcement learning that preserves the core challenges of standard Tetris but allows faster evaluation of different strategies due to shorter episodes by removing easier tetrominoes. Stochastic SZ-Tetris is played on a board of standard Tetris size with a width of 10 and a height of 20. In each time step, either an S-shaped tetromino or a Z-shaped tetromino appears with equal probability. The agent selects a rotation (lying or standing) and a horizontal position within the board. In total, there are 17 possible actions for each tetromino (9 standing and 8 lying horizontal positions). After the action selection, the tetromino drops down the board, stopping when it hits another tetromino or the bottom of the board. If a row is completed, then it disappears. The agent gets a score of +1 point for each completed row. An episode ends when a tetromino does not fit within the board. | 1702.03118#14 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
1702.03118 | 15 | For an alternating sequence of S-shaped and Z-shaped tetrominoes, the upper bound on the episode length in SZ-Tetris is 69 600 fallen pieces (Burgiel, 1997) (corresponding to a score of 27 840 points), but the maximum episode length is probably much shorter, maybe
a few thousands (Szita and Szepesvári, 2010). That means that to evaluate a good strategy SZ-Tetris requires at least five orders of magnitude less computation than standard Tetris. The standard learning approach for Tetris has been to use a model-based setting and define the evaluation function or state-value function as the linear combination of hand-coded features. Value-based reinforcement learning algorithms have a lousy track record using this approach. In regular Tetris, their reported performance levels are many magnitudes lower than black-box methods such as the cross-entropy (CE) method and evolutionary approaches. In stochastic SZ-Tetris, the reported scores for a wide variety of reinforcement learning algorithms are either approximately zero (Szita and Szepesvári, 2010) or in the single digits¹. | 1702.03118#15 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
1702.03044 | 16 | # INCREMENTAL NETWORK QUANTIZATION
In this section, we clarify the insight of our INQ, describe its key components, and detail its implementation.
2.1 WEIGHT QUANTIZATION WITH VARIABLE-LENGTH ENCODING
Suppose a pre-trained full-precision (i.e., 32-bit floating-point) CNN model can be represented by $\{W_l : 1 \le l \le L\}$, where $W_l$ denotes the weight set of the $l$th layer, and $L$ denotes the number of learnable layers in the model. To simplify the explanation, we only consider convolutional layers and fully connected layers. For CNN models like AlexNet, VGG-16, GoogleNet and ResNets as tested in this paper, $W_l$ can be a 4D tensor for the convolutional layer, or a 2D matrix for the fully connected layer. For simplicity, here the dimension difference is not considered in the expression. Given a pre-trained full-precision CNN model, the main goal of our INQ is to convert all 32-bit floating-point weights to be either powers of two or zero without loss of model accuracy. Besides, we also attempt to explore the limit of the expected bit-width under the premise of guaranteeing lossless network quantization. Here, we start with our basic network quantization method on how to
| 1702.03044#16 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
1702.03118 | 16 | Value-based reinforcement learning has had better success in stochastic SZ-Tetris when using non-linear neural network based function approximators. Faußer and Schwenker (2013) achieved a score of about 130 points using a shallow neural network function approximator with sigmoid hidden units. They improved the result to about 150 points by using an ensemble approach consisting of ten neural networks. We achieved an average score of about 200 points using three different neural network function approximators: an EE-RBM, a free energy RBM, and a standard neural network with sigmoid hidden units (Elfwing et al., 2016). Jaskowski et al. (2015) achieved the current state-of-the-art results using systematic n-tuple networks as function approximators: average scores of 220 and 218 points achieved by the evolutionary VD-CMA-ES method and TD-learning, respectively, and the best mean score in a single run of 295 points achieved by TD-learning. | 1702.03118#16 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
1702.03044 | 17 | convert $W_l$ into a low-precision version $\widehat{W}_l$, each of whose entries is chosen from
$P_l = \{\pm 2^{n_1}, \cdots, \pm 2^{n_2}, 0\}$, (1)
where $n_1$ and $n_2$ are two integer numbers satisfying $n_2 \le n_1$. Mathematically, $n_1$ and $n_2$ help to bound $P_l$ in the sense that its non-zero elements are constrained to be in the range of either $[-2^{n_1}, -2^{n_2}]$ or $[2^{n_2}, 2^{n_1}]$. That is, network weights with absolute values smaller than $2^{n_2}$ will be pruned away (i.e., set to zero) in the final low-precision model. Obviously, the problem is how to determine $n_1$ and $n_2$. In our INQ, the expected bit-width $b$ for storing the indices in $P_l$ is set beforehand, thus the only hyper-parameter that needs to be determined is $n_1$, because $n_2$ can be naturally computed once $b$ and $n_1$ are available. Here, $n_1$ is calculated by using a tricky yet practically effective formula as
$n_1 = \mathrm{floor}(\log_2(4s/3))$, (2)
where floor(·) indicates the round-down operation and $s$ is calculated by using
$s = \max(\mathrm{abs}(W_l))$, (3) | 1702.03044#17 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
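Equations (1)-(3) translate directly into code. The sketch below is illustrative, not the released implementation; it reproduces the worked example given in the next chunk (b = 3 with n1 = -1 gives n2 = -2), where s = 0.42 is an assumed maximum weight magnitude that yields n1 = -1.

```python
import math

def quantization_levels(s, b):
    # Equation (2): n1 = floor(log2(4s/3)), with s = max(abs(W_l)) from Equation (3).
    n1 = math.floor(math.log2(4.0 * s / 3.0))
    # n2 = n1 + 1 - 2^(b-1)/2: one bit encodes zero, the remaining b-1 bits
    # index the powers of two (variable-length encoding, at most 2^(b-1)+1 values).
    n2 = int(n1 + 1 - (2 ** (b - 1)) / 2)
    # Equation (1): P_l = {±2^n1, ..., ±2^n2, 0}.
    P_l = sorted({0.0} | {sign * 2.0 ** e for e in range(n2, n1 + 1)
                          for sign in (1.0, -1.0)})
    return n1, n2, P_l

print(quantization_levels(s=0.42, b=3))  # (-1, -2, [-0.5, -0.25, 0.0, 0.25, 0.5])
```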
1702.03118 | 17 | In this study, we compare the performance of different hidden activation units in two learning settings: 1) shallow network agents with one hidden layer using hand-coded state features and 2) deep network agents using raw board configurations as states, i.e., a state node is set to one if the corresponding board cell is occupied by a tetromino and set to zero otherwise.
In the setting with state features, we trained shallow network agents with SiLU, ReLU, dSiLU, and sigmoid hidden units, using the TD(λ) algorithm and softmax action selection. We used the same experimental setup as in our earlier work (Elfwing et al., 2016). The networks consisted of one hidden layer with 50 hidden units and a linear output layer. The features were similar to the original 21 features proposed by Bertsekas and Ioffe (1996), except for not including the maximum column height and using the differences in column heights instead of the absolute differences. The length of the binary state vector was 460. The shallow network agents were trained for 200,000 episodes and the experiments were repeated for ten separate runs for each type of activation unit. | 1702.03118#17 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
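To ground the setup in the chunk above, a hedged sketch of the shallow state-value network: 460 binary input features, 50 dSiLU hidden units, and a linear output. The initialization scale, the variable names, and the random test state are assumptions; the TD(λ) training loop is omitted.

```python
import numpy as np

def dsilu(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 + z * (1.0 - s))

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (50, 460))   # hidden layer: 50 dSiLU units
b1 = np.zeros(50)
w2 = rng.normal(0.0, 0.1, 50)          # linear output layer
b2 = 0.0

def V(state):
    # state: binary feature vector of length 460 (column-height differences, holes, ...).
    return w2 @ dsilu(W1 @ state + b1) + b2

print(V((rng.random(460) < 0.1).astype(float)))
```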
1702.03044 | 18 | where floor(·) indicates the round-down operation and $s$ is calculated by using $s = \max(\mathrm{abs}(W_l))$, (3)
where abs(·) is an element-wise operation and max(·) outputs the largest element of its input. In fact, Equation (2) helps to match the rounding power of 2 for $s$, and it could be easily implemented in practical programming. After $n_1$ is obtained, $n_2$ can be naturally determined as $n_2 = n_1 + 1 - 2^{(b-1)}/2$. For instance, if $b = 3$ and $n_1 = -1$, it is easy to get $n_2 = -2$. Once $P_l$ is determined, we further use the ladder of powers to convert every entry of $W_l$ into a low-precision one by using
$$\widehat{W}_l(i,j) = \begin{cases} \beta\,\mathrm{sgn}(W_l(i,j)) & \text{if } (\alpha+\beta)/2 \le \mathrm{abs}(W_l(i,j)) < 3\beta/2, \\ 0 & \text{otherwise,} \end{cases}$$ (4) | 1702.03044#18 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
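Equation (4), reconstructed above, rounds a weight to the power of two whose interval contains its magnitude and prunes everything below the lowest interval. A minimal sketch, assuming alpha and beta are adjacent magnitudes in the sorted P_l with zero included; the printed values follow from the rule itself under the b = 3 example (n1 = -1, n2 = -2).

```python
import math

def quantize(w, n1, n2):
    # Equation (4): w -> beta * sgn(w) if (alpha + beta)/2 <= |w| < 3*beta/2,
    # for adjacent magnitudes alpha < beta in sorted P_l; otherwise w -> 0.
    mags = [0.0] + [2.0 ** e for e in range(n2, n1 + 1)]
    for alpha, beta in zip(mags[:-1], mags[1:]):
        if (alpha + beta) / 2.0 <= abs(w) < 3.0 * beta / 2.0:
            return math.copysign(beta, w)
    return 0.0

# With n1 = -1, n2 = -2: 0.33 -> 2^-2, -0.42 -> -2^-1, 0.05 -> 0 (pruned).
print(quantize(0.33, -1, -2), quantize(-0.42, -1, -2), quantize(0.05, -1, -2))
```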
1702.03118 | 18 | In the deep reinforcement learning setting, we used a deep network architecture consisting of two convolutional layers with 15 and 50 filters of size 5 × 5 using a stride of 1, a fully-connected layer with 250 units, and a linear output layer. Both convolutional layers were followed by max-pooling layers with pooling windows of size 3 × 3 using a stride of 2. The deep network agents were also trained using the TD(λ) algorithm and softmax action selection. We trained three types of deep networks with: 1) SiLUs in both the convolutional and
1 http://barbados2011.rl-community.org/program/SzitaTalk.pdf
[Figure 2 plot: mean score (0–350) versus episodes (1,000s, 0–200) for the four shallow agents; the caption appears in the next chunk] | 1702.03118#18 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
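A hedged PyTorch sketch of the SiLU-dSiLU architecture described in the chunk above (conv layers with 15 and 50 filters of size 5 × 5 and stride 1, each followed by 3 × 3 max-pooling with stride 2, a 250-unit fully-connected layer, and a linear output). The chunk does not specify padding or exact input/output shapes, so `padding=2`, the 20 × 10 board input, and the single value output are assumptions; the flattened size is computed at runtime to avoid shape errors.

```python
import torch
import torch.nn as nn

def dsilu(z):
    s = torch.sigmoid(z)
    return s * (1 + z * (1 - s))

class SiLUAct(nn.Module):
    def forward(self, z):
        return z * torch.sigmoid(z)   # SiLU, Equation (9)

class SiLUdSiLUNet(nn.Module):
    # SiLUs in the convolutional layers, dSiLUs in the fully-connected layer.
    def __init__(self, board_h=20, board_w=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 15, kernel_size=5, stride=1, padding=2), SiLUAct(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(15, 50, kernel_size=5, stride=1, padding=2), SiLUAct(),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        with torch.no_grad():
            n = self.features(torch.zeros(1, 1, board_h, board_w)).numel()
        self.fc = nn.Linear(n, 250)
        self.out = nn.Linear(250, 1)   # state value V(s)

    def forward(self, x):
        return self.out(dsilu(self.fc(self.features(x).flatten(1))))

print(SiLUdSiLUNet()(torch.zeros(2, 1, 20, 10)).shape)  # torch.Size([2, 1])
```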
1702.03044 | 19 | where $\alpha$ and $\beta$ are two adjacent elements in the sorted $P_l$, making the above equation a numerical rounding to the quantum values. It should be emphasized that factor 4/3 in Equation (2) is set to make sure that all the elements in $P_l$ correspond with the quantization rule defined in Equation (4). In other words, factor 4/3 in Equation (2) highly correlates with factor 3/2 in Equation (4).
Here, an important thing we want to clarify is the definition of the expected bit-width $b$. Taking 5-bit quantization as an example, since zero cannot be written as a power of two, we use 1 bit to represent zero, and the remaining 4 bits to represent at most 16 different values for the powers of two. That is, the number of candidate quantum values is at most $2^{b-1} + 1$, so our quantization method actually adopts a variable-length encoding scheme. It is clear that the quantization described above is performed in a linear scale. An alternative solution is to perform the quantization in the log scale. Although it may also be effective, it should be a little bit more difficult in implementation and may cause some extra computational overhead in comparison to our method.
INCREMENTAL QUANTIZATION STRATEGY | 1702.03044#19 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
1702.03118 | 19 | Figure 2: Learning curves in stochastic SZ-Tetris for the four types of shallow neural network agents. The figure shows the average scores over ten separate runs (thick solid lines) and the scores of individual runs (thin dashed lines). The mean scores were computed over every 1,000 episodes.
fully-connected layers (SiLU-SiLU); 2) ReLUs in both the convolutional and fully-connected layers (ReLU-ReLU); and 3) SiLUs in the convolutional layers and dSiLUs in the fully-connected layer (SiLU-dSiLU). The deep network agents were trained for 200,000 episodes and the experiments were repeated for five separate runs for each type of network.
Figure 3: Average learning curves in stochastic SZ-Tetris for the three types of deep neural network agents. The figure shows the average scores over five separate runs, computed over every 1,000 episodes.
We used the following reward function (proposed by Faußer and Schwenker (2013)):
$r(s) = e^{-(\text{number of holes in } s)/33}$. (16) | 1702.03118#19 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
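Equation (16) is one line of code; below is a hedged Python sketch (`reward` is an illustrative name, the decay constant 33 is from the chunk above).

```python
import math

def reward(num_holes):
    # Equation (16): r(s) = exp(-(number of holes in s) / 33).
    return math.exp(-num_holes / 33.0)

# A hole-free board gives the maximal reward of 1; rewards decay smoothly with holes.
print(reward(0), round(reward(33), 3))  # 1.0 0.368
```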
1702.03118 | 20 | $r(s) = e^{-(\text{number of holes in } s)/33}$. (16)
We set γ to 0.99, λ to 0.55, $\tau_0$ to 0.5, and $\tau_k$ to 0.00025. We used a rough grid-like search to find appropriate values of the learning rate α, and it was determined to be 0.001 for the four shallow network agents and 0.0001 for the three deep network agents.
Table 1: Average scores (± standard deviations) achieved in stochastic SZ-Tetris, computed over the final 1,000 episodes for all runs and the best single runs.
Network      Final average score   Final best score
Shallow networks
  SiLU         214 ± 74              253 ± 83
  ReLU         191 ± 58              227 ± 76
  dSiLU        263 ± 80              320 ± 87
  Sigmoid      232 ± 75              293 ± 73
Deep networks
  SiLU-SiLU    217 ± 53              219 ± 54
  ReLU-ReLU    215 ± 54              217 ± 52
  SiLU-dSiLU   229 ± 55              235 ± 54
 | 1702.03118#20 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
1702.03044 | 21 | In the literature, there are many existing network quantization works such as HashedNet (Chen et al., 2015b), vector quantization (Gong et al., 2014), fixed-point representation (Vanhoucke et al., 2011; Gupta et al., 2015), BinaryConnect (Courbariaux et al., 2015), BinaryNet (Courbariaux et al., 2016), XNOR-Net (Rastegari et al., 2016), TWN (Li & Liu, 2016), DoReFa-Net (Zhou et al., 2016) and QNN (Hubara et al., 2016). Similar to our basic network quantization method, they also suffer from non-negligible accuracy loss on deep CNNs, especially when applied to the ImageNet large-scale classification dataset. For all these methods, a common fact is that they adopt a global strategy in which all the weights are simultaneously converted into low-precision ones, which in turn causes accuracy loss. Compared with the methods focusing on pre-trained models, the accuracy loss becomes worse for methods such as XNOR-Net, TWN, DoReFa-Net and QNN, which intend to train low-precision CNNs from scratch. | 1702.03044#21 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
targeting to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, as benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible to form a low-precision base, thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible to compensate for the accuracy loss from the quantization, thus
they are the ones to be re-trained. On the other hand, these three operations
are repeated on the latest re-trained group in an iterative manner until all
the weights are converted into low-precision ones, acting as an incremental
network quantization and accuracy enhancement procedure. Extensive experiments
on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the
efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references. Taking
ResNet-18 as an example, we further show that our quantized models with 4-bit,
3-bit and 2-bit ternary weights have improved or very similar accuracy against
its 32-bit floating-point baseline. Besides, impressive results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
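To make the "powers of two or zero" constraint in the abstract above concrete, here is a hedged sketch of rounding weights to the nearest entry of such a codebook. This is our own illustration, not the authors' released code (see the GitHub link in the row above); the exponent range and the nearest-value rounding rule are our assumptions, and INQ's actual rule may differ.

```python
import numpy as np

def power_of_two_codebook(n1, n2):
    """Candidate values {0, +/-2^n2, ..., +/-2^n1} for integers n1 >= n2."""
    mags = [2.0 ** k for k in range(n2, n1 + 1)]
    return np.array([0.0] + mags + [-m for m in mags])

def quantize_to_codebook(w, n1=0, n2=-3):
    """Map each weight to its nearest codebook entry (illustration only)."""
    codebook = power_of_two_codebook(n1, n2)
    # broadcast to (num_weights, codebook_size), pick the closest entry
    idx = np.argmin(np.abs(np.asarray(w)[..., None] - codebook), axis=-1)
    return codebook[idx]

# Example with values like those shown in the paper's Figure 2
print(quantize_to_codebook(np.array([0.33, -0.20, 0.01, 0.87])))
```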
1702.03118 | 21 | Figure 2 shows the average learning curves as well as learning curves for the individual runs for the shallow networks, Figure 3 shows the average learning curves for the deep networks, and the final results are summarized in Table 1. The results show significant differences (p < 0.0001) in final average score between all four shallow agents. The networks with bounded hidden units (dSiLU and sigmoid) outperformed the networks with unbounded units (SiLU and ReLU), the SiLU network outperformed the ReLU network, and the dSiLU network outperformed the sigmoid network. The final average score (best score) of 263 (320) points achieved by the dSiLU network agent is a new state-of-the-art score, improving the previous best performance by 43 (25) points or 20 % (8 %). In the deep learning setting, the SiLU-dSiLU network significantly (p < 0.0001) outperformed the other two networks and the average final score of 229 points is better than the previous state-of-the-art of 220 points. There was no significant difference (p = 0.32) | 1702.03118#21 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
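The "softmax action selection with simple annealing" mentioned in the abstract above can be sketched as follows; this is a generic Boltzmann-exploration illustration under our own naming and annealing schedule, not the paper's exact hyperparameters.

```python
import numpy as np

def softmax_action(q_values, tau):
    """Sample an action with probability proportional to exp(Q(s, a) / tau)."""
    prefs = np.asarray(q_values, dtype=float) / tau
    prefs -= prefs.max()            # shift for numerical stability
    probs = np.exp(prefs)
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Simple annealing: decay the temperature toward a floor across episodes,
# so early exploration gradually gives way to exploitation.
tau, tau_min, decay = 1.0, 0.05, 0.999
for episode in range(1000):
    tau = max(tau_min, tau * decay)
    # ... run one episode, selecting actions via softmax_action(Q[s], tau) ...
```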
1702.03044 | 23 | [Extraction residue from Figure 2: cells of an example weight matrix in which full-precision values (e.g., 0.01, -0.20, 0.33) are progressively replaced by power-of-two values (±2^0, ±2^-1, ±2^-2, ±2^-3) or zero; no running text is recoverable from this chunk.] | 1702.03044#23 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
which aims to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods, which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible for forming a low-precision base, and thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible for compensating for the accuracy loss from the quantization,
and thus they are the ones to be re-trained. On the other hand, these three
operations are repeated on the latest re-trained group in an iterative manner
until all the weights are converted into low-precision ones, acting as an
incremental network quantization and accuracy enhancement procedure. Extensive
experiments on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well demonstrate
the efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references.
Taking ResNet-18 as an example, we further show that our quantized models with
4-bit, 3-bit and 2-bit ternary weights achieve improved or very similar
accuracy compared with the 32-bit floating-point baseline. Besides, impressive
results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
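As a pointer to what the residue above depicted: INQ constrains each layer's quantized weights to a set of powers of two plus zero. Following the paper's description (the exact notation here is ours), the candidate set for layer $l$ can be written as $P_l = \{\pm 2^{n_1}, \ldots, \pm 2^{n_2}, 0\}$ with integers $n_1 \ge n_2$ bounding the largest and smallest representable magnitudes, so that multiplications by quantized weights reduce to binary shifts in hardware.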
1702.03118 | 23 | # 3.2 10×10 Tetris
The result achieved by the dSiLU network agent in stochastic SZ-Tetris is impressive, but we cannot compare the result with the methods that have achieved the highest performance levels in standard Tetris because those methods have not been applied to stochastic SZ-Tetris. Furthermore, it is not feasible to apply our method to Tetris with a standard board height of 20, because of the prohibitively long learning time. The current state-of-the-art for a single run of an algorithm, achieved by the CBMPI algorithm (Gabillon et al., 2013; Scherrer et al., 2015), is a mean score of 51 million cleared lines. However, for the best
methods applied to Tetris, there are reported results for a smaller, 10×10, Tetris board, and in this case, the learning time for our method is long, but not prohibitively so.
[Figure residue: learning-curve plot for 10×10 Tetris; y-axis "Mean score" (0 to 6000), x-axis "Episodes (10,000s)" (0 to 40), with a dashed reference line marking the CBMPI benchmark.] | 1702.03118#23 | Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning | In recent years, neural networks have enjoyed a renaissance as function
approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon
achieved near top-level human performance in backgammon, the deep reinforcement
learning algorithm DQN achieved human-level performance in many Atari 2600
games. The purpose of this study is twofold. First, we propose two activation
functions for neural network function approximation in reinforcement learning:
the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU).
The activation of the SiLU is computed by the sigmoid function multiplied by
its input. Second, we suggest that the more traditional approach of using
on-policy learning with eligibility traces, instead of experience replay, and
softmax action selection with simple annealing can be competitive with DQN,
without the need for a separate target network. We validate our proposed
approach by, first, achieving new state-of-the-art results in both stochastic
SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$)
learning and shallow dSiLU network agents, and, then, by outperforming DQN in
the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and
dSiLU hidden units. | http://arxiv.org/pdf/1702.03118 | Stefan Elfwing, Eiji Uchibe, Kenji Doya | cs.LG | 18 pages, 22 figures; added deep RL results for SZ-Tetris | null | cs.LG | 20170210 | 20171102 | [] |
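For readers unfamiliar with eligibility traces, the following tabular Sarsa(λ) loop sketches the kind of on-policy TD(λ) update the abstract above refers to. It is a textbook-style illustration under our own assumptions (a tabular Q indexed by integer states, a placeholder env exposing reset() and step(), and the softmax_action helper sketched earlier), not the paper's network-based agent.

```python
import numpy as np

def sarsa_lambda_episode(env, Q, alpha=0.1, gamma=0.99, lam=0.8, tau=0.5):
    """One episode of tabular Sarsa(lambda) with accumulating traces.
    `env` is assumed to expose reset() -> state and
    step(action) -> (next_state, reward, done)."""
    E = np.zeros_like(Q)                   # eligibility traces
    s = env.reset()
    a = softmax_action(Q[s], tau)
    done = False
    while not done:
        s2, r, done = env.step(a)
        a2 = softmax_action(Q[s2], tau) if not done else 0
        delta = r + (0.0 if done else gamma * Q[s2, a2]) - Q[s, a]
        E[s, a] += 1.0                     # mark the visited state-action pair
        Q += alpha * delta * E             # credit all recently visited pairs
        E *= gamma * lam                   # decay all traces
        s, a = s2, a2
    return Q
```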
1702.03044 | 24 | [Extraction residue from Figure 2 (continued): further weight-matrix cells, now mostly quantized to powers of two (±2^0 down to ±2^-3) or zero, with a few remaining full-precision values; no running text is recoverable from this chunk.] | 1702.03044#24 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
which aims to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods, which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible for forming a low-precision base, and thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible for compensating for the accuracy loss from the quantization,
and thus they are the ones to be re-trained. On the other hand, these three
operations are repeated on the latest re-trained group in an iterative manner
until all the weights are converted into low-precision ones, acting as an
incremental network quantization and accuracy enhancement procedure. Extensive
experiments on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well demonstrate
the efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references.
Taking ResNet-18 as an example, we further show that our quantized models with
4-bit, 3-bit and 2-bit ternary weights achieve improved or very similar
accuracy compared with the 32-bit floating-point baseline. Besides, impressive
results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
1702.03044 | 25 | Figure 2: Result illustrations. First row: results from the 1st iteration of the proposed three operations. The top left cube illustrates the weight partition operation generating two disjoint groups, the middle image illustrates the quantization operation on the first weight group (green cells), and the top right cube illustrates the re-training operation on the second weight group (light blue cells). Second row: results from the 2nd, 3rd and 4th iterations of the INQ. In the figure, the accumulated portion of the weights which have been quantized grows from 50% → 75% → 87.5% → 100%.
handling of the strategy for suppressing the resulting quantization loss in model accuracy. We are partially inspired by the latest progress in network pruning (Han et al., 2015; Guo et al., 2016). In these methods, the accuracy loss from removing less important network weights of a pre-trained neural network model can be well compensated by subsequent re-training steps. Therefore, we conjecture that the changing nature of network weight importance is critical to achieving lossless network quantization. | 1702.03044#25 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | This paper presents incremental network quantization (INQ), a novel method,
which aims to efficiently convert any pre-trained full-precision convolutional
neural network (CNN) model into a low-precision version whose weights are
constrained to be either powers of two or zero. Unlike existing methods, which
struggle with noticeable accuracy loss, our INQ has the potential to resolve
this issue, benefiting from two innovations. On one hand, we introduce three
interdependent operations, namely weight partition, group-wise quantization and
re-training. A well-proven measure is employed to divide the weights in each
layer of a pre-trained CNN model into two disjoint groups. The weights in the
first group are responsible for forming a low-precision base, and thus they are
quantized by a variable-length encoding method. The weights in the other group
are responsible for compensating for the accuracy loss from the quantization,
and thus they are the ones to be re-trained. On the other hand, these three
operations are repeated on the latest re-trained group in an iterative manner
until all the weights are converted into low-precision ones, acting as an
incremental network quantization and accuracy enhancement procedure. Extensive
experiments on the ImageNet classification task using almost all known deep CNN
architectures including AlexNet, VGG-16, GoogleNet and ResNets well demonstrate
the efficacy of the proposed method. Specifically, at 5-bit quantization, our
models achieve improved accuracy over the 32-bit floating-point references.
Taking ResNet-18 as an example, we further show that our quantized models with
4-bit, 3-bit and 2-bit ternary weights achieve improved or very similar
accuracy compared with the 32-bit floating-point baseline. Besides, impressive
results with the
combination of network pruning and INQ are also reported. The code is available
at https://github.com/Zhouaojun/Incremental-Network-Quantization. | http://arxiv.org/pdf/1702.03044 | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | cs.CV, cs.AI, cs.NE | Published by ICLR 2017, and the code is available at
https://github.com/Zhouaojun/Incremental-Network-Quantization | null | cs.CV | 20170210 | 20170825 | [
{
"id": "1605.04711"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1602.02830"
},
{
"id": "1603.05279"
}
] |
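The Figure 2 caption above describes INQ's three operations repeated over a growing accumulated quantized portion (50% → 75% → 87.5% → 100%). A hedged sketch of that incremental schedule follows; it is our own illustration: a magnitude-based partition stands in for the paper's "well-proven measure", quantize_to_codebook is the helper sketched earlier, and the re-training step is left as a stub.

```python
import numpy as np

def inq_schedule(weights, portions=(0.5, 0.75, 0.875, 1.0)):
    """Incrementally quantize a weight array: at each step the accumulated
    quantized portion grows to the next target fraction, while the remaining
    full-precision weights would be re-trained to compensate."""
    flat = weights.astype(float).ravel().copy()
    quantized = np.zeros(flat.size, dtype=bool)
    for p in portions:
        target = int(round(p * flat.size))
        # weight partition: among still-float weights, pick the largest
        # magnitudes until the accumulated quantized count reaches `target`
        float_idx = np.flatnonzero(~quantized)
        need = max(0, target - int(quantized.sum()))
        pick = float_idx[np.argsort(-np.abs(flat[float_idx]))[:need]]
        flat[pick] = quantize_to_codebook(flat[pick])  # group-wise quantization
        quantized[pick] = True
        # ... re-train the weights where `quantized` is False here ...
    return flat.reshape(weights.shape)
```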