doi stringlengths 10–10 | chunk-id int64 0–936 | chunk stringlengths 401–2.02k | id stringlengths 12–14 | title stringlengths 8–162 | summary stringlengths 228–1.92k | source stringlengths 31–31 | authors stringlengths 7–6.97k | categories stringlengths 5–107 | comment stringlengths 4–398 ⌀ | journal_ref stringlengths 8–194 ⌀ | primary_category stringlengths 5–17 | published stringlengths 8–8 | updated stringlengths 8–8 | references list
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1706.05125 | 28 | Firstly, we see that the RL and ROLLOUTS models achieve significantly better results when negotiating with the LIKELIHOOD model, particularly the RL+ROLLOUTS model. The percentage of Pareto optimal solutions also increases, showing a better exploration of the solution space. Compared to human-human negotiations (Table 2), the best models achieve a higher agreement rate, better scores, and similar Pareto efficiency. This result confirms that attempting to maximise reward can outperform simply imitating humans.
A negative consequence of this more aggressive negotiation strategy is that humans were more likely to walk away with no deal, which is reflected in the lower agreement rates. Even though failing to agree was worth 0 points, people often preferred this course over capitulating to an uncompromising opponent, a factor not well captured by the simulated partner in reinforcement learning training or rollouts (as reflected by the larger gains from goal-based models in dialogues with the LIKELIHOOD model). In particular, the goal-based models are prone to simply rephrasing the same demand each turn, which is a more effective strategy against the LIKELIHOOD model than humans. Future work should address this issue. | 1706.05125#28 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 | [
{
"id": "1606.03152"
},
{
"id": "1510.03055"
},
{
"id": "1703.04908"
},
{
"id": "1511.08099"
},
{
"id": "1604.04562"
},
{
"id": "1611.08669"
},
{
"id": "1703.06585"
},
{
"id": "1606.01541"
},
{
"id": "1605.07683"
},
{
"id": "1506.05869"
}
] |
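The Pareto-efficiency metric reported in the chunk above can be checked by brute force for small item pools: a division of items is Pareto optimal if no other division improves one agent's score without lowering the other's. A minimal sketch (the function names and toy numbers are our own, not the paper's code; the values mirror the Figure 5 scenario):

```python
from itertools import product

def score(allocation, values):
    """Points an agent earns for the items it receives."""
    return sum(n * v for n, v in zip(allocation, values))

def is_pareto_optimal(split_a, counts, values_a, values_b):
    """True if no other division of the items raises one agent's score
    without lowering the other's. split_a lists how many of each item
    type agent A keeps; agent B receives the remainder."""
    split_b = [c - a for c, a in zip(counts, split_a)]
    base_a, base_b = score(split_a, values_a), score(split_b, values_b)
    for alt_a in product(*(range(c + 1) for c in counts)):
        alt_b = [c - a for c, a in zip(counts, alt_a)]
        sa, sb = score(alt_a, values_a), score(alt_b, values_b)
        if (sa > base_a and sb >= base_b) or (sa >= base_a and sb > base_b):
            return False  # found a division that dominates the agreed one
    return True

# Figure 5's scenario: 1 book, 1 hat, 3 balls.
counts = [1, 1, 3]
model_values = [6, 4, 0]   # book=6, hat=4, ball=0 for RL+ROLLOUTS
human_values = [3, 1, 2]   # book=3, hat=1, ball=2 for the human
# Agreed deal: the model (agent A) takes the book and the hat.
print(is_pareto_optimal([1, 1, 0], counts, model_values, human_values))  # → True
```

Enumerating all splits is exponential in the number of item types, but the bargaining task here has only three, so the check is instant.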
1706.05125 | 29 | Figure 5 shows an example of our goal-based model stubbornly negotiating until it achieves a good outcome.
Models learn to be deceptive. Deception can be an effective negotiation tactic. We found numerous cases of our models initially feigning interest in a valueless item, only to later "compromise" by conceding it. Figure 7 shows an example.
Similar trends hold in dialogues with humans, with goal-based reasoning outperforming imitation learning. The ROLLOUTS model achieves comparable scores to its human partners, and the RL+ROLLOUTS model actually achieves higher scores. However, we also find significantly more cases of the goal-based models failing to agree a deal with humans, largely a consequence of their more aggressive negotiation tactics (see §7).
Models produce meaningful novel sentences. One interesting question is whether our models are capable of generating novel sentences in the new circumstances they find themselves in, or if they simply repeat messages from the training data verbatim. We find that 76% of messages produced by the LIKELIHOOD model in self-play were found in the training data. We manually examined the novel | 1706.05125#29 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
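The 76% verbatim-repeat statistic above amounts to a membership check of generated messages against the training corpus. A sketch of how such a measurement could be run (the normalisation rule and the toy messages are our assumptions, not the paper's procedure):

```python
def verbatim_fraction(generated, training):
    """Fraction of generated messages that appear word-for-word in the
    training corpus, after lowercasing and collapsing whitespace."""
    norm = lambda s: " ".join(s.lower().split())
    seen = {norm(m) for m in training}
    hits = sum(norm(m) in seen for m in generated)
    return hits / len(generated) if generated else 0.0

training_msgs = ["i'll take the hat", "deal", "you can have the balls"]
generated_msgs = [
    "Deal",                           # verbatim up to casing
    "i'll take the hat",              # verbatim
    "give me two books and a hat",    # novel
    "you can  have the balls",        # verbatim up to whitespace
]
print(verbatim_fraction(generated_msgs, training_msgs))  # → 0.75
```

Messages that fail this check are the "novel" utterances whose fluency the paper then inspects by hand.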
1706.05125 | 30 | [Figure 5 dialogue (speaker labels garbled in extraction). Input: Human 1xbook value=3, 1xhat value=1, 3xball value=2; RL+ROLLOUTS 1xbook value=6, 1xhat value=4, 3xball value=0. Turns: "You can have the balls if i can have the hat and book" / "Nope, give me the books and you can have everything else" / "Ok. I will take the book and one hat" / "I get all the balls?" / "Yes. I can do that." / "Ok, deal". Output: RL+ROLLOUTS 1xbook 1xhat (reward 10/10); Human 3xball (reward 6/10).]
Figure 5: Example of a model stubbornly holding out for maximum points in a dialogue.
[Figure 6 dialogue; only fragments are recoverable: "Ok, you can have one book and one ball" / "Deal".]
Figure 6: Example of model compromising. | 1706.05125#30 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
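The per-agent rewards shown in these dialogue figures follow directly from each agent's private item values: a deal scores the claimed items against the agent's values, and incompatible claims mean no deal and 0 points for both. A sketch of that scoring rule (function and variable names are ours):

```python
def negotiation_rewards(counts, values_a, values_b, claim_a, claim_b):
    """Score both agents' claims. If the claims do not exactly divide
    the available items, the agents failed to agree and both score 0."""
    if any(a + b != c for a, b, c in zip(claim_a, claim_b, counts)):
        return 0, 0  # no deal
    reward = lambda claim, values: sum(n * v for n, v in zip(claim, values))
    return reward(claim_a, values_a), reward(claim_b, values_b)

# Figure 5: 1 book, 1 hat, 3 balls; the model takes book+hat, the human the balls.
counts = [1, 1, 3]
model_values, human_values = [6, 4, 0], [3, 1, 2]
print(negotiation_rewards(counts, model_values, human_values,
                          [1, 1, 0], [0, 0, 3]))  # → (10, 6)
```

The 0-point no-deal outcome is exactly the case the paper notes humans sometimes preferred over capitulating to an aggressive model.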
1706.05125 | 31 | Figure 6: Example of model compromising.
utterances produced by our model, and found that the overwhelming majority were fluent English sentences in isolation, showing that the model has learnt a good language model for the domain (in addition to results that show it uses language effectively to achieve its goals). These results suggest that although neural models are prone to the safer option of repeating sentences from training data, they are capable of generalising when necessary. Future work should choose domains that force a higher degree of diversity in utterances.
Maintaining multi-sentence coherence is challenging. One common linguistic error we see RL+ROLLOUTS make is to start a message by indicating agreement (e.g. I agree or Deal), but then going on to propose a counter offer, a behaviour that human partners found frustrating. One explanation is that the model has learnt that in the supervised data, messages beginning with I agree are often at the end of the dialogue, and partners rarely reply with further negotiation, so the models using rollouts and reinforcement learning believe this tactic will help their offer to be accepted.
# 8 Related Work | 1706.05125#31 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
1706.05125 | 32 | # 8 Related Work
Most work on goal orientated dialogue systems has assumed that state representations are anno- [Figure 7 dialogue. Input: RL+ROLLOUTS 4xbook value=0, 2xhat value=5, 1xball value=0; Human 4xbook value=1, 2xhat value=2, 1xball value=2. RL+ROLLOUTS: You can have the ball if I can have everything else. Human: No can't do that the ball is not going to give me anything. RL+ROLLOUTS: I need the hats and you can have the rest. Human: Deal. RL+ROLLOUTS: Great! Thank you! Output: RL+ROLLOUTS 2xhat (reward 10/10); Human 4xbook 1xball (reward 6/10).]
Figure 7: Dialogue in which the model's initial interest in the valueless books allows it to compromise while achieving a maximum score. | 1706.05125#32 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
1706.05125 | 33 | Figure 7: Dialogue in which the model's initial interest in the valueless books allows it to compromise while achieving a maximum score.
tated in the training data (Williams and Young, 2007; Henderson et al., 2014; Wen et al., 2016). The use of state annotations allows a cleaner separation of the reasoning and natural language aspects of dialogues, but our end-to-end approach makes data collection cheaper and allows tasks where it is unclear how to annotate state. Bordes and Weston (2016) explore end-to-end goal orientated dialogue with a supervised model; we show improvements over supervised learning with goal-based training and decoding. Recently, He et al. (2017) use task-specific rules to combine the task input and dialogue history into a more structured state representation than ours. | 1706.05125#33 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
learning (RL) has been applied in many dialogue settings. RL has been widely used to improve dialogue managers, which manage transitions between dialogue states (Singh et al., 2002; Pietquin et al., 2011; Rieser and Lemon, 2011; Gašić et al., 2013; Fatemi et al., 2016). In contrast, our end-to-end approach has no explicit dialogue manager. Li et al. (2016) improve metrics such as diversity for non-goal-orientated dialogue using RL, which would make an interesting extension to our work. Das et al. (2017) use reinforcement learning to improve cooperative bot-bot dialogues. RL has also been used to allow agents to invent new languages (Das et al., 2017; Mordatch and Abbeel, 2017). To our knowledge, our model is the first to use RL to improve the performance of an end-to-end goal orientated dialogue system in dialogues with humans.
Work on learning end-to-end dialogues has concentrated on "chat" settings, without explicit goals (Ritter et al., 2011; Vinyals and Le, 2015; Li et al., 2015). These dialogues contain a much greater diversity of vocabulary than our domain, but do not | 1706.05125#34 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
There is a substantial literature on multi-agent bargaining in game theory, e.g. Nash Jr (1950). There has also been computational work on modelling negotiations (Baarslag et al., 2013); our work differs in that agents communicate in unrestricted natural language, rather than pre-specified symbolic actions, and in our focus on improving performance relative to humans rather than other automated systems. Our task is based on that of DeVault et al. (2015), who study natural language negotiations for pedagogical purposes; their version includes speech rather than textual dialogue, and embodied agents, which would make interesting extensions to our work. The only automated natural language negotiation systems we are aware of have first mapped language to domain-specific logical forms, and then focused on choosing the next dialogue act (Rosenfeld et al., 2014; Cuayáhuitl et al., 2015; Keizer et al., 2017). Our end-to-end approach is the first to learn comprehension, reasoning and generation skills in a domain-independent, data-driven way. | 1706.05125#36 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
Our use of a combination of supervised and reinforcement learning for training, and stochastic rollouts for decoding, builds on strategies used in game-playing agents such as AlphaGo (Silver et al., 2016). Our work is a step towards real-world applications for these techniques. Our use of rollouts could be extended by choosing the other agent's responses based on sampling, using Monte Carlo Tree Search (MCTS) (Kocsis and Szepesvári, 2006). However, our setting has a higher branching factor than in domains where MCTS has been successfully applied, such as Go (Silver et al., 2016); future work should explore scaling tree search to dialogue modelling.
# 9 Conclusion | 1706.05125#37 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
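The rollout decoding discussed in the chunk above (and its sampling-based MCTS extension) amounts to scoring each candidate utterance by the average reward of simulated completions of the dialogue, then emitting the argmax. A toy sketch, with a deterministic stand-in simulator of our own invention rather than the paper's learned model:

```python
def choose_by_rollout(candidates, simulate, n_rollouts=20):
    """Return the candidate utterance whose simulated dialogue
    continuations yield the highest average final reward.
    `simulate(utterance)` stands in for sampling one complete
    continuation from the dialogue model and scoring the deal reached."""
    def expected_reward(utterance):
        return sum(simulate(utterance) for _ in range(n_rollouts)) / n_rollouts
    return max(candidates, key=expected_reward)

# Toy simulator (our assumption, not the paper's model): the aggressive
# demand scores 10 when accepted but usually ends in no deal (0 points).
calls = {"n": 0}
def toy_simulate(utterance):
    if utterance == "give me everything":
        calls["n"] += 1
        return 10 if calls["n"] % 3 == 0 else 0  # accepted one time in three
    return 6  # the modest offer is always accepted

best = choose_by_rollout(["give me everything", "i'll take the hat"], toy_simulate)
print(best)  # the compromise wins on expected reward (6.0 vs 3.0)
```

Replacing the flat average with a search tree over sampled partner responses is precisely the MCTS extension the paper leaves to future work.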
1706.05125 | 38 | # 9 Conclusion
We have introduced end-to-end learning of natural language negotiations as a task for AI, arguing that it challenges both linguistic and reasoning skills while having robust evaluation metrics. We gathered a large dataset of human-human negotiations, which contain a variety of interesting tactics. We have shown that it is possible to train dialogue agents end-to-end, but that their ability can be much improved by training and decoding to maximise their goals, rather than likelihood. There remains much potential for future work, particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language. We will also explore other negotiation tasks, to investigate whether models can learn to share negotiation strategies across domains.
# Acknowledgments
We would like to thank Luke Zettlemoyer and the anonymous EMNLP reviewers for their insightful comments, and the Mechanical Turk workers who helped us collect data.
# References
Nicholas Asher, Alex Lascarides, Oliver Lemon, Markus Guhe, Verena Rieser, Philippe Muller, Stergos Afantenos, Farah Benamara, Laure Vieu, Pascal Denis, et al. 2012. Modelling Strategic Conversation: The STAC project. Proceedings of SemDial page 27. | 1706.05125#38 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
1706.05125 | 39 | Tim Baarslag, Katsuhide Fujita, Enrico H Gerding, Koen Hindriks, Takayuki Ito, Nicholas R Jennings, Catholijn Jonker, Sarit Kraus, Raz Lin, Valentin Robu, et al. 2013. Evaluating Practical Negotiating Agents: Results and Analysis of the 2011 International Competition. Artificial Intelligence 198:73–103.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473.
Antoine Bordes and Jason Weston. 2016. Learning End-to-End Goal-oriented Dialog. arXiv preprint arXiv:1605.07683.
Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. arXiv preprint arXiv:1409.1259.
Heriberto Cuayáhuitl, Simon Keizer, and Oliver Lemon. 2015. Strategic Dialogue Management via Deep Reinforcement Learning. arXiv preprint arXiv:1511.08099. | 1706.05125#39 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
1706.05125 | 40 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. 2016. Visual Dialog. arXiv preprint arXiv:1611.08669.
Abhishek Das, Satwik Kottur, José MF Moura, Stefan Lee, and Dhruv Batra. 2017. Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning. arXiv preprint arXiv:1703.06585.
David DeVault, Johnathan Mell, and Jonathan Gratch. 2015. Toward Natural Turn-taking in a Virtual Human Negotiation Agent. In AAAI Spring Symposium on Turn-taking and Coordination in Human-Machine Interaction. AAAI Press, Stanford, CA.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander H. Miller, Arthur Szlam, and Jason Weston. 2016. Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems. ICLR abs/1511.06931. | 1706.05125#40 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
1706.05125 | 41 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. 2016. Policy Networks with Two-stage Training for Dialogue Systems. arXiv preprint arXiv:1606.03152.
The Importance of the Agenda in Bargaining. Games and Economic Behavior 2(3):224–238.
Milica Gašić, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. 2013. POMDP-based Dialogue Manager Adaptation to Extended Domains. In Proceedings of SIGDIAL.
H. He, A. Balakrishnan, M. Eric, and P. Liang. 2017. Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings. In Association for Computational Linguistics (ACL).
Matthew Henderson, Blaise Thomson, and Jason Williams. 2014. The Second Dialog State Tracking Challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue. volume 263. | 1706.05125#41 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues |
1706.05125 | 42 | Simon Keizer, Markus Guhe, Heriberto Cuay´ahuitl, Ioannis Efstathiou, Klaus-Peter Engelbrecht, Mihai Dobre, Alexandra Lascarides, and Oliver Lemon. 2017. Evaluating Persuasion Strategies and Deep Reinforcement Learning methods for Negotiation In Proceedings of the European Dialogue agents. Chapter of the Association for Computational Lin- guistics (EACL 2017).
Levente Kocsis and Csaba Szepesv´ari. 2006. Bandit based Monte-Carlo Planning. In European confer- ence on machine learning. Springer, pages 282â293.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A Diversity-promoting Ob- jective Function for Neural Conversation Models. arXiv preprint arXiv:1510.03055 .
Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep Rein- forcement Learning for Dialogue Generation. arXiv preprint arXiv:1606.01541 . | 1706.05125#42 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 | [
{
"id": "1606.03152"
},
{
"id": "1510.03055"
},
{
"id": "1703.04908"
},
{
"id": "1511.08099"
},
{
"id": "1604.04562"
},
{
"id": "1611.08669"
},
{
"id": "1703.06585"
},
{
"id": "1606.01541"
},
{
"id": "1605.07683"
},
{
"id": "1506.05869"
}
] |
1706.05125 | 43 | Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue Sys- tem: An Empirical Study of Unsupervised Evalua- tion Metrics for Dialogue Response Generation. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing.
Junhua Mao, Xu Wei, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan L. Yuille. 2015. Learning Like a Child: Fast Novel Visual Concept Learning From Sentence Descriptions of Images. In The IEEE In- ternational Conference on Computer Vision (ICCV).
Igor Mordatch and Pieter Abbeel. 2017. Emergence of Grounded Compositional Language in Multi-Agent Populations. arXiv preprint arXiv:1703.04908 .
The Bargaining Problem. Econometrica: Journal of the Econometric Society pages 155â162.
Yurii Nesterov. 1983. A Method of Solving a Convex Programming Problem with Convergence Rate O (1/k2). In Soviet Mathematics Doklady. volume 27, pages 372â376. | 1706.05125#43 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 | [
{
"id": "1606.03152"
},
{
"id": "1510.03055"
},
{
"id": "1703.04908"
},
{
"id": "1511.08099"
},
{
"id": "1604.04562"
},
{
"id": "1611.08669"
},
{
"id": "1703.06585"
},
{
"id": "1606.01541"
},
{
"id": "1605.07683"
},
{
"id": "1506.05869"
}
] |
1706.05125 | 44 | Olivier Pietquin, Matthieu Geist, Senthilkumar Chan- dramohan, and Herv´e Frezza-Buet. 2011. Sample- efï¬cient Batch Reinforcement Learning for Dia- ACM Trans. logue Management Optimization. Speech Lang. Process. 7(3):7:1â7:21.
Verena Rieser and Oliver Lemon. 2011. Reinforcement Learning for Adaptive Dialogue Systems: A Data- driven Methodology for Dialogue Management and Natural Language Generation. Springer Science & Business Media.
Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven Response Generation in Social Me- dia. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics, pages 583â593.
Avi Rosenfeld, Inon Zuckerman, Erel Segal-Halevi, Osnat Drein, and Sarit Kraus. 2014. NegoChat: A In Proceedings of Chat-based Negotiation Agent. the 2014 International Conference on Autonomous Agents and Multi-agent Systems. International Foun- dation for Autonomous Agents and Multiagent Sys- tems, Richland, SC, AAMAS â14, pages 525â532. | 1706.05125#44 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 | [
{
"id": "1606.03152"
},
{
"id": "1510.03055"
},
{
"id": "1703.04908"
},
{
"id": "1511.08099"
},
{
"id": "1604.04562"
},
{
"id": "1611.08669"
},
{
"id": "1703.06585"
},
{
"id": "1606.01541"
},
{
"id": "1605.07683"
},
{
"id": "1506.05869"
}
] |
1706.05125 | 45 | David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Ju- lian Schrittwieser, Ioannis Antonoglou, Veda Pan- neershelvam, Marc Lanctot, et al. 2016. Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature 529(7587):484â489.
Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing Dialogue Man- agement with Reinforcement Learning: Experi- ments with the NJFun System. Journal of Artiï¬cial Intelligence Research 16:105â133.
Victoria Talwar and Kang Lee. 2002. Development of lying to conceal a transgression: Childrenâs con- trol of expressive behaviour during verbal decep- tion. International Journal of Behavioral Develop- ment 26(5):436â444.
David Traum, Stacy C. Marsella, Jonathan Gratch, Jina Lee, and Arno Hartholt. 2008. Multi-party, Multi- issue, Multi-strategy Negotiation for Multi-modal In Proceedings of the 8th Inter- Virtual Agents. national Conference on Intelligent Virtual Agents. Springer-Verlag, Berlin, Heidelberg, IVA â08, pages 117â130. | 1706.05125#45 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 | [
{
"id": "1606.03152"
},
{
"id": "1510.03055"
},
{
"id": "1703.04908"
},
{
"id": "1511.08099"
},
{
"id": "1604.04562"
},
{
"id": "1611.08669"
},
{
"id": "1703.06585"
},
{
"id": "1606.01541"
},
{
"id": "1605.07683"
},
{
"id": "1506.05869"
}
] |
1706.05125 | 46 | Oriol Vinyals and Quoc Le. 2015. A Neural Conversa- tional Model. arXiv preprint arXiv:1506.05869 .
Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A Network- based End-to-End Trainable Task-oriented Dialogue System. arXiv preprint arXiv:1604.04562 .
Jason D Williams and Steve Young. 2007. Partially Observable Markov Decision Processes for Spoken Dialog Systems. Computer Speech & Language 21(2):393â422.
Ronald J Williams. 1992. Simple Statistical Gradient- following Algorithms for Connectionist Reinforce- ment Learning. Machine learning 8(3-4):229â256. | 1706.05125#46 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 | [
{
"id": "1606.03152"
},
{
"id": "1510.03055"
},
{
"id": "1703.04908"
},
{
"id": "1511.08099"
},
{
"id": "1604.04562"
},
{
"id": "1611.08669"
},
{
"id": "1703.06585"
},
{
"id": "1606.01541"
},
{
"id": "1605.07683"
},
{
"id": "1506.05869"
}
] |
arXiv:1706.05098v1 [cs.LG] 15 Jun 2017
# An Overview of Multi-Task Learning in Deep Neural Networks*
Sebastian Ruder
Insight Centre for Data Analytics, NUI Galway
Aylien Ltd., Dublin
[email protected]
# Abstract
Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.
# Introduction
In Machine Learning (ML), we typically care about optimizing for a particular metric, whether this is a score on a certain benchmark or a business KPI. In order to do this, we generally train a single model or an ensemble of models to perform our desired task. We then fine-tune and tweak these models until their performance no longer increases. While we can generally achieve acceptable performance this way, by being laser-focused on our single task, we ignore information that might help us do even better on the metric we care about. Specifically, this information comes from the training signals of related tasks. By sharing representations between related tasks, we can enable our model to generalize better on our original task. This approach is called Multi-Task Learning (MTL).
Multi-task learning has been used successfully across all applications of machine learning, from natural language processing [Collobert and Weston, 2008] and speech recognition [Deng et al., 2013] to computer vision [Girshick, 2015] and drug discovery [Ramsundar et al., 2015]. MTL comes in many guises: joint learning, learning to learn, and learning with auxiliary tasks are only some names that have been used to refer to it. Generally, as soon as you find yourself optimizing more than one loss function, you are effectively doing multi-task learning (in contrast to single-task learning). In those scenarios, it helps to think about what you are trying to do explicitly in terms of MTL and to draw insights from it.
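To make the "more than one loss function" view concrete, here is a minimal sketch (all function names and numbers are illustrative assumptions, not from this article) of a joint objective that sums a main-task loss and a down-weighted auxiliary-task loss:

```python
# Sketch: a joint MTL objective as a weighted sum of per-task losses.
# Hypothetical helper names; the article does not prescribe this API.

def mse(predictions, targets):
    """Mean squared error for one task."""
    n = len(predictions)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n

def joint_loss(main_pred, main_true, aux_pred, aux_true, aux_weight=0.3):
    """Main-task loss plus a down-weighted auxiliary-task loss."""
    return mse(main_pred, main_true) + aux_weight * mse(aux_pred, aux_true)

loss = joint_loss([1.0, 2.0], [1.0, 1.0], [0.0, 4.0], [0.0, 0.0], aux_weight=0.5)
# main mse = (0 + 1) / 2 = 0.5; aux mse = (0 + 16) / 2 = 8.0
# joint = 0.5 + 0.5 * 8.0 = 4.5
print(loss)  # 4.5
```

Setting `aux_weight` to 0 recovers single-task learning, which makes explicit that MTL here is just a change to the training objective.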
Even if you are only optimizing one loss as is the typical case, chances are there is an auxiliary task that will help you improve upon your main task. [Caruana, 1998] summarizes the goal of MTL succinctly: "MTL improves generalization by leveraging the domain-specific information contained in the training signals of related tasks".
Over the course of this article, I will try to give a general overview of the current state of multi-task learning, in particular when it comes to MTL with deep neural networks. I will first motivate MTL from different perspectives in Section 2. I will then introduce the two most frequently employed methods for MTL in Deep Learning in Section 3. Subsequently, in Section 4, I will describe mechanisms that together illustrate why MTL works in practice. Before looking at more advanced neural network-based MTL methods, I will provide some context in Section 5 by discussing the literature in MTL. I will then introduce some more powerful recently proposed methods for MTL in deep neural networks in Section 6. Finally, I will talk about commonly used types of auxiliary tasks and discuss what makes a good auxiliary task for MTL in Section 7.

* This paper originally appeared as a blog post at http://sebastianruder.com/multi-task/index.html on 29 May 2017.
# 2 Motivation
We can motivate multi-task learning in different ways: Biologically, we can see multi-task learning as being inspired by human learning. For learning new tasks, we often apply the knowledge we have acquired by learning related tasks. For instance, a baby first learns to recognize faces and can then apply this knowledge to recognize other objects.
From a pedagogical perspective, we often learn tasks first that provide us with the necessary skills to master more complex techniques. This is true for learning the proper way of falling in martial arts, e.g. Judo, as much as learning to program. Taking an example out of pop culture, we can also consider The Karate Kid (1984)2. In the movie, sensei Mr Miyagi teaches the karate kid seemingly unrelated tasks such as sanding the floor and waxing a car. In hindsight, these, however, turn out to equip him with invaluable skills that are relevant for learning karate.
Finally, we can motivate multi-task learning from a machine learning point of view: We can view multi-task learning as a form of inductive transfer. Inductive transfer can help improve a model by introducing an inductive bias, which causes a model to prefer some hypotheses over others. For instance, a common form of inductive bias is ℓ1 regularization, which leads to a preference for sparse solutions. In the case of MTL, the inductive bias is provided by the auxiliary tasks, which cause the model to prefer hypotheses that explain more than one task. As we will see shortly, this generally leads to solutions that generalize better.
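As a hedged illustration of the ℓ1 example (toy numbers of my choosing, not from the article): with equal data fit, an ℓ1 penalty assigns a lower total loss to a sparse weight vector than to a denser one, which is exactly the preference for sparse solutions described above.

```python
# Sketch: an l1 penalty as an inductive bias toward sparse solutions.
# The data-fit terms are assumed equal; only the penalty differs.

def l1_penalty(weights, lam=1.0):
    """Sum of absolute parameter values, scaled by a strength lam."""
    return lam * sum(abs(w) for w in weights)

data_fit = 1.0                 # assume both hypotheses fit the data equally well
sparse = [0.0, 0.0, 3.0]       # few non-zero parameters
dense = [1.5, 1.5, 1.5]        # same predictive power assumed, spread out

loss_sparse = data_fit + l1_penalty(sparse)  # 1.0 + 3.0 = 4.0
loss_dense = data_fit + l1_penalty(dense)    # 1.0 + 4.5 = 5.5
print(loss_sparse < loss_dense)  # True: the sparse hypothesis is preferred
```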
# 3 Two MTL methods for Deep Learning
So far, we have focused on theoretical motivations for MTL. To make the ideas of MTL more concrete, we will now look at the two most commonly used ways to perform multi-task learning in deep neural networks. In the context of Deep Learning, multi-task learning is typically done with either hard or soft parameter sharing of hidden layers.
Figure 1: Hard parameter sharing for multi-task learning in deep neural networks
2 Thanks to Margaret Mitchell and Adrian Benton for the inspiration.
# 3.1 Hard parameter sharing
Hard parameter sharing is the most commonly used approach to MTL in neural networks and goes back to [Caruana, 1993]. It is generally applied by sharing the hidden layers between all tasks, while keeping several task-specific output layers, as can be seen in Figure 1.
Hard parameter sharing greatly reduces the risk of overfitting. In fact, [Baxter, 1997] showed that the risk of overfitting the shared parameters is an order N (where N is the number of tasks) smaller than overfitting the task-specific parameters, i.e. the output layers. This makes sense intuitively: the more tasks we are learning simultaneously, the more our model has to find a representation that captures all of the tasks, and the lower our chance of overfitting on our original task.
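A minimal pure-Python forward-pass sketch of hard parameter sharing, assuming toy layer sizes and weights (the article prescribes no particular implementation): one hidden layer is shared, and each task only adds its own output layer, so most parameters are reused across tasks.

```python
# Sketch: hard parameter sharing -- one shared hidden layer, per-task heads.
# Weights, sizes, and task names are illustrative assumptions.

def linear(x, weights, bias):
    """Dense layer: one output per weight row, plus a bias term."""
    return [sum(xj * wij for xj, wij in zip(x, row)) + b
            for row, b in zip(weights, bias)]

def relu(v):
    return [max(0.0, u) for u in v]

# Shared parameters, used by every task.
W_shared = [[0.5, -0.2], [0.1, 0.3]]
b_shared = [0.0, 0.1]

# Task-specific output heads: (weights, bias) per task.
heads = {
    "task_A": ([[1.0, 0.0]], [0.0]),                    # 1-dim output
    "task_B": ([[0.0, 1.0], [1.0, 1.0]], [0.0, 0.0]),   # 2-dim output
}

def forward(x, task):
    h = relu(linear(x, W_shared, b_shared))  # shared representation
    W_t, b_t = heads[task]
    return linear(h, W_t, b_t)               # task-specific layer

out_a = forward([1.0, 1.0], "task_A")
out_b = forward([1.0, 1.0], "task_B")
```

Only the small head parameters are private to each task, which mirrors why overfitting pressure concentrates on the output layers.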
# 3.2 Soft parameter sharing
In soft parameter sharing, on the other hand, each task has its own model with its own parameters. The distance between the parameters of the models is then regularized in order to encourage the parameters to be similar, as shown in Figure 2. [Duong et al., 2015], for instance, use ℓ2 distance for regularization, while [Yang and Hospedales, 2017b] use the trace norm.
Figure 2: Soft parameter sharing for multi-task learning in deep neural networks
The constraints used for soft parameter sharing in deep neural networks have been greatly inspired by regularization techniques for MTL that have been developed for other models, which we will soon discuss.
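The ℓ2-distance variant of soft parameter sharing can be sketched as follows (illustrative code and numbers, not the implementation of [Duong et al., 2015]): each task keeps its own parameters, and a penalty on their squared distance pulls them together.

```python
# Sketch: soft parameter sharing -- penalize the distance between the
# parameters of two per-task models to keep them similar.

def l2_distance_sq(params_a, params_b):
    """Squared l2 distance between two equally-shaped parameter lists."""
    return sum((a - b) ** 2 for a, b in zip(params_a, params_b))

task_a_params = [0.5, -0.2, 0.1]   # hypothetical per-task parameters
task_b_params = [0.4, -0.1, 0.3]

lam = 0.1  # regularization strength (hypothetical)
penalty = lam * l2_distance_sq(task_a_params, task_b_params)
# distances: 0.1, -0.1, -0.2 -> squared sum = 0.06 -> penalty ~ 0.006
```

This `penalty` would be added to the sum of the per-task losses; a larger `lam` moves the setup closer to hard sharing, while `lam = 0` decouples the tasks entirely.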
# 4 Why does MTL work?
Even though an inductive bias obtained through multi-task learning seems intuitively plausible, in order to understand MTL better, we need to look at the mechanisms that underlie it. Most of these have first been proposed by [Caruana, 1998]. For all examples, we will assume that we have two related tasks A and B, which rely on a common hidden layer representation F.
# 4.1 Implicit data augmentation
MTL effectively increases the sample size that we are using for training our model. As all tasks are at least somewhat noisy, when training a model on some task A, our aim is to learn a good representation for task A that ideally ignores the data-dependent noise and generalizes well. As different tasks have different noise patterns, a model that learns two tasks simultaneously is able to learn a more general representation. Learning just task A bears the risk of overfitting to task A, while learning A and B jointly enables the model to obtain a better representation F through averaging the noise patterns.
# 4.2 Attention focusing
If a task is very noisy or data is limited and high-dimensional, it can be difficult for a model to differentiate between relevant and irrelevant features. MTL can help the model focus its attention on those features that actually matter, as other tasks will provide additional evidence for the relevance or irrelevance of those features.
# 4.3 Eavesdropping
Some features G are easy to learn for some task B, while being difficult to learn for another task A. This might either be because A interacts with the features in a more complex way or because other features are impeding the model's ability to learn G. Through MTL, we can allow the model to eavesdrop, i.e. learn G through task B. The easiest way to do this is through hints [Abu-Mostafa, 1990], i.e. directly training the model to predict the most important features.
# 4.4 Representation bias
MTL biases the model to prefer representations that other tasks also prefer. This will also help the model to generalize to new tasks in the future, as a hypothesis space that performs well for a sufficiently large number of training tasks will also perform well for learning novel tasks as long as they are from the same environment [Baxter, 2000].
# 4.5 Regularization
Finally, MTL acts as a regularizer by introducing an inductive bias. As such, it reduces the risk of overfitting as well as the Rademacher complexity of the model, i.e. its ability to fit random noise.
# 5 MTL in non-neural models
In order to better understand MTL in deep neural networks, we will now look to the existing literature on MTL for linear models, kernel methods, and Bayesian algorithms. In particular, we will discuss two main ideas that have been pervasive throughout the history of multi-task learning: enforcing sparsity across tasks through norm regularization; and modelling the relationships between tasks.
Note that many approaches to MTL in the literature deal with a homogeneous setting: They assume that all tasks are associated with a single output, e.g. the multi-class MNIST dataset is typically cast as 10 binary classification tasks. More recent approaches deal with a more realistic, heterogeneous setting where each task corresponds to a unique set of outputs.
# 5.1 Block-sparse regularization
Notation: In order to better connect the following approaches, let us first introduce some notation. We have T tasks. For each task t, we have a model m_t with parameters a_t of dimensionality d. We can write the parameters as a column vector a_t:

a_t = [a_{1,t} . . . a_{d,t}]^T
We now stack these column vectors a_1, . . . , a_T column by column to form a matrix A ∈ R^{d×T}. The i-th row of A then contains the parameter a_{i,·} corresponding to the i-th feature of the model for every task, while the j-th column of A contains the parameters a_{·,j} corresponding to the j-th model.
Many existing methods make some sparsity assumption with regard to the parameters of our models; a common assumption is that all models share a small set of features. In terms of our task parameter matrix A, this means that all but a few rows are 0, which corresponds to only a few features being used across all tasks. In order to enforce this, these methods generalize the ℓ1 norm to the MTL setting. Recall that the ℓ1 norm is a constraint on the sum of the absolute values of the parameters, which forces all but a few parameters to be exactly 0. It is also known as the lasso (least absolute shrinkage and selection operator).
While in the single-task setting, the ℓ1 norm is computed based on the parameter vector at of the respective task t, for MTL we compute it over our task parameter matrix A. In order to do this, we first compute an ℓq norm across each row ai containing the parameters corresponding to the i-th feature across all tasks, which yields a vector b = [‖a1‖q · · · ‖ad‖q] ∈ R^d. We then compute the ℓ1 norm of this vector, which forces all but a few entries of b, i.e. rows in A, to be 0.
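This two-step computation can be sketched directly in numpy; the function name and toy matrices below are my own, for illustration:

```python
import numpy as np

def block_sparse_norm(A, q=2):
    """l1/lq mixed norm of a d x T task parameter matrix A.

    First take the lq norm of each row (one feature across all tasks),
    then the l1 norm of the resulting vector b in R^d.
    """
    b = np.linalg.norm(A, ord=q, axis=1)  # lq norm per row
    return np.sum(np.abs(b))              # l1 norm of b

# A matrix whose rows are mostly zero has a smaller mixed norm:
A_sparse = np.array([[1.0, 2.0], [0.0, 0.0], [0.0, 0.0]])
A_dense = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]])
assert block_sparse_norm(A_sparse) < block_sparse_norm(A_dense)
```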
As we can see, depending on what constraint we would like to place on each row, we can use a different ℓq. In general, we refer to these mixed-norm constraints as ℓ1/ℓq norms. They are also known as block-sparse regularization, as they lead to entire rows of A being set to 0. [Zhang and Huang, 2008] use ℓ1/ℓ∞ regularization, while [Argyriou and Pontil, 2007] use a mixed ℓ1/ℓ2 norm. The latter is also known as the group lasso and was first proposed by [Yuan and Lin, 2006].
[Argyriou and Pontil, 2007] also show that the problem of optimizing the non-convex group lasso can be made convex by penalizing the trace norm of A, which forces A to be low-rank and thereby constrains the column parameter vectors a·,1, . . . , a·,T to live in a low-dimensional subspace. [Lounici et al., 2009] furthermore establish upper bounds for using the group lasso in multi-task learning.
As much as this block-sparse regularization is intuitively plausible, it is very dependent on the extent to which the features are shared across tasks. [Negahban and Wainwright, 2008] show that if features do not overlap by much, ℓ1/ℓq regularization might actually be worse than element-wise ℓ1 regularization.
For this reason, [Jalali et al., 2010] improve upon block-sparse models by proposing a method that combines block-sparse and element-wise sparse regularization. They decompose the task parameter matrix A into two matrices B and S where A = B + S. B is then enforced to be block-sparse using ℓ1/ℓ∞ regularization, while S is made element-wise sparse using the lasso. Recently, [Liu et al., 2016] propose a distributed version of group-sparse regularization.
# 5.2 Learning task relationships
While the group-sparsity constraint forces our model to only consider a few features, these features are largely used across all tasks. All of the previous approaches thus assume that the tasks used in multi-task learning are closely related. However, each task might not be closely related to all of the available tasks. In those cases, sharing information with an unrelated task might actually hurt performance, a phenomenon known as negative transfer.
Rather than sparsity, we would thus like to leverage prior knowledge indicating that some tasks are related while others are not. In this scenario, a constraint that enforces a clustering of tasks might be more appropriate. [Evgeniou et al., 2005] suggest imposing a clustering constraint by penalizing both the norms of our task column vectors a·,1, . . . , a·,T as well as their variance with the following constraint:
Ω = ‖ā‖² + λ/T ∑_{t=1}^{T} ‖a·,t − ā‖²
where ā = (∑_{t=1}^{T} a·,t)/T is the mean parameter vector. This penalty enforces a clustering of the task parameter vectors a·,1, . . . , a·,T towards their mean that is controlled by λ. They apply this constraint to kernel methods, but it is equally applicable to linear models.
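A minimal numpy sketch of this penalty (the function name and toy inputs are assumptions for illustration):

```python
import numpy as np

def mean_cluster_penalty(A, lam):
    """Omega = ||a_bar||^2 + lam/T * sum_t ||a_t - a_bar||^2
    for a d x T task parameter matrix A (columns are task vectors)."""
    T = A.shape[1]
    a_bar = A.mean(axis=1)
    spread = sum(np.sum((A[:, t] - a_bar) ** 2) for t in range(T))
    return np.sum(a_bar ** 2) + lam / T * spread

# Identical task vectors incur no variance penalty, only the mean-norm term:
A_same = np.ones((4, 3))
assert abs(mean_cluster_penalty(A_same, lam=1.0) - 4.0) < 1e-9
```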
A similar constraint for SVMs was also proposed by [Evgeniou and Pontil, 2004]. Their constraint is inspired by Bayesian methods and seeks to make all models close to some mean model. In SVMs, the loss thus trades off having a large margin for each SVM with being close to the mean model.
[Jacob et al., 2009] make the assumptions underlying cluster regularization more explicit by formalizing a cluster constraint on A under the assumption that the number of clusters C is known in advance. They then decompose the penalty into three separate norms:
• A global penalty which measures how large our column parameter vectors are on average: Ω_mean(A) = ‖ā‖².
• A measure of between-cluster variance that measures how close to each other the clusters are: Ω_between(A) = ∑_{c=1}^{C} T_c ‖ā_c − ā‖², where T_c is the number of tasks in the c-th cluster and ā_c is the mean vector of the task parameter vectors in the c-th cluster.
• A measure of within-cluster variance that gauges how compact each cluster is: Ω_within(A) = ∑_{c=1}^{C} ∑_{t∈J(c)} ‖a·,t − ā_c‖², where J(c) is the set of tasks in the c-th cluster.
The final constraint then is the weighted sum of the three norms:
Ω(A) = λ1 Ω_mean(A) + λ2 Ω_between(A) + λ3 Ω_within(A)
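The three norms can be sketched in a few lines of numpy, assuming, as in the decomposition above, that the cluster assignment is known in advance (the function name and the toy example are illustrative):

```python
import numpy as np

def cluster_penalty(A, clusters, lambdas=(1.0, 1.0, 1.0)):
    """Omega(A) = l1*Omega_mean + l2*Omega_between + l3*Omega_within.

    A: d x T matrix of task parameter columns.
    clusters: list of column-index lists, one per cluster (assumed known).
    """
    l1, l2, l3 = lambdas
    a_bar = A.mean(axis=1)
    omega_mean = np.sum(a_bar ** 2)
    omega_between = 0.0
    omega_within = 0.0
    for J in clusters:
        a_c = A[:, J].mean(axis=1)
        omega_between += len(J) * np.sum((a_c - a_bar) ** 2)
        omega_within += np.sum((A[:, J] - a_c[:, None]) ** 2)
    return l1 * omega_mean + l2 * omega_between + l3 * omega_within

# Two tight clusters {0,1} and {2,3} on a 1-d toy example:
A = np.array([[0.0, 0.0, 4.0, 4.0]])
val = cluster_penalty(A, [[0, 1], [2, 3]])
assert abs(val - 20.0) < 1e-9  # 4 (mean) + 16 (between) + 0 (within)
```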
As this constraint assumes clusters are known in advance, they introduce a convex relaxation of the above penalty that allows the clusters to be learned at the same time.
In another scenario, tasks might not occur in clusters but have an inherent structure. [Kim and Xing, 2010] extend the group lasso to deal with tasks that occur in a tree structure, while [Chen et al., 2010] apply it to tasks with graph structures.
While the previous approaches to modelling the relationship between tasks employ norm regularization, other approaches do so without regularization: [Thrun and O'Sullivan, 1996] were the first to present a task clustering algorithm using k-nearest neighbours, while [Ando and Zhang, 2005] learn a common structure from multiple related tasks with an application to semi-supervised learning.
Much other work on learning task relationships for multi-task learning uses Bayesian methods: [Heskes, 2000] propose a Bayesian neural network for multi-task learning by placing a prior on the model parameters to encourage similar parameters across tasks. [Lawrence and Platt, 2004] extend Gaussian processes (GP) to MTL by inferring parameters for a shared covariance matrix. As this is computationally very expensive, they adopt a sparse approximation scheme that greedily selects the most informative examples. [Yu et al., 2005] also use GP for MTL by assuming that all models are sampled from a common prior.
[Bakker and Heskes, 2003] place a Gaussian as a prior distribution on each task-specific layer. In order to encourage similarity between different tasks, they propose to make the mean task-dependent and introduce a clustering of the tasks using a mixture distribution. Importantly, they require task characteristics that define the clusters and the number of mixtures to be specified in advance.
Building on this, [Xue et al., 2007] draw the distribution from a Dirichlet process and enable the model to learn the similarity between tasks as well as the number of clusters. They then share the same model among all tasks in the same cluster. [Daumé III, 2009] propose a hierarchical Bayesian model, which learns a latent task hierarchy, while [Zhang and Yeung, 2010] use a GP-based regularization for MTL and extend a previous GP-based approach to be more computationally feasible in larger settings.
Other approaches focus on the online multi-task learning setting: [Cavallanti et al., 2010] adapt some existing methods such as the approach by [Evgeniou et al., 2005] to the online setting. They also propose an MTL extension of the regularized Perceptron, which encodes task relatedness in a matrix. They use different forms of regularization to bias this task relatedness matrix, e.g. the closeness of the task characteristic vectors or the dimension of the spanned subspace. Importantly, similar to some earlier approaches, they require the task characteristics that make up this matrix to be provided in advance. [Saha et al., 2011] then extend the previous approach by learning the task relationship matrix.
[Kang et al., 2011] assume that tasks form disjoint groups and that the tasks within each group lie in a low-dimensional subspace. Within each group, tasks share the same feature representation whose parameters are learned jointly together with the group assignment matrix using an alternating minimization scheme. However, a total disjointness between groups might not be the ideal way, as the tasks might still share some features that are helpful for prediction.
[Kumar and Daumé III, 2012] in turn allow two tasks from different groups to overlap by assuming that there exists a small number of latent basis tasks. They then model the parameter vector at of every actual task t as a linear combination of these: at = Lst, where L ∈ R^{d×k} is a matrix containing the parameter vectors of k latent tasks as columns, while st ∈ R^k is a vector containing the coefficients of the linear combination. In addition, they constrain the linear combination to be sparse in the latent tasks; the overlap in the sparsity patterns between two tasks then controls the amount of sharing between these. Finally, [Crammer and Mansour, 2012] learn a small pool of shared hypotheses and then map each task to a single hypothesis.
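A small numpy sketch of this latent-task decomposition; the shapes, the seed, and the sparsity patterns are invented for illustration:

```python
import numpy as np

# Latent-task decomposition a_t = L @ s_t (assumed shapes: L is d x k with
# k latent task vectors as columns, s_t in R^k is a sparse coefficient vector).
d, k, T = 5, 3, 4
rng = np.random.default_rng(0)
L = rng.standard_normal((d, k))

# Sparse coefficient vectors: each task uses only some latent tasks.
S = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])  # k x T

A = L @ S  # columns are the actual task parameter vectors a_t

# Tasks 0 and 3 share latent task 0, so their sparsity patterns overlap;
# tasks 0 and 2 share no latent task at all.
overlap = lambda i, j: int(np.sum((S[:, i] != 0) & (S[:, j] != 0)))
assert overlap(0, 3) == 1 and overlap(0, 2) == 0
```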
# 6 Recent work on MTL for Deep Learning
While many recent Deep Learning approaches have used multi-task learning, either explicitly or implicitly, as part of their model (prominent examples will be featured in the next section), they all employ the two approaches we introduced earlier, hard and soft parameter sharing. In contrast, only a few papers have looked at developing better mechanisms for MTL in deep neural networks.
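For reference, hard parameter sharing reduces to a very small sketch: a shared trunk whose output feeds several task-specific heads. All shapes and names below are toy assumptions:

```python
import numpy as np

# Hard parameter sharing: one shared hidden layer, two task-specific heads.
rng = np.random.default_rng(0)
W_shared = rng.standard_normal((8, 4))   # parameters shared by both tasks
W_task_a = rng.standard_normal((1, 8))   # task A head (scalar output)
W_task_b = rng.standard_normal((3, 8))   # task B head (3-way output)

def forward(x):
    h = np.tanh(W_shared @ x)            # shared representation
    return W_task_a @ h, W_task_b @ h    # task-specific predictions

y_a, y_b = forward(np.ones(4))
assert y_a.shape == (1,) and y_b.shape == (3,)
```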
# 6.1 Deep Relationship Networks
In MTL for computer vision, approaches often share the convolutional layers, while learning task-specific fully-connected layers. [Long and Wang, 2015] improve upon these models by proposing Deep Relationship Networks. In addition to the structure of shared and task-specific layers, which can be seen in Figure 3, they place matrix priors on the fully connected layers, which allow the model to learn the relationship between tasks, similar to some of the Bayesian models we have looked at before. This approach, however, still relies on a pre-defined structure for sharing, which may be adequate for well-studied computer vision problems, but proves error-prone for novel tasks.
Figure 3: A Deep Relationship Network with shared convolutional and task-specific fully connected layers with matrix priors [Long and Wang, 2015]
# 6.2 Fully-Adaptive Feature Sharing
Starting at the other extreme, [Lu et al., 2016] propose a bottom-up approach that starts with a thin network and dynamically widens it greedily during training using a criterion that promotes grouping of similar tasks. The widening procedure, which dynamically creates branches, can be seen in Figure 4. However, the greedy method might not be able to discover a model that is globally optimal, while assigning each branch to exactly one task does not allow the model to learn more complex interactions between tasks.
Figure 4: The widening procedure for fully-adaptive feature sharing [Lu et al., 2016]
# 6.3 Cross-stitch Networks
[Misra et al., 2016] start out with two separate model architectures just as in soft parameter sharing. They then use what they refer to as cross-stitch units to allow the model to determine in what way the task-specific networks leverage the knowledge of the other task by learning a linear combination of the output of the previous layers. Their architecture can be seen in Figure 5, in which they only place cross-stitch units after pooling and fully-connected layers.
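A cross-stitch unit itself is just a learned 2×2 linear recombination of the two networks' activations; here is a minimal numpy sketch with a hand-picked mixing matrix rather than learned values:

```python
import numpy as np

def cross_stitch(x_a, x_b, alpha):
    """A cross-stitch unit (sketch): linearly recombine the activations
    of two task networks. alpha is a (normally learned) 2 x 2 mixing matrix."""
    out_a = alpha[0, 0] * x_a + alpha[0, 1] * x_b
    out_b = alpha[1, 0] * x_a + alpha[1, 1] * x_b
    return out_a, out_b

# With an identity mixing matrix the networks stay independent:
x_a, x_b = np.array([1.0, 2.0]), np.array([3.0, 4.0])
same_a, same_b = cross_stitch(x_a, x_b, np.eye(2))
assert np.allclose(same_a, x_a) and np.allclose(same_b, x_b)

# Off-diagonal entries let each task peek at the other's features:
mixed_a, _ = cross_stitch(x_a, x_b, np.array([[0.9, 0.1], [0.1, 0.9]]))
assert np.allclose(mixed_a, 0.9 * x_a + 0.1 * x_b)
```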
# 6.4 Low supervision
In contrast, in natural language processing (NLP), recent work focused on finding better task hierarchies for multi-task learning: [Søgaard and Goldberg, 2016] show that low-level tasks, i.e. NLP tasks typically used for preprocessing such as part-of-speech tagging and named entity recognition, should be supervised at lower layers when used as an auxiliary task.
Figure 5: Cross-stitch networks for two tasks [Misra et al., 2016]
# 6.5 A Joint Many-Task Model
Building on this finding, [Hashimoto et al., 2016] pre-define a hierarchical architecture consisting of several NLP tasks, which can be seen in Figure 6, as a joint model for multi-task learning.
Figure 6: A Joint Many-Task Model [Hashimoto et al., 2016]
# 6.6 Weighting losses with uncertainty
Instead of learning the structure of sharing, [Kendall et al., 2017] take an orthogonal approach by considering the uncertainty of each task. They then adjust each task's relative weight in the cost function by deriving a multi-task loss function based on maximizing the Gaussian likelihood with task-dependent uncertainty. Their architecture for per-pixel depth regression, semantic and instance segmentation can be seen in Figure 7.
Figure 7: Uncertainty-based loss function weighting for multi-task learning [Kendall et al., 2017]
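The regression form of this uncertainty-weighted loss can be sketched as follows. This is a simplified reading of the idea with fixed scalar losses; parameterizing via log σ is a common stability choice, not necessarily the paper's exact code:

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_sigmas):
    """Sketch of uncertainty-based loss weighting (regression form):
    total = sum_t 1/(2*sigma_t^2) * L_t + log(sigma_t),
    with exp(-2*log_sigma) = 1/sigma^2 keeping the weights positive."""
    total = 0.0
    for loss, log_sigma in zip(task_losses, log_sigmas):
        total += 0.5 * np.exp(-2.0 * log_sigma) * loss + log_sigma
    return total

# A noisier task (larger sigma) contributes its loss with a smaller weight,
# while the log(sigma) term keeps sigma from growing without bound:
base = uncertainty_weighted_loss([1.0], [0.0])  # sigma = 1 -> weight 0.5
assert abs(base - 0.5) < 1e-9
```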
# 6.7 Tensor factorisation for MTL
More recent work seeks to generalize existing approaches to MTL to Deep Learning: [Yang and Hospedales, 2017a] generalize some of the previously discussed matrix factorisation approaches using tensor factorisation to split the model parameters into shared and task-specific parameters for every layer.
# 6.8 Sluice Networks
Finally, we propose Sluice Networks [Ruder et al., 2017], a model that generalizes Deep Learning-based MTL approaches such as hard parameter sharing and cross-stitch networks, block-sparse regularization approaches, as well as recent NLP approaches that create a task hierarchy. The model, which can be seen in Figure 8, allows the network to learn which layers and subspaces should be shared, as well as at which layers it has learned the best representations of the input sequences.
Figure 8: A sluice network for two tasks [Ruder et al., 2017]
# 6.9 What should I share in my model?
Having surveyed these recent approaches, let us now briefly summarize and draw a conclusion on what to share in our deep MTL models. Most approaches in the history of MTL have focused on the scenario where tasks are drawn from the same distribution [Baxter, 1997]. While this scenario is beneficial for sharing, it does not always hold. In order to develop robust models for MTL, we thus have to be able to deal with unrelated or only loosely related tasks.
While early work in MTL for Deep Learning has pre-specified which layers to share for each task pairing, this strategy does not scale and heavily biases MTL architectures. Hard parameter sharing, a technique that was originally proposed by [Caruana, 1993], is still the norm 20 years later. While useful in many scenarios, hard parameter sharing quickly breaks down if tasks are not closely related or require reasoning on different levels. Recent approaches have thus looked towards learning what to share and generally outperform hard parameter sharing. In addition, giving our models the capacity to learn a task hierarchy is helpful, particularly in cases that require different granularities.
As mentioned initially, we are doing MTL as soon as we are optimizing more than one loss function. Rather than constraining our model to compress the knowledge of all tasks into the same parameter space, it is thus helpful to draw on the advances in MTL that we have discussed and enable our model to learn how the tasks should interact with each other.
# 7 Auxiliary tasks
MTL is a natural fit in situations where we are interested in obtaining predictions for multiple tasks at once. Such scenarios are common for instance in finance or economics forecasting, where we might want to predict the value of many possibly related indicators, or in bioinformatics, where we might want to predict symptoms for multiple diseases simultaneously. In scenarios such as drug discovery, where tens or hundreds of active compounds should be predicted, MTL accuracy increases continuously with the number of tasks [Ramsundar et al., 2015].
In most situations, however, we only care about performance on one task. In this section, we will thus look at how we can find a suitable auxiliary task in order to still reap the benefits of multi-task learning.
# 7.1 Related task
Using a related task as an auxiliary task for MTL is the classical choice. To get an idea what a related task can be, we will present some prominent examples. [Caruana, 1998] uses tasks that predict different characteristics of the road as auxiliary tasks for predicting the steering direction in a self-driving car; [Zhang et al., 2014] use head pose estimation and facial attribute inference as auxiliary tasks for facial landmark detection; [Liu et al., 2015] jointly learn query classification and web search; [Girshick, 2015] jointly predicts the class and the coordinates of an object in an image; finally, [Arık et al., 2017] jointly predict the phoneme duration and frequency profile for text-to-speech.
# 7.2 Adversarial
1706.05098 | 27 | # 7.2 Adversarial
Often, labeled data for a related task is unavailable. In some circumstances, however, we have access to a task whose objective is the opposite of what we want to achieve. This data can be leveraged using an adversarial loss, which does not seek to minimize but to maximize the training error via a gradient reversal layer. This setup has found recent success in domain adaptation [Ganin and Lempitsky, 2015]. The adversarial task in this case is predicting the domain of the input; by reversing the gradient of the adversarial task, its loss is maximized, which is beneficial for the main task, as it forces the model to learn representations that cannot distinguish between domains.
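A small numerical illustration of the gradient reversal idea (not the cited implementation; the data, the linear "feature extractor", and the step size are all invented): the domain classifier would descend the domain loss, while the shared feature weights receive the negated gradient and therefore ascend it, pushing the features toward domain confusion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "domains" separated by a mean shift.
Xa = rng.normal(size=(50, 3)) + 1.0
Xb = rng.normal(size=(50, 3)) - 1.0
X = np.vstack([Xa, Xb])
d = np.array([0] * 50 + [1] * 50)           # domain labels for the adversary

w_feat = 0.1 * rng.normal(size=3)           # shared feature extractor (linear)
w_dom = 1.0                                 # domain classifier (scalar head)

def domain_loss(w_feat, w_dom):
    h = X @ w_feat                          # shared features
    p = 1.0 / (1.0 + np.exp(-w_dom * h))    # predicted domain probability
    return -np.mean(d * np.log(p + 1e-9) + (1 - d) * np.log(1 - p + 1e-9))

def grad_feat(w_feat, w_dom, eps=1e-5):
    # Finite-difference gradient of the domain loss w.r.t. shared weights.
    g = np.zeros_like(w_feat)
    for i in range(w_feat.size):
        step = np.zeros_like(w_feat)
        step[i] = eps
        g[i] = (domain_loss(w_feat + step, w_dom)
                - domain_loss(w_feat - step, w_dom)) / (2 * eps)
    return g

before = domain_loss(w_feat, w_dom)
# Gradient reversal: the shared weights move ALONG the gradient (sign flipped
# relative to ordinary descent), so the domain loss goes up.
w_feat = w_feat + 0.05 * grad_feat(w_feat, w_dom)
after = domain_loss(w_feat, w_dom)
print(after > before)
```

In an autodiff framework the same effect is usually obtained by inserting a layer whose forward pass is the identity and whose backward pass negates the gradient.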
# 7.3 Hints
As mentioned before, MTL can be used to learn features that might not be easy to learn using the original task alone. An effective way to achieve this is to use hints, i.e. predicting the features as an auxiliary task. Recent examples of this strategy in the context of natural language processing are [Yu and Jiang, 2016], who predict whether an input sentence contains a positive or negative sentiment word as auxiliary tasks for sentiment analysis, and [Cheng et al., 2015], who predict whether a name is present in a sentence as an auxiliary task for name error detection.
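A toy sketch of how such hint labels could be derived from the training data itself (the lexicon and sentences below are invented for illustration):

```python
# Hypothetical sentiment lexicon; the hint (auxiliary) label marks whether a
# sentence contains any sentiment-bearing word at all.
SENTIMENT_WORDS = {"great", "terrible", "love", "awful"}

def hint_label(sentence: str) -> int:
    # 1 if any token is in the lexicon, else 0.
    return int(any(tok in SENTIMENT_WORDS for tok in sentence.lower().split()))

sentences = ["I love this phone", "The battery is fine", "Terrible screen"]
print([hint_label(s) for s in sentences])  # [1, 0, 1]
```

The auxiliary head is then trained on these cheap binary labels alongside the main sentiment objective.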
# 7.4 Focusing attention
Similarly, the auxiliary task can be used to focus attention on parts of the image that a network might normally ignore. For instance, for learning to steer [Caruana, 1998], a single-task model might typically ignore lane markings, as these make up only a small part of the image and are not always present. Predicting lane markings as an auxiliary task, however, forces the model to learn to represent them; this knowledge can then also be used for the main task. Analogously, for facial recognition, one might learn to predict the location of facial landmarks as an auxiliary task, since these are often distinctive.
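A tiny numeric illustration (all numbers invented): the lane-marking mask enters the objective as a weighted auxiliary term, so the shared features cannot ignore the small region it occupies.

```python
import numpy as np

pred_mask = np.array([[0.2, 0.8],
                      [0.1, 0.9]])          # predicted lane-marking mask
true_mask = np.array([[0.0, 1.0],
                      [0.0, 1.0]])          # hypothetical ground-truth mask
steer_loss = (0.05 - 0.0) ** 2              # main-task (steering) squared error
mask_loss = np.mean((pred_mask - true_mask) ** 2)
total = steer_loss + 0.3 * mask_loss        # auxiliary term weighted into the loss
print(round(total, 4))  # 0.01
```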
# 7.5 Quantization smoothing
For many tasks, the training objective is quantized, i.e. while a continuous scale might be more plausible, labels are available as a discrete set. This is the case in many scenarios that require human assessment for data gathering, such as predicting the risk of a disease (e.g. low/medium/high) or sentiment analysis (positive/neutral/negative). Using less quantized auxiliary tasks might help in these cases, as they might be learned more easily due to their objective being smoother.
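For instance, if a continuous score underlying the quantized label is available at training time, it can serve as the smoother auxiliary target. The mapping below is an invented example, not taken from any of the cited work:

```python
# Hypothetical continuous scores behind quantized risk labels.
RISK_SCORE = {"low": 0.1, "medium": 0.5, "high": 0.9}

def targets(labels):
    """Return (discrete main-task targets, continuous auxiliary targets)."""
    return labels, [RISK_SCORE[label] for label in labels]

y_main, y_aux = targets(["low", "high", "medium"])
print(y_aux)  # [0.1, 0.9, 0.5]
```

The main head is trained with a classification loss on `y_main` while an auxiliary regression head fits `y_aux`.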
# 7.6 Predicting inputs
In some scenarios, it is impractical to use some features as inputs as they are unhelpful for predicting the desired objective. However, they might still be able to guide the learning of the task. In those cases, the features can be used as outputs rather than inputs. [Caruana and de Sa, 1997] present several problems where this is applicable.
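A minimal sketch of the "outputs, not inputs" idea (random data; the column choice is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(size=(8, 5))

# Suppose column 4 is unhelpful (or unreliable) as an input; move it from the
# input side to the output side, where it becomes an auxiliary target.
X_inputs = data[:, :4]
y_auxiliary = data[:, 4]
print(X_inputs.shape, y_auxiliary.shape)  # (8, 4) (8,)
```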
# 7.7 Using the future to predict the present
In many situations, some features only become available after the predictions are supposed to be made. For instance, for self-driving cars, more accurate measurements of obstacles and lane markings can be made once the car is passing them. [Caruana, 1998] also gives the example of pneumonia prediction, after which the results of additional medical trials will be available. For these examples, the additional data cannot be used as features as it will not be available as input at runtime. However, it can be used as an auxiliary task to impart additional knowledge to the model during training.
# 7.8 Representation learning
The goal of an auxiliary task in MTL is to enable the model to learn representations that are shared or helpful for the main task. All auxiliary tasks discussed so far do this implicitly: they are closely related to the main task, so that learning them likely allows the model to learn beneficial representations. A more explicit modelling is possible, for instance by employing a task that is known to enable a model to learn transferable representations. The language modelling objective as employed by [Cheng et al., 2015] and [Rei, 2017] fulfils this role. In a similar vein, an autoencoder objective can also be used as an auxiliary task.
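As a sketch of the autoencoder option, here is a tied-weight linear autoencoder trained on random data. It is purely illustrative (data, bottleneck size, and learning rate are assumptions), with numerical gradients to stay dependency-free:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(32, 4))

def recon_loss(w):
    # Auxiliary objective: reconstruct the input through a 2-d bottleneck.
    W = w.reshape(4, 2)
    Z = X @ W               # encode
    X_hat = Z @ W.T         # decode with tied weights
    return np.mean((X - X_hat) ** 2)

def num_grad(f, p, eps=1e-5):
    # Finite-difference gradient of the reconstruction loss.
    g = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = eps
        g[i] = (f(p + step) - f(p - step)) / (2 * eps)
    return g

w = 0.1 * rng.normal(size=8)
recon_start = recon_loss(w)
for _ in range(300):
    w -= 0.02 * num_grad(recon_loss, w)
recon_end = recon_loss(w)
print(recon_end < recon_start)
```

In a multi-task setup this reconstruction term would simply be added, with a weight, to the main-task loss, so the shared encoder must keep enough information to reconstruct its inputs.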
# 7.9 What auxiliary tasks are helpful?
In this section, we have discussed different auxiliary tasks that can be used to leverage MTL even if we only care about one task. We still do not know, though, what auxiliary task will be useful in practice. Finding an auxiliary task is largely based on the assumption that the auxiliary task should be related to the main task in some way and that it should be helpful for predicting the main task.
However, we still do not have a good notion of when two tasks should be considered similar or related. [Caruana, 1998] defines two tasks to be similar if they use the same features to make a decision. [Baxter, 2000] argues only theoretically that related tasks share a common optimal hypothesis class, i.e. have the same inductive bias. [Ben-David and Schuller, 2003] propose that two tasks are F-related if the data for both tasks can be generated from a fixed probability distribution using a set of transformations F. While this allows reasoning about tasks where different sensors collect data for the same classification problem, e.g. object recognition with data from cameras with different angles and lighting conditions, it is not applicable to tasks that do not deal with the same problem. [Xue et al., 2007] finally argue that two tasks are similar if their classification boundaries, i.e. parameter vectors, are close.
In spite of these early theoretical advances in understanding task relatedness, we have not made much recent progress towards this goal. Task similarity is not binary, but resides on a spectrum. Allowing our models to learn what to share with each task might allow us to temporarily circumvent the lack of theory and make better use even of only loosely related tasks. However, we also need to develop a more principled notion of task similarity with regard to MTL in order to know which tasks we should prefer.
Recent work [Alonso and Plank, 2017] has found auxiliary tasks with compact and uniform label distributions to be preferable for sequence tagging problems in NLP, which we have confirmed in experiments [Ruder et al., 2017]. In addition, gains have been found to be more likely for main tasks that quickly plateau with non-plateauing auxiliary tasks [Bingel and Søgaard, 2017]. These experiments, however, have so far been limited in scope, and recent findings only provide the first clues towards a deeper understanding of multi-task learning in neural networks.
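The "compact and uniform label distribution" observation can be turned into a crude screening heuristic for candidate auxiliary tasks, e.g. comparing the entropy of their empirical label distributions. This sketch is an illustration of the idea, not the cited papers' exact criterion:

```python
import math
from collections import Counter

def label_entropy(labels):
    # Shannon entropy (in bits) of the empirical label distribution.
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform = ["A", "B", "C", "A", "B", "C"]   # uniform over 3 labels
skewed = ["A", "A", "A", "A", "A", "B"]    # heavily skewed
print(label_entropy(uniform) > label_entropy(skewed))  # True
```

Under this heuristic, among tasks with similar label-set sizes, the one with the more uniform (higher-entropy) label distribution would be preferred.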
# 8 Conclusion
In this overview, I have reviewed both the history of literature in multi-task learning as well as more recent work on MTL for Deep Learning. While MTL is being used more frequently, the 20-year-old hard parameter sharing paradigm is still pervasive for neural-network-based MTL. Recent advances on learning what to share, however, are promising. At the same time, our understanding of tasks – their similarity, relationship, hierarchy, and benefit for MTL – is still limited, and we need to study them more thoroughly to gain a better understanding of the generalization capabilities of MTL with regard to deep neural networks.
# References
[Abu-Mostafa, 1990] Abu-Mostafa, Y. S. (1990). Learning from hints in neural networks. Journal of Complexity, 6(2):192–198.

[Alonso and Plank, 2017] Alonso, H. M. and Plank, B. (2017). When is multitask learning effective? Multitask learning for semantic sequence prediction under varying data conditions. In EACL.

[Ando and Tong, 2005] Ando, R. K. and Tong, Z. (2005). A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. Journal of Machine Learning Research, 6:1817–1853.
[Argyriou and Pontil, 2007] Argyriou, A. and Pontil, M. (2007). Multi-Task Feature Learning. In Advances in Neural Information Processing Systems.

[Arık et al., 2017] Arık, S. Ö., Chrzanowski, M., Coates, A., Diamos, G., Gibiansky, A., Kang, Y., Li, X., Miller, J., Raiman, J., Sengupta, S., and Shoeybi, M. (2017). Deep Voice: Real-time Neural Text-to-Speech. In ICML 2017.

[Bakker and Heskes, 2003] Bakker, B. and Heskes, T. (2003). Task Clustering and Gating for Bayesian Multitask Learning. Journal of Machine Learning Research, 1(1):83–99.

[Baxter, 1997] Baxter, J. (1997). A Bayesian/information theoretic model of learning to learn via multiple task sampling. Machine Learning, 28:7–39.

[Baxter, 2000] Baxter, J. (2000). A Model of Inductive Bias Learning. Journal of Artificial Intelligence Research, 12:149–198.
[Ben-David and Schuller, 2003] Ben-David, S. and Schuller, R. (2003). Exploiting task relatedness for multiple task learning. Learning Theory and Kernel Machines, pages 567–580.

[Bingel and Søgaard, 2017] Bingel, J. and Søgaard, A. (2017). Identifying beneficial task relations for multi-task learning in deep neural networks. In EACL.

[Caruana, 1993] Caruana, R. (1993). Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning.

[Caruana, 1998] Caruana, R. (1998). Multitask Learning. Autonomous Agents and Multi-Agent Systems, 27(1):95–133.

[Caruana and de Sa, 1997] Caruana, R. and de Sa, V. R. (1997). Promoting poor features to supervisors: Some inputs work better as outputs. Advances in Neural Information Processing Systems 9: Proceedings of The 1996 Conference, 9:389.
[Cavallanti et al., 2010] Cavallanti, G., Cesa-Bianchi, N., and Gentile, C. (2010). Linear Algorithms for Online Multitask Classification. Journal of Machine Learning Research, 11:2901–2934.

[Chen et al., 2010] Chen, X., Kim, S., Lin, Q., Carbonell, J. G., and Xing, E. P. (2010). Graph-Structured Multi-task Regression and an Efficient Optimization Method for General Fused Lasso. pages 1–21.

[Cheng et al., 2015] Cheng, H., Fang, H., and Ostendorf, M. (2015). Open-Domain Name Error Detection using a Multi-Task RNN. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 737–746.

[Collobert and Weston, 2008] Collobert, R. and Weston, J. (2008). A unified architecture for natural language processing. Proceedings of the 25th international conference on Machine learning - ICML '08, 20(1):160–167.
[Crammer and Mansour, 2012] Crammer, K. and Mansour, Y. (2012). Learning Multiple Tasks Using Shared Hypotheses. Neural Information Processing Systems (NIPS), pages 1484–1492.

[Daumé III, 2009] Daumé III, H. (2009). Bayesian multitask learning with latent hierarchies. pages 135–142.

[Deng et al., 2013] Deng, L., Hinton, G. E., and Kingsbury, B. (2013). New types of deep neural network learning for speech recognition and related applications: An overview. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8599–8603.

[Duong et al., 2015] Duong, L., Cohn, T., Bird, S., and Cook, P. (2015). Low Resource Dependency Parsing: Cross-lingual Parameter Sharing in a Neural Network Parser. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers), pages 845–850.
[Evgeniou et al., 2005] Evgeniou, T., Micchelli, C. A., and Pontil, M. (2005). Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615–637.
[Evgeniou and Pontil, 2004] Evgeniou, T. and Pontil, M. (2004). Regularized multi-task learning. International Conference on Knowledge Discovery and Data Mining, page 109.

[Ganin and Lempitsky, 2015] Ganin, Y. and Lempitsky, V. (2015). Unsupervised Domain Adaptation by Backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, volume 37.

[Girshick, 2015] Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440–1448.

[Hashimoto et al., 2016] Hashimoto, K., Xiong, C., Tsuruoka, Y., and Socher, R. (2016). A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks.

[Heskes, 2000] Heskes, T. (2000). Empirical Bayes for Learning to Learn. Proceedings of the Seventeenth International Conference on Machine Learning, pages 367–374.

[Jacob et al., 2009] Jacob, L., Vert, J.-P., and Bach, F. R. (2009). Clustered Multi-Task Learning: A Convex Formulation. Advances in Neural Information Processing Systems 21, pages 745–752.
[Jalali et al., 2010] Jalali, A., Ravikumar, P., Sanghavi, S., and Ruan, C. (2010). A Dirty Model for Multi-task Learning. Advances in Neural Information Processing Systems.

[Kang et al., 2011] Kang, Z., Grauman, K., and Sha, F. (2011). Learning with whom to share in multi-task feature learning. Proceedings of the 28th International Conference on Machine Learning, (4):4–5.

[Kendall et al., 2017] Kendall, A., Gal, Y., and Cipolla, R. (2017). Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics.

[Kim and Xing, 2010] Kim, S. and Xing, E. P. (2010). Tree-Guided Group Lasso for Multi-Task Regression with Structured Sparsity. 27th International Conference on Machine Learning, pages 1–14.

[Kumar and Daumé III, 2012] Kumar, A. and Daumé III, H. (2012). Learning Task Grouping and Overlap in Multi-task Learning. Proceedings of the 29th International Conference on Machine Learning, pages 1383–1390.
[Lawrence and Platt, 2004] Lawrence, N. D. and Platt, J. C. (2004). Learning to learn with the informative vector machine. Twenty-first international conference on Machine learning - ICML '04, page 65.

[Liu et al., 2016] Liu, S., Pan, S. J., and Ho, Q. (2016). Distributed Multi-task Relationship Learning. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 751–760.

[Liu et al., 2015] Liu, X., Gao, J., He, X., Deng, L., Duh, K., and Wang, Y.-Y. (2015). Representation Learning Using Multi-Task Deep Neural Networks for Semantic Classification and Information Retrieval. NAACL-2015, pages 912–921.

[Long and Wang, 2015] Long, M. and Wang, J. (2015). Learning Multiple Tasks with Deep Relationship Networks. arXiv preprint arXiv:1506.02117.
[Lounici et al., 2009] Lounici, K., Pontil, M., Tsybakov, A. B., and van de Geer, S. (2009). Taking Advantage of Sparsity in Multi-Task Learning. Stat, (1).

[Lu et al., 2016] Lu, Y., Kumar, A., Zhai, S., Cheng, Y., Javidi, T., and Feris, R. (2016). Fully-adaptive Feature Sharing in Multi-Task Networks with Applications in Person Attribute Classification.

[Misra et al., 2016] Misra, I., Shrivastava, A., Gupta, A., and Hebert, M. (2016). Cross-stitch Networks for Multi-task Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

[Negahban and Wainwright, 2008] Negahban, S. and Wainwright, M. J. (2008). Joint support recovery under high-dimensional scaling: Benefits and perils of $\ell_{1,\infty}$-regularization. Advances in Neural Information Processing Systems, pages 1161–1168.
[Ramsundar et al., 2015] Ramsundar, B., Kearnes, S., Riley, P., Webster, D., Konerding, D., and Pande, V. (2015). Massively Multitask Networks for Drug Discovery.

[Rei, 2017] Rei, M. (2017). Semi-supervised Multitask Learning for Sequence Labeling. In Proceedings of ACL 2017.

[Ruder et al., 2017] Ruder, S., Bingel, J., Augenstein, I., and Søgaard, A. (2017). Sluice networks: Learning what to share between loosely related tasks.

[Saha et al., 2011] Saha, A., Rai, P., Daumé, H., and Venkatasubramanian, S. (2011). Online learning of multiple tasks and their relationships. Journal of Machine Learning Research, 15:643–651.

[Søgaard and Goldberg, 2016] Søgaard, A. and Goldberg, Y. (2016). Deep multi-task learning with low level tasks supervised at lower layers. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 231–235.
[Thrun and O'Sullivan, 1996] Thrun, S. and O'Sullivan, J. (1996). Discovering Structure in Multiple Learning Tasks: The TC Algorithm. Proceedings of the Thirteenth International Conference on Machine Learning, 28(1):5–5.

[Xue et al., 2007] Xue, Y., Liao, X., Carin, L., and Krishnapuram, B. (2007). Multi-Task Learning for Classification with Dirichlet Process Priors. Journal of Machine Learning Research, 8:35–63.

[Yang and Hospedales, 2017a] Yang, Y. and Hospedales, T. (2017a). Deep Multi-task Representation Learning: A Tensor Factorisation Approach. In Proceedings of ICLR 2017.

[Yang and Hospedales, 2017b] Yang, Y. and Hospedales, T. M. (2017b). Trace Norm Regularised Deep Multi-Task Learning. In Workshop track - ICLR 2017.

[Yu and Jiang, 2016] Yu, J. and Jiang, J. (2016). Learning Sentence Embeddings with Auxiliary Tasks for Cross-Domain Sentiment Classification. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 236–246.
[Yu et al., 2005] Yu, K., Tresp, V., and Schwaighofer, A. (2005). Learning Gaussian processes from multiple tasks. Proceedings of the International Conference on Machine Learning (ICML), 22:1012–1019.

[Yuan and Lin, 2006] Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67.

[Zhang and Huang, 2008] Zhang, C. H. and Huang, J. (2008). The sparsity and bias of the lasso selection in high-dimensional linear regression. Annals of Statistics, 36(4):1567–1594.

[Zhang and Yeung, 2010] Zhang, Y. and Yeung, D.-Y. (2010). A Convex Formulation for Learning Task Relationships in Multi-Task Learning. UAI, pages 733–742.

[Zhang et al., 2014] Zhang, Z., Luo, P., Loy, C. C., and Tang, X. (2014). Facial Landmark Detection by Deep Multi-task Learning. In European Conference on Computer Vision, pages 94–108.
arXiv:1706.04599v2 [cs.LG] 3 Aug 2017
# On Calibration of Modern Neural Networks
# Chuan Guo * 1 Geoff Pleiss * 1 Yu Sun * 1 Kilian Q. Weinberger 1
# Abstract
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions. | 1706.04599#0 | On Calibration of Modern Neural Networks | Confidence calibration -- the problem of predicting probability estimates
representative of the true correctness likelihood -- is important for
classification models in many applications. We discover that modern neural
networks, unlike those from a decade ago, are poorly calibrated. Through
extensive experiments, we observe that depth, width, weight decay, and Batch
Normalization are important factors influencing calibration. We evaluate the
performance of various post-processing calibration methods on state-of-the-art
architectures with image and document classification datasets. Our analysis and
experiments not only offer insights into neural network learning, but also
provide a simple and straightforward recipe for practical settings: on most
datasets, temperature scaling -- a single-parameter variant of Platt Scaling --
is surprisingly effective at calibrating predictions. | http://arxiv.org/pdf/1706.04599 | Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger | cs.LG | ICML 2017 | null | cs.LG | 20170614 | 20170803 | [
{
"id": "1610.08936"
},
{
"id": "1701.06548"
},
{
"id": "1612.01474"
},
{
"id": "1607.03594"
},
{
"id": "1604.07316"
},
{
"id": "1505.00387"
},
{
"id": "1703.04977"
},
{
"id": "1610.05256"
}
] |
[Figure 1 plot: panels "LeNet (1998)" and "ResNet (2016)" on CIFAR-100; top row shows confidence histograms with average confidence and accuracy marked, bottom row shows reliability diagrams with outputs and gap; x-axis: confidence.]
# 1. Introduction
Figure 1. Confidence histograms (top) and reliability diagrams (bottom) for a 5-layer LeNet (left) and a 110-layer ResNet (right) on CIFAR-100. Refer to the text below for detailed illustration.
Recent advances in deep learning have dramatically improved neural network accuracy (Simonyan & Zisserman, 2015; Srivastava et al., 2015; He et al., 2016; Huang et al., 2016; 2017). As a result, neural networks are now entrusted with making complex decisions in applications, such as object detection (Girshick, 2015), speech recognition (Hannun et al., 2014), and medical diagnosis (Caruana et al., 2015). In these settings, neural networks are an essential component of larger decision making pipelines.

If the detection network is not able to confidently predict the presence or absence of immediate obstructions, the car should rely more on the output of other sensors for braking. Alternatively, in automated health care, control should be passed on to human doctors when the confidence of a disease diagnosis network is low (Jiang et al., 2012). Specifically, a network should provide a calibrated confidence measure in addition to its prediction. In other words, the probability associated with the predicted class label should reflect its ground truth correctness likelihood.
In real-world decision making systems, classification networks must not only be accurate, but also should indicate when they are likely to be incorrect. As an example, consider a self-driving car that uses a neural network to detect pedestrians and other obstructions (Bojarski et al., 2016).

1Cornell University. Correspondence to: Chuan Guo <[email protected]>, Geoff Pleiss <[email protected]>, Yu Sun <[email protected]>.

Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
Calibrated confidence estimates are also important for model interpretability. Humans have a natural cognitive intuition for probabilities (Cosmides & Tooby, 1996). Good confidence estimates provide a valuable extra bit of information to establish trustworthiness with the user -- especially for neural networks, whose classification decisions are often difficult to interpret. Further, good probability estimates can be used to incorporate neural networks into other probabilistic models. For example, one can improve performance by combining network outputs with a language model in speech recognition (Hannun et al., 2014; Xiong et al., 2016), or with camera information for object detection (Kendall & Cipolla, 2016).
In 2005, Niculescu-Mizil & Caruana (2005) showed that neural networks typically produce well-calibrated probabilities on binary classification tasks. While neural networks today are undoubtedly more accurate than they were a decade ago, we discover with great surprise that modern neural networks are no longer well-calibrated. This is visualized in Figure 1, which compares a 5-layer LeNet (left) (LeCun et al., 1998) with a 110-layer ResNet (right) (He et al., 2016) on the CIFAR-100 dataset. The top row shows the distribution of prediction confidence (i.e. probabilities associated with the predicted label) as histograms. The average confidence of LeNet closely matches its accuracy, while the average confidence of the ResNet is substantially higher than its accuracy. This is further illustrated in the bottom row reliability diagrams (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005), which show accuracy as a function of confidence. We see that LeNet is well calibrated.
Our goal is not only to understand why neural networks have become miscalibrated, but also to identify what methods can alleviate this problem. In this paper, we demonstrate on several computer vision and NLP tasks that neural networks produce confidences that do not represent true probabilities. Additionally, we offer insight and intuition into network training and architectural trends that may cause miscalibration. Finally, we compare various post-processing calibration methods on state-of-the-art neural networks, and introduce several extensions of our own. Surprisingly, we find that a single-parameter variant of Platt scaling (Platt et al., 1999) -- which we refer to as temperature scaling -- is often the most effective method at obtaining calibrated probabilities. Because this method is straightforward to implement with existing deep learning frameworks, it can be easily adopted in practical settings.
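The temperature-scaling idea named above can be made concrete: divide the logits by a single scalar T before the softmax, which softens (T > 1) or sharpens (T < 1) the confidence without changing the predicted class. A minimal illustrative sketch in pure Python (the `softmax` helper below is hypothetical, not the paper's implementation; in practice T would be fitted on a validation set):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature: prob_k is proportional to exp(z_k / T).
    T > 1 flattens the distribution; T = 1 recovers the ordinary softmax."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.0]
print(max(softmax(logits)))                    # confident prediction
print(max(softmax(logits, temperature=2.5)))   # same argmax, lower confidence
```

Because dividing by T preserves the ordering of the logits, accuracy is unchanged; only the reported confidence moves.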
# 2. Definitions
The problem we address in this paper is supervised multi-class classification with neural networks. The input X ∈ 𝒳 and label Y ∈ 𝒴 = {1, . . . , K} are random variables that follow a ground truth joint distribution π(X, Y) = π(Y|X)π(X). Let h be a neural network with h(X) = (Ŷ, P̂), where Ŷ is a class prediction and P̂ is its associated confidence, i.e. probability of correctness. We would like the confidence estimate P̂ to be calibrated, which intuitively means that P̂ represents a true probability. For example, given 100 predictions, each with confidence of 0.8, we expect that 80 should be correctly classified. More formally, we define perfect calibration as

P(Ŷ = Y | P̂ = p) = p,   ∀p ∈ [0, 1]   (1)
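Definition (1) can be checked mechanically: for a predictor that is perfectly calibrated by construction, the empirical accuracy at confidence p converges to p. A small illustrative simulation (pure Python; `simulate` is a hypothetical helper, not from the paper):

```python
import random

random.seed(0)

def simulate(confidence, n=100_000):
    """Empirical accuracy of a predictor that is correct with probability
    exactly equal to its stated confidence (perfectly calibrated by design)."""
    correct = sum(random.random() < confidence for _ in range(n))
    return correct / n

for p in (0.6, 0.8, 0.95):
    print(f"confidence={p:.2f}  empirical accuracy={simulate(p):.3f}")
```

Miscalibration is precisely the failure of this agreement: an overconfident model's empirical accuracy at confidence p falls below p.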
where the probability is over the joint distribution. In all practical settings, achieving perfect calibration is impossible. Additionally, the probability in (1) cannot be computed using finitely many samples since P̂ is a continuous random variable. This motivates the need for empirical approximations that capture the essence of (1).
Reliability Diagrams (e.g. Figure 1 bottom) are a visual representation of model calibration (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005). These diagrams plot expected sample accuracy as a function of confidence. If the model is perfectly calibrated -- i.e. if (1) holds -- then the diagram should plot the identity function. Any deviation from a perfect diagonal represents miscalibration.
To estimate the expected accuracy from finite samples, we group predictions into M interval bins (each of size 1/M) and calculate the accuracy of each bin. Let Bm be the set of indices of samples whose prediction confidence falls into the interval Im = ((m−1)/M, m/M]. The accuracy of Bm is
acc(Bm) = (1/|Bm|) Σ_{i∈Bm} 1(ŷi = yi),
where ŷi and yi are the predicted and true class labels for sample i. Basic probability tells us that acc(Bm) is an unbiased and consistent estimator of P(Ŷ = Y | P̂ ∈ Im). We define the average confidence within bin Bm as
conf(Bm) = (1/|Bm|) Σ_{i∈Bm} p̂i,
where p̂i is the confidence for sample i. acc(Bm) and conf(Bm) approximate the left-hand and right-hand sides of (1) respectively for bin Bm. Therefore, a perfectly calibrated model will have acc(Bm) = conf(Bm) for all m ∈ {1, . . . , M}. Note that reliability diagrams do not display the proportion of samples in a given bin, and thus cannot be used to estimate how many samples are calibrated.
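The per-bin quantities acc(Bm) and conf(Bm) are easy to compute from (confidence, correctness) pairs; this is exactly the data a reliability diagram plots. A minimal illustrative sketch (pure Python; `bin_stats` is a hypothetical helper using the equal-width bins Im = ((m−1)/M, m/M] described above):

```python
def bin_stats(confidences, corrects, n_bins=10):
    """Return per-bin (accuracy, avg confidence, count) for equal-width bins
    Im = ((m-1)/M, m/M], as used in reliability diagrams."""
    bins = [[] for _ in range(n_bins)]
    for p, c in zip(confidences, corrects):
        # index of the bin whose half-open interval contains p
        # (the small epsilon keeps exact bin edges in the lower bin)
        m = min(n_bins - 1, max(0, int(p * n_bins - 1e-12)))
        bins[m].append((p, c))
    stats = []
    for members in bins:
        if members:
            acc = sum(c for _, c in members) / len(members)
            conf = sum(p for p, _ in members) / len(members)
            stats.append((acc, conf, len(members)))
        else:
            stats.append((None, None, 0))  # empty bin
    return stats

# toy example: three predictions, two in the top bin, one mid-range
print(bin_stats([0.95, 0.92, 0.55], [1, 0, 1], n_bins=10))
```

Plotting accuracy against confidence per bin yields the reliability diagram; the gap between the two in each bin is what ECE and MCE summarize below.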
Expected Calibration Error (ECE). While reliability diagrams are useful visual tools, it is more convenient to have a scalar summary statistic of calibration. Since statistics comparing two distributions cannot be comprehensive, previous works have proposed variants, each with a unique emphasis. One notion of miscalibration is the difference in expectation between confidence and accuracy, i.e.
E_P̂ [ |P(Ŷ = Y | P̂ = p) − p| ]   (2)
Expected Calibration Error (Naeini et al., 2015) -- or ECE -- approximates (2) by partitioning predictions into M equally-spaced bins (similar to the reliability diagrams) and
[Figure 2 plot: four panels: Varying Depth (ResNet, CIFAR-100); Varying Width (ResNet-14, CIFAR-100); Using Normalization (ConvNet, CIFAR-100); Varying Weight Decay (ResNet-110, CIFAR-100). Each panel plots Error and ECE against depth, filters per layer, Batch Normalization (without/with), or weight decay.]
Figure 2. The effect of network depth (far left), width (middle left), Batch Normalization (middle right), and weight decay (far right) on miscalibration, as measured by ECE (lower is better).
taking a weighted average of the bins' accuracy/confidence difference. More precisely,

ECE = Σ_{m=1}^{M} (|Bm|/n) |acc(Bm) − conf(Bm)|,   (3)
where n is the number of samples. The difference between acc and conf for a given bin represents the calibration gap (red bars in reliability diagrams -- e.g. Figure 1). We use ECE as the primary empirical metric to measure calibration. See Section S1 for more analysis of this metric.
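Equation (3) translates directly into code: bin the predictions, then take a count-weighted average of the per-bin |acc − conf| gaps. An illustrative sketch (pure Python; not the authors' reference implementation):

```python
def expected_calibration_error(confidences, corrects, n_bins=15):
    """ECE = sum_m (|B_m|/n) * |acc(B_m) - conf(B_m)| over equal-width bins."""
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for p, c in zip(confidences, corrects):
        m = min(n_bins - 1, max(0, int(p * n_bins - 1e-12)))
        bins[m].append((p, c))
    ece = 0.0
    for members in bins:
        if not members:
            continue  # empty bins contribute nothing
        acc = sum(c for _, c in members) / len(members)
        conf = sum(p for p, _ in members) / len(members)
        ece += (len(members) / n) * abs(acc - conf)
    return ece

# Overconfident toy model: stated confidence 0.9, actual accuracy 0.5,
# so the single occupied bin contributes a gap of |0.5 - 0.9| = 0.4.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 1, 0, 0]))
```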
# 3. Observing Miscalibration
The architecture and training procedures of neural networks have rapidly evolved in recent years. In this section we identify some recent changes that are responsible for the miscalibration phenomenon observed in Figure 1. Though we cannot claim causality, we find that increased model capacity and lack of regularization are closely related to model miscalibration.
Maximum Calibration Error (MCE). In high-risk applications where reliable confidence measures are absolutely necessary, we may wish to minimize the worst-case deviation between confidence and accuracy:
max_{p∈[0,1]} |P(Ŷ = Y | P̂ = p) − p| .   (4)
The Maximum Calibration Error (Naeini et al., 2015) -- or MCE -- estimates this deviation. Similarly to ECE, this approximation involves binning:

MCE = max_{m∈{1,...,M}} |acc(Bm) − conf(Bm)| .   (5)
We can visualize MCE and ECE on reliability diagrams. MCE is the largest calibration gap (red bars) across all bins, whereas ECE is a weighted average of all gaps. For perfectly calibrated classifiers, MCE and ECE both equal 0.
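MCE (5) uses the same binning as ECE but reports the single worst gap over non-empty bins rather than a weighted average. An illustrative sketch (pure Python; not the authors' code):

```python
def maximum_calibration_error(confidences, corrects, n_bins=15):
    """MCE = max_m |acc(B_m) - conf(B_m)| over non-empty equal-width bins."""
    bins = [[] for _ in range(n_bins)]
    for p, c in zip(confidences, corrects):
        m = min(n_bins - 1, max(0, int(p * n_bins - 1e-12)))
        bins[m].append((p, c))
    gaps = []
    for members in bins:
        if members:
            acc = sum(c for _, c in members) / len(members)
            conf = sum(p for p, _ in members) / len(members)
            gaps.append(abs(acc - conf))
    return max(gaps) if gaps else 0.0

# A mildly miscalibrated bin (gap 0.1) and a badly overconfident
# bin (gap 0.4); MCE reports the maximum of the two.
print(maximum_calibration_error([0.1, 0.9, 0.9], [0, 1, 0]))
```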
Negative log likelihood is a standard measure of a probabilistic model's quality (Friedman et al., 2001). It is also referred to as the cross entropy loss in the context of deep learning (Bengio et al., 2015). Given a probabilistic model π̂(Y|X) and n samples, NLL is defined as:
Model capacity. The model capacity of neural networks has increased at a dramatic pace over the past few years. It is now common to see networks with hundreds, if not thousands of layers (He et al., 2016; Huang et al., 2016) and hundreds of convolutional filters per layer (Zagoruyko & Komodakis, 2016). Recent work shows that very deep or wide models are able to generalize better than smaller ones, while exhibiting the capacity to easily fit the training set (Zhang et al., 2017).
1706.04599 | 15 | Although increasing depth and width may reduce classi- ï¬cation error, we observe that these increases negatively affect model calibration. Figure 2 displays error and ECE as a function of depth and width on a ResNet trained on CIFAR-100. The far left ï¬gure varies depth for a network with 64 convolutional ï¬lters per layer, while the middle left ï¬gure ï¬xes the depth at 14 layers and varies the number of convolutional ï¬lters per layer. Though even the small- est models in the graph exhibit some degree of miscalibra- tion, the ECE metric grows substantially with model ca- pacity. During training, after the model is able to correctly classify (almost) all training samples, NLL can be further minimized by increasing the conï¬dence of predictions. In- creased model capacity will lower training NLL, and thus the model will be more (over)conï¬dent on average.
\mathcal{L} = -\sum_{i=1}^{n} \log \hat{\pi}(y_i \mid x_i) \qquad (6)
It is a standard result (Friedman et al., 2001) that, in expectation, NLL is minimized if and only if \hat{\pi}(Y \mid X) recovers the ground truth conditional distribution \pi(Y \mid X).
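The NLL objective in equation (6) is straightforward to evaluate; a minimal sketch (names are ours):

```python
import math

def nll(probs, labels):
    """Negative log likelihood: -sum_i log p_hat(y_i | x_i), where
    probs[i] is the predicted distribution over classes for sample i."""
    return -sum(math.log(probs[i][y]) for i, y in enumerate(labels))

# Two samples, true classes 0 and 1.
loss = nll([[0.8, 0.2], [0.4, 0.6]], [0, 1])
assert abs(loss - (-math.log(0.8) - math.log(0.6))) < 1e-12
```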
Batch Normalization (Ioffe & Szegedy, 2015) improves the optimization of neural networks by minimizing distribution shifts in activations within the neural network's hidden layers. Recent research suggests that these normalization techniques have enabled the development of very deep architectures, such as ResNets (He et al., 2016) and DenseNets (Huang et al., 2017). It has been shown that Batch Normalization improves training time, reduces the need for additional regularization, and can in some cases improve the accuracy of networks.

Figure 3. Test error and NLL of a 110-layer ResNet with stochastic depth on CIFAR-100 during training. NLL is scaled by a constant to fit in the figure. Learning rate drops by 10x at epochs 250 and 375. The shaded area marks between epochs at which the best validation loss and best validation error are produced.
While it is difficult to pinpoint exactly how Batch Normalization affects the final predictions of a model, we do observe that models trained with Batch Normalization tend to be more miscalibrated. In the middle right plot of Figure 2, we see that a 6-layer ConvNet obtains worse calibration when Batch Normalization is applied, even though classification accuracy improves slightly. We find that this result holds regardless of the hyperparameters used on the Batch Normalization model (i.e. low or high learning rate, etc.).
Weight decay, which used to be the predominant regularization mechanism for neural networks, is decreasingly utilized when training modern neural networks. Learning theory suggests that regularization is necessary to prevent overfitting, especially as model capacity increases (Vapnik, 1998). However, due to the apparent regularization effects of Batch Normalization, recent research seems to suggest that models with less L2 regularization tend to generalize better (Ioffe & Szegedy, 2015). As a result, it is now common to train models with little weight decay, if any at all. The top performing ImageNet models of 2015 all use an order of magnitude less weight decay than models of previous years (He et al., 2016; Simonyan & Zisserman, 2015).
We find that training with less weight decay has a negative impact on calibration. The far right plot in Figure 2 displays training error and ECE for a 110-layer ResNet with varying amounts of weight decay. The only other forms of regularization are data augmentation and Batch Normalization. We observe that calibration and accuracy are not optimized by the same parameter setting. While the model exhibits both over-regularization and under-regularization with respect to classification error, it does not appear that calibration is negatively impacted by having too much weight decay. Model calibration continues to improve when more regularization is added, well after the point of achieving optimal accuracy. The slight uptick at the end of the graph may be an artifact of using a weight decay factor that impedes optimization.
NLL can be used to indirectly measure model calibration. In practice, we observe a disconnect between NLL and accuracy, which may explain the miscalibration in Figure 2. This disconnect occurs because neural networks can overfit to NLL without overfitting to the 0/1 loss. We observe this trend in the training curves of some miscalibrated models. Figure 3 shows test error and NLL (rescaled to match error) on CIFAR-100 as training progresses. Both error and NLL immediately drop at epoch 250, when the learning rate is dropped; however, NLL overfits during the remainder of training. Surprisingly, overfitting to NLL is beneficial to classification accuracy. On CIFAR-100, test error drops from 29% to 27% in the region where NLL overfits. This phenomenon renders a concrete explanation of miscalibration: the network learns better classification accuracy at the expense of well-modeled probabilities.
We can connect this finding to recent work examining the generalization of large neural networks. Zhang et al. (2017) observe that deep neural networks seemingly violate the common understanding of learning theory that large models with little regularization will not generalize well. The observed disconnect between NLL and 0/1 loss suggests that these high capacity models are not necessarily immune from overfitting, but rather, overfitting manifests in probabilistic error rather than classification error.
# 4. Calibration Methods
In this section, we first review existing calibration methods, and introduce new variants of our own. All methods are post-processing steps that produce (calibrated) probabilities. Each method requires a hold-out validation set, which in practice can be the same set used for hyperparameter tuning. We assume that the training, validation, and test sets are drawn from the same distribution.
# 4.1. Calibrating Binary Models
We first introduce calibration in the binary setting, i.e. Y = {0, 1}. For simplicity, throughout this subsection,
we assume the model outputs only the confidence for the positive class.1 Given a sample xi, we have access to p̂i, the network's predicted probability of yi = 1, as well as zi ∈ R, the network's non-probabilistic output, or logit. The predicted probability p̂i is derived from zi using a sigmoid function σ; i.e. p̂i = σ(zi). Our goal is to produce a calibrated probability q̂i based on yi, p̂i, and zi.
Histogram binning (Zadrozny & Elkan, 2001) is a simple non-parametric calibration method. In a nutshell, all uncalibrated predictions p̂i are divided into mutually exclusive bins B1, . . . , BM. Each bin is assigned a calibrated score θm; i.e. if p̂i is assigned to bin Bm, then q̂i = θm. At test time, if prediction p̂te falls into bin Bm, then the calibrated prediction q̂te is θm. More precisely, for a suitably chosen M (usually small), we first define bin boundaries 0 = a1 ≤ a2 ≤ . . . ≤ aM+1 = 1, where the bin Bm is defined by the interval (am, am+1]. Typically the bin boundaries are either chosen to be equal length intervals or to equalize the number of samples in each bin. The predictions θi are chosen to minimize the bin-wise squared loss:
\min_{\theta_1, \ldots, \theta_M} \sum_{m=1}^{M} \sum_{i=1}^{n} \mathbf{1}(a_m \le \hat{p}_i < a_{m+1}) \, (\theta_m - y_i)^2 \qquad (7)
where 1 is the indicator function. Given fixed bin boundaries, the solution to (7) results in θm that correspond to the average number of positive-class samples in bin Bm.
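Fitting and applying histogram binning amounts to averaging labels per bin; a minimal pure-Python sketch (equal-width bins, and the fallback value for empty bins is our own choice):

```python
def fit_histogram_binning(probs, labels, n_bins=10):
    """Equal-width histogram binning: theta_m is the empirical
    positive-class frequency among validation samples in bin m."""
    edges = [m / n_bins for m in range(n_bins + 1)]
    thetas = []
    for m in range(n_bins):
        members = [y for p, y in zip(probs, labels)
                   if edges[m] < p <= edges[m + 1] or (m == 0 and p == 0)]
        # fallback for empty bins: the bin's left edge (our assumption)
        thetas.append(sum(members) / len(members) if members else edges[m])
    return edges, thetas

def calibrate(p, edges, thetas):
    """Return theta_m for the bin containing p."""
    for m in range(len(thetas)):
        if p <= edges[m + 1]:
            return thetas[m]
    return thetas[-1]

# Overconfident scores near 0.9 that are right only half the time
edges, thetas = fit_histogram_binning(
    [0.91, 0.92, 0.93, 0.94], [1, 0, 1, 0], n_bins=10)
assert calibrate(0.95, edges, thetas) == 0.5
```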
Isotonic regression (Zadrozny & Elkan, 2002), arguably the most common non-parametric calibration method, learns a piecewise constant function f to transform uncalibrated outputs; i.e. q̂i = f(p̂i). Specifically, isotonic regression produces f to minimize the square loss \sum_{i=1}^{n} (f(\hat{p}_i) - y_i)^2. Because f is constrained to be piecewise constant, we can write the optimization problem as:
\min_{M, \, \theta_1, \ldots, \theta_M, \, a_1, \ldots, a_{M+1}} \sum_{m=1}^{M} \sum_{i=1}^{n} \mathbf{1}(a_m \le \hat{p}_i < a_{m+1}) \, (\theta_m - y_i)^2

subject to 0 = a_1 \le a_2 \le \ldots \le a_{M+1} = 1 and \theta_1 \le \theta_2 \le \ldots \le \theta_M.
where M is the number of intervals; a1, . . . , aM+1 are the interval boundaries; and θ1, . . . , θM are the function values. Under this parameterization, isotonic regression is a strict generalization of histogram binning in which the bin boundaries and bin predictions are jointly optimized.
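The isotonic least-squares fit can be computed with the classic pool-adjacent-violators (PAV) algorithm; below is our own minimal pure-Python sketch, not the implementation used in the paper:

```python
def pav(probs, labels):
    """Pool Adjacent Violators: fits the isotonic (non-decreasing)
    least-squares calibration map at the given validation scores."""
    order = sorted(range(len(probs)), key=lambda i: probs[i])
    # blocks of [mean value, weight], merged until non-decreasing
    blocks = []
    for i in order:
        blocks.append([float(labels[i]), 1])
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    # expand block means back to per-sample calibrated values
    fitted = [0.0] * len(probs)
    pos = 0
    for v, w in blocks:
        for i in order[pos:pos + w]:
            fitted[i] = v
        pos += w
    return fitted

# A violation (higher score, lower outcome) is pooled to the block mean.
assert pav([0.2, 0.6, 0.9], [0, 1, 0]) == [0.0, 0.5, 0.5]
```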
Bayesian Binning into Quantiles (BBQ) (Naeini et al., 2015) is an extension of histogram binning using Bayesian model averaging. Essentially, BBQ marginalizes out all possible binning schemes to produce q̂i. More formally, a binning scheme s is a pair (M, I) where M is the number of bins, and I is a corresponding partitioning of [0, 1] into disjoint intervals (0 = a1 ≤ a2 ≤ . . . ≤ aM+1 = 1). The parameters of a binning scheme are θ1, . . . , θM. Under this framework, histogram binning and isotonic regression both produce a single binning scheme, whereas BBQ considers a space S of all possible binning schemes for the validation dataset D. BBQ performs Bayesian averaging of the probabilities produced by each scheme:

1 This is in contrast with the setting in Section 2, in which the model produces both a class prediction and confidence.
P(\hat{q}_{te} \mid \hat{p}_{te}, D) = \sum_{s \in S} P(\hat{q}_{te}, S = s \mid \hat{p}_{te}, D) = \sum_{s \in S} P(\hat{q}_{te} \mid \hat{p}_{te}, S = s, D) \, P(S = s \mid D).
where P(q̂te | p̂te, S = s, D) is the calibrated probability using binning scheme s. Using a uniform prior, the weight P(S = s | D) can be derived using Bayes' rule:
P(S = s \mid D) = \frac{P(D \mid S = s)}{\sum_{s' \in S} P(D \mid S = s')}.
The parameters θ1, . . . , θM can be viewed as parameters of M independent binomial distributions. Hence, by placing a Beta prior on θ1, . . . , θM, we can obtain a closed form expression for the marginal likelihood P(D | S = s). This allows us to compute P(q̂te | p̂te, D) for any test input.
Platt scaling (Platt et al., 1999) is a parametric approach to calibration, unlike the other approaches. The non-probabilistic predictions of a classifier are used as features for a logistic regression model, which is trained on the validation set to return probabilities. More specifically, in the context of neural networks (Niculescu-Mizil & Caruana, 2005), Platt scaling learns scalar parameters a, b ∈ R and outputs q̂i = σ(azi + b) as the calibrated probability. Parameters a and b can be optimized using the NLL loss over the validation set. It is important to note that the neural network's parameters are fixed during this stage.
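A minimal sketch of Platt scaling, fitting a and b by gradient descent on the validation NLL (the learning rate and step count are arbitrary choices of ours; the network's logits are treated as fixed inputs):

```python
import math

def fit_platt(logits, labels, lr=0.1, steps=2000):
    """Learn a, b so that sigmoid(a*z + b) matches the empirical labels,
    by gradient descent on the binary NLL (the network stays fixed)."""
    a, b = 1.0, 0.0
    n = len(logits)
    for _ in range(steps):
        ga = gb = 0.0
        for z, y in zip(logits, labels):
            p = 1.0 / (1.0 + math.exp(-(a * z + b)))
            ga += (p - y) * z / n   # d NLL / d a
            gb += (p - y) / n       # d NLL / d b
        a -= lr * ga
        b -= lr * gb
    return a, b

# Overconfident logits: labels agree with large scores only 75% of the time
logits = [4.0, 4.0, 4.0, 4.0, -4.0, -4.0, -4.0, -4.0]
labels = [1, 1, 1, 0, 0, 0, 0, 1]
a, b = fit_platt(logits, labels)
p_hi = 1.0 / (1.0 + math.exp(-(a * 4.0 + b)))
assert 0.6 < p_hi < 0.9   # pulled toward the empirical 75%
```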
# 4.2. Extension to Multiclass Models
For classification problems involving K > 2 classes, we return to the original problem formulation. The network outputs a class prediction ŷi and confidence score p̂i for each input xi. In this case, the network logits zi are vectors, where ŷi = argmax_k z_i^{(k)}, and p̂i is typically derived using the softmax function \sigma_{SM}: \sigma_{SM}(z_i)^{(k)} = \exp(z_i^{(k)}) / \sum_{j=1}^{K} \exp(z_i^{(j)}), with \hat{p}_i = \max_k \sigma_{SM}(z_i)^{(k)}.
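The softmax, the class prediction, and the confidence score can be sketched as (pure Python; the max-subtraction for numerical stability is a standard trick, not something the paper specifies):

```python
import math

def softmax(z):
    """Softmax over a logit vector, with max-subtraction for stability."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

z = [2.0, 1.0, 0.1]          # logits for a 3-class problem
probs = softmax(z)
y_hat = max(range(len(z)), key=probs.__getitem__)  # class prediction
p_hat = max(probs)                                 # confidence score
assert y_hat == 0 and abs(sum(probs) - 1.0) < 1e-12
```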
[Table 1 data: ECE (%) for 15 dataset/model pairs (Birds and Cars with ResNet 50; CIFAR-10 and CIFAR-100 each with ResNet 110, ResNet 110 (SD), Wide ResNet 32, DenseNet 40, and LeNet 5; ImageNet with DenseNet 161 and ResNet 152; SVHN with ResNet 152 (SD)) under seven settings: Uncalibrated, Hist. Binning, Isotonic, BBQ, Temp. Scaling, Vector Scaling, and Matrix Scaling. The numeric columns were flattened column-wise by extraction and are not reliably recoverable here.]
[Table 1 data, continued: remaining ECE values, flattened column-wise by extraction.]
Table 1. ECE (%) (with M = 15 bins) on standard vision and NLP datasets before calibration and with various calibration methods. The number following a model's name denotes the network depth.
Extension of binning methods. One common way of extending binary calibration methods to the multiclass setting is by treating the problem as K one-versus-all problems (Zadrozny & Elkan, 2002). For k = 1, . . . , K, we form a binary calibration problem where the label is 1(yi = k) and the predicted probability is \sigma_{SM}(z_i)^{(k)}. This gives us K calibration models, each for a particular class. At test time, we obtain an unnormalized probability vector [q̂i^{(1)}, . . . , q̂i^{(K)}], where q̂i^{(k)} is the calibrated probability for class k. The new class prediction ŷi′ is the argmax of the vector, and the new confidence q̂i′ is the max of the vector normalized by \sum_k q̂i^{(k)}. This extension can be applied to histogram binning, isotonic regression, and BBQ.
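Assuming K per-class binary calibrators have already been fit, combining them follows the normalize-and-argmax recipe above; a sketch with toy calibrators (the shrinkage maps below are illustrative placeholders, not a real calibration method):

```python
def one_vs_all_calibrate(class_scores, calibrators):
    """Apply one binary calibrator per class, then renormalize.
    `calibrators[k]` maps the raw score for class k to a calibrated one."""
    q = [cal(p) for cal, p in zip(calibrators, class_scores)]
    s = sum(q)
    return [v / s for v in q]

# Toy per-class calibrators that shrink scores toward 0.5 (illustrative)
cals = [lambda p: 0.5 + 0.5 * (p - 0.5)] * 2
q = one_vs_all_calibrate([0.9, 0.1], cals)
assert abs(sum(q) - 1.0) < 1e-12 and q[0] > q[1]
```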
Matrix and vector scaling are two multi-class extensions of Platt scaling. Let zi be the logits vector produced before the softmax layer for input xi. Matrix scaling applies a linear transformation Wzi + b to the logits:
T is called the temperature, and it "softens" the softmax (i.e. raises the output entropy) with T > 1. As T → ∞, the probability q̂_i approaches 1/K, which represents maximum uncertainty. With T = 1, we recover the original probability p̂_i. As T → 0, the probability collapses to a point mass (i.e. q̂_i = 1). T is optimized with respect to NLL on the validation set. Because the parameter T does not change the maximum of the softmax function, the class prediction ŷ′_i remains unchanged. In other words, temperature scaling does not affect the model's accuracy.
Temperature scaling is commonly used in settings such as knowledge distillation (Hinton et al., 2015) and statistical mechanics (Jaynes, 1957). To the best of our knowledge, it has not previously been used in the context of calibrating probabilistic models.3 The model is equivalent to maximizing the entropy of the output probability distribution subject to certain constraints on the logits (see Section S2).
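The limiting behavior of temperature scaling (uniform 1/K as T → ∞, a point mass on the argmax as T → 0) can be checked numerically. A small illustrative script, not from the paper:

```python
import math

def softmax_T(logits, T):
    """Temperature-scaled softmax, computed stably by subtracting the max logit."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [3.0, 1.0, 0.0]
uniform = softmax_T(logits, 1e6)   # T -> infinity: close to 1/K per class
peaked = softmax_T(logits, 1e-3)   # T -> 0: nearly all mass on the argmax
```

Note that the argmax is the same at every temperature, which is why accuracy is unaffected.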
q̂_i = max_k σ_SM(W z_i + b)^(k),    ŷ′_i = argmax_k (W z_i + b)^(k)    (8)
The parameters W and b are optimized with respect to NLL on the validation set. As the number of parameters for matrix scaling grows quadratically with the number of classes K, we define vector scaling as a variant where W is restricted to be a diagonal matrix.
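Applying a fitted transform is straightforward; a minimal sketch (the W and b values here are placeholders — in practice they are fit by minimizing validation NLL, e.g. with L-BFGS):

```python
import math

def softmax(z):
    m = max(z)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def matrix_scale(logits, W, b):
    """Matrix scaling: q = softmax(W z + b), with a full K x K matrix W."""
    K = len(logits)
    z = [sum(W[i][j] * logits[j] for j in range(K)) + b[i] for i in range(K)]
    return softmax(z)

def vector_scale(logits, w, b):
    """Vector scaling: the special case where W is diagonal (K + K parameters)."""
    return softmax([wi * zi + bi for wi, zi, bi in zip(w, logits, b)])
```

Vector scaling keeps the parameter count linear in K, which is why it remains practical for datasets like ImageNet (K = 1000).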
Temperature scaling, the simplest extension of Platt scaling, uses a single scalar parameter T > 0 for all classes. Given the logit vector z_i, the new confidence prediction is

q̂_i = max_k σ_SM(z_i / T)^(k).    (9)
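Fitting T against validation NLL can be sketched as follows. The crude grid search below stands in for the gradient-based optimization the text describes; `fit_temperature` is an illustrative name, not the authors' API.

```python
import math

def softmax_T(logits, T):
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fit_temperature(val_logits, val_labels, grid=None):
    """Pick the T minimizing validation NLL (grid search in place of L-BFGS)."""
    grid = grid or [0.5 + 0.1 * i for i in range(50)]  # candidate T in [0.5, 5.4]
    def nll(T):
        return -sum(math.log(softmax_T(z, T)[y])
                    for z, y in zip(val_logits, val_labels))
    return min(grid, key=nll)
```

An overconfident model (high-magnitude logits but mediocre accuracy) is assigned T > 1, which softens its confidences without changing any prediction.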
# 4.3. Other Related Works
Calibration and confidence scores have been studied in various contexts in recent years. Kuleshov & Ermon (2016) study the problem of calibration in the online setting, where the inputs can come from a potentially adversarial source. Kuleshov & Liang (2015) investigate how to produce calibrated probabilities when the output space is a structured object. Lakshminarayanan et al. (2016) use ensembles of networks to obtain uncertainty estimates. Pereyra et al. (2017) penalize overconfident predictions as a form of regularization. Hendrycks & Gimpel (2017) use confidence
3 To highlight the connection with prior works we define temperature scaling in terms of 1/T instead of a multiplicative scalar.
scores to determine if samples are out-of-distribution.
Bayesian neural networks (Denker & Lecun, 1990; MacKay, 1992) return a probability distribution over outputs as an alternative way to represent model uncertainty. Gal & Ghahramani (2016) draw a connection between Dropout (Srivastava et al., 2014) and model uncertainty, claiming that sampling models with dropped nodes is a way to estimate the probability distribution over all possible models for a given sample. Kendall & Gal (2017) combine this approach with a model that outputs a predictive mean and variance for each data point. This notion of uncertainty is not restricted to classification problems. Additionally, neural networks can be used in conjunction with Bayesian models that output complete distributions. For example, deep kernel learning (Wilson et al., 2016a;b; Al-Shedivat et al., 2016) combines deep neural networks with Gaussian processes on classification and regression problems. In contrast, our framework, which does not augment the neural network model, returns a confidence score rather than a distribution of possible outputs.
# 5. Results
We apply the calibration methods in Section 4 to image classification and document classification neural networks. For image classification we use 6 datasets:
1. Caltech-UCSD Birds (Welinder et al., 2010): 200 bird species. 5994/2897/2897 images for train/validation/test sets.
2. Stanford Cars (Krause et al., 2013): 196 classes of cars by make, model, and year. 8041/4020/4020 images for train/validation/test.

3. ImageNet 2012 (Deng et al., 2009): Natural scene images from 1000 classes. 1.3 million/25,000/25,000 images for train/validation/test.
4. CIFAR-10/CIFAR-100 (Krizhevsky & Hinton, 2009): Color images from 10/100 classes. 45,000/5,000/10,000 images for train/validation/test.

5. Street View House Numbers (SVHN) (Netzer et al., 2011): 32 × 32 colored images of cropped-out house numbers from Google Street View. 598,388/6,000/26,032 images for train/validation/test.
We train state-of-the-art convolutional networks: ResNets (He et al., 2016), ResNets with stochastic depth (SD) (Huang et al., 2016), Wide ResNets (Zagoruyko & Komodakis, 2016), and DenseNets (Huang et al., 2017). We use the data preprocessing, training procedures, and hyperparameters as described in each paper. For Birds and Cars, we fine-tune networks pretrained on ImageNet.

For document classification we experiment with 4 datasets:
1. 20 News: News articles, partitioned into 20 categories by content. 9034/2259/7528 documents for train/validation/test.
2. Reuters: News articles, partitioned into 8 categories by topic. 4388/1097/2189 documents for train/validation/test.
3. Stanford Sentiment Treebank (SST) (Socher et al., 2013): Movie reviews, represented as sentence parse trees that are annotated by sentiment. Each sample includes a coarse binary label and a fine-grained 5-class label. As described in (Tai et al., 2015), the training/validation/test sets contain 6920/872/1821 documents for binary, and 544/1101/2210 for fine-grained.
On 20 News and Reuters, we train Deep Averaging Networks (DANs) (Iyyer et al., 2015) with 3 feed-forward layers and Batch Normalization. On SST, we train TreeLSTMs (Long Short-Term Memory) (Tai et al., 2015). For both models we use the default hyperparameters suggested by the authors.
Calibration Results. Table 1 displays model calibration, as measured by ECE (with M = 15 bins), before and after applying the various methods (see Section S3 for MCE, NLL, and error tables). It is worth noting that most datasets and models experience some degree of miscalibration, with ECE typically between 4 and 10%. This is not architecture specific: we observe miscalibration on convolutional networks (with and without skip connections), recurrent networks, and deep averaging networks. The two notable exceptions are SVHN and Reuters, both of which experience ECE values below 1%. Both of these datasets have very low error (1.98% and 2.97%, respectively); and therefore the ratio of ECE to error is comparable to other datasets.
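The ECE metric used in these comparisons can be computed as sketched below (a minimal implementation of the standard binned estimator with M equal-width confidence bins, not the authors' code):

```python
def ece(confidences, correct, n_bins=15):
    """Expected Calibration Error: the weighted average gap between average
    confidence and accuracy over equal-width confidence bins (lo, hi]."""
    n = len(confidences)
    total = 0.0
    for m in range(n_bins):
        lo, hi = m / n_bins, (m + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue  # empty bins contribute nothing
        acc = sum(correct[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        total += (len(idx) / n) * abs(acc - avg_conf)
    return total
```

A perfectly calibrated model has zero gap in every bin, so its ECE is 0; an overconfident model accumulates positive gaps weighted by bin occupancy.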