2307.15217 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Summary: Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.

Source: http://arxiv.org/pdf/2307.15217
Authors: Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
Categories: cs.AI, cs.CL, cs.LG (primary: cs.AI)
Published: 2023-07-27 | Updated: 2023-09-11

References (chunks 123-136):
Amartya Sen. Social choice theory. Handbook of mathematical economics, 3:1073–1181, 1986.
Rohin Shah, Noah Gundotra, Pieter Abbeel, and Anca Dragan. On the feasibility of learning, rather than assuming, human biases for reward inference. In International Conference on Machine Learning, pages 5670–5679. PMLR, 2019.
Rohin Shah, Vikrant Varma, Ramana Kumar, Mary Phuong, Victoria Krakovna, Jonathan Uesato, and Zac Kenton. Goal misgeneralization: Why correct specifications aren't enough for correct goals. arXiv preprint arXiv:2210.01790, 2022.
Steven Shapin and Simon Schaffer. Leviathan and the air-pump: Hobbes, Boyle, and the experimental life. Princeton University Press, 2011.
Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Torralba, Jacob Andreas, and Dieter Fox. Correcting robot plans with natural language feedback. arXiv preprint arXiv:2204.05186, 2022.
Yonadav Shavit. What does it take to catch a chinchilla? verifying rules on large-scale neural network training via compute monitoring, 2023.
Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. "Do anything now": Characterizing and evaluating in-the-wild jailbreak prompts on large language models. arXiv preprint arXiv:2308.03825, 2023.
Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, Mary Phuong, Jess Whittlestone, Jade Leung, Daniel Kokotajlo, Nahema Marchal, Markus Anderljung, Noam Kolt, et al. Model evaluation for extreme risks. arXiv preprint arXiv:2305.15324, 2023.
Umer Siddique, Abhinav Sinha, and Yongcan Cao. Fairness in preference-based reinforcement learning, 2023.
David Silver, Satinder Singh, Doina Precup, and Richard S Sutton. Reward is enough. Artificial Intelligence, 299:103535, 2021.
Joar Skalse and Alessandro Abate. Misspecification in inverse reinforcement learning. arXiv preprint arXiv:2212.03201, 2022a.
Joar Skalse, Nikolaus HR Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward hacking. arXiv preprint arXiv:2209.13085, 2022.
Joar Max Viktor Skalse and Alessandro Abate. The reward hypothesis is false. In NeurIPS ML Safety Workshop, 2022b.
Joar Max Viktor Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, and Adam Gleave. Invariance in policy optimisation and partial identifiability in reward learning. In International Conference on Machine Learning, pages 32033–32058. PMLR, 2023.
Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. Offline rl for natural language generation with implicit language q learning, 2022. URL https://arxiv.org/abs/2206.11871.
Aaron J. Snoswell and Jean Burgess. The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense, November 2022. URL http://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445.
Irene Solaiman and Christy Dennison. Process for adapting language models to society (palms) with values-targeted datasets. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 5861–5873. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/2e855f9489df0712b4bd8ea9e2848c5a-Paper.pdf.
Ziang Song, Tianle Cai, Jason D Lee, and Weijie J Su. Reward collapse in aligning large language models. arXiv preprint arXiv:2305.17608, 2023.
Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, and Chelsea Finn. Learning to be safe: Deep rl with a safety critic. arXiv preprint arXiv:2010.14603, 2020.
Jacob Steinhardt. Emergent Deception and Emergent Optimization, February 2023. URL https://bounded-regret.ghost.io/emergent-deception-optimization/.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020.
Theodore R Sumers, Mark K Ho, Robert D Hawkins, Karthik Narasimhan, and Thomas L Griffiths. Learning rewards from linguistic feedback. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6002–6010, 2021.
Ran Tian, Masayoshi Tomizuka, Anca Dragan, and Andrea Bajcsy. Towards Modeling and Influencing the Dynamics of Human Learning, January 2023. URL http://arxiv.org/abs/2301.00901. arXiv:2301.00901 [cs].
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023.
Alexander M Turner. Seeking power is convergently instrumental in a broad class of environments, 2021. URL https://www.alignmentforum.org/s/fSMbebQyR4wheRrvk/p/hzeLSQ9nwDkPc4KNt.
Alexander Matt Turner and Prasad Tadepalli. Parametrically retargetable decision-makers tend to seek power. ArXiv, abs/2206.13477, 2022.
Alexander Matt Turner, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. Optimal policies tend to seek power. In Neural Information Processing Systems, 2019.
Victor Uc-Cetina, Nicolas Navarro-Guerrero, Anabel Martin-Gonzalez, Cornelius Weber, and Stefan Wermter. Survey on reinforcement learning for language processing. Artificial Intelligence Review, 56(2):1543–1575, 2023.
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.
Peter Vamplew, Benjamin J Smith, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Diederik M Roijers, Conor F Hayes, Fredrik Heintz, Patrick Mannion, Pieter JK Libin, et al. Scalar reward is not enough: A response to silver, singh, precup and sutton (2021). Autonomous Agents and Multi-Agent Systems, 36(2):41, 2022.
Veniamin Veselovsky, Manoel Horta Ribeiro, and Robert West. Artificial artificial artificial intelligence: Crowd workers widely use large language models for text production tasks. arXiv preprint arXiv:2306.07899, 2023.
James Vincent. Microsoft's Bing is an emotionally manipulative liar, and people love it, February 2023. URL https://www.theverge.com/2023/2/15/23599072/microsoft-ai-bing-personality-conversations-spy-employees-webcams.
Alex Wan, Eric Wallace, Sheng Shen, and Dan Klein. Poisoning language models during instruction tuning. In International Conference on Machine Learning, 2023.
Tony Tong Wang, Adam Gleave, Nora Belrose, Tom Tseng, Joseph Miller, Michael D Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, and Stuart Russell. Adversarial policies beat professional-level go ais. arXiv preprint arXiv:2211.00241, 2022.
Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. Aligning large language models with human: A survey, 2023.
Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. Jailbroken: How does llm safety training fail? arXiv preprint arXiv:2307.02483, 2023.
Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. Ethical and social risks of harm from language models, 2021.
Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, and Po-Sen Huang. Challenges in detoxifying language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2447–2469, Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-emnlp.210. URL https://aclanthology.org/2021.findings-emnlp.210.
Jess Whittlestone, Kai Arulkumaran, and Matthew Crosby. The societal implications of deep reinforcement learning. Journal of Artificial Intelligence Research, 70:1003–1030, 2021.
Nils Wilde, Erdem Biyik, Dorsa Sadigh, and Stephen L Smith. Learning reward functions from scale feedback. In Conference on Robot Learning, pages 353–362. PMLR, 2022.
Simon Willison. Prompt injection. 2023. URL https://simonwillison.net/series/prompt-injection/.
Christian Wirth, Riad Akrour, Gerhard Neumann, Johannes Fürnkranz, et al. A survey of preference-based reinforcement learning methods. Journal of Machine Learning Research, 18(136):1–46, 2017.
Yotam Wolf, Noam Wies, Yoav Levine, and Amnon Shashua. Fundamental limitations of alignment in large language models. arXiv preprint arXiv:2304.11082, 2023.
Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. Recursively summarizing books with human feedback, 2021a.
Xian Wu, Wenbo Guo, Hua Wei, and Xinyu Xing. Adversarial policy training against deep reinforcement learning. In USENIX Security Symposium, pages 1883–1900, 2021b.
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language model training, 2023.
Blake Wulfe, Logan Michael Ellis, Jean Mercat, Rowan Thomas McAllister, and Adrien Gaidon. Dynamics-aware comparison of learned reward functions. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=CALFyKVs87.
Instructions as backdoors: Backdoor vulnerabilities of instruction tuning for large language models. arXiv preprint arXiv:2305.14710, 2023a.
Wanqiao Xu, Shi Dong, Dilip Arumugam, and Benjamin Van Roy. Shattering the agent-environment interface for fine-tuning inclusive language models. arXiv preprint arXiv:2305.11455, 2023b.
Tianpei Yang, Hongyao Tang, Chenjia Bai, Jinyi Liu, Jianye Hao, Zhaopeng Meng, Peng Liu, and Zhen Wang. Exploration in deep reinforcement learning: a comprehensive survey. arXiv preprint arXiv:2109.06668, 2021.
Georgios N Yannakakis and John Hallam. Ranking vs. preference: a comparative study of self-reporting. In Affective Computing and Intelligent Interaction: 4th International Conference, ACII 2011, Memphis, TN, USA, October 9–12, 2011, Proceedings, Part I 4, pages 437–446. Springer, 2011.
Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. Selfee: Iterative self-revising llm empowered by self-feedback generation, 2023. URL https://kaistai.github.io/SelFee/.
Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, and Fei Xia. Language to rewards for robotic skill synthesis. arXiv preprint arXiv:2306.08647, 2023.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback without tears, 2023.
Sheng Yue, Guanbo Wang, Wei Shao, Zhaofeng Zhang, Sen Lin, Ju Ren, and Junshan Zhang. Clare: Conservative model-based reward learning for offline inverse reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023.
Jiliang Zhang and Chen Li. Adversarial examples: Opportunities and challenges. IEEE transactions on neural networks and learning systems, 31(7):2578–2593, 2019.
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534, 2023.
Songyuan Zhang, Zhangjie Cao, Dorsa Sadigh, and Yanan Sui. Confidence-aware imitation learning from demonstrations with varying optimality. Advances in Neural Information Processing Systems, 34:12340–12350, 2021.
Zhibing Zhao, Peter Piech, and Lirong Xia. Learning mixtures of plackett-luce models. In International Conference on Machine Learning, pages 2906–2914. PMLR, 2016.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023.
Li Zhou and Kevin Small. Inverse reinforcement learning with natural language goals. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11116–11124, 2021.
Banghua Zhu, Jiantao Jiao, and Michael I Jordan. Principled reinforcement learning with human feedback from pairwise or k-wise comparisons. arXiv preprint arXiv:2301.11270, 2023.
Simon Zhuang and Dylan Hadfield-Menell. Consequences of misaligned ai. Advances in Neural Information Processing Systems, 33:15763–15773, 2020.
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In Aaai, volume 8, pages 1433–1438. Chicago, IL, USA, 2008.
Daniel Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Benjamin Weinstein-Raun, Daniel de Haas, et al. Adversarial training for high-stakes reliability. Advances in Neural Information Processing Systems, 35:9274–9286, 2022.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
# A An Improved Model of the Human Feedback Process
As illustrated in Equation (1), the feedback process in RLHF is typically modeled with a single human H with internal reward function r_H; examples sampled from the base model: x_i ∼ π_θ; and feedback as a function of the human, example, and noise: y_i = f(h, x_i, ϵ_i). However, as discussed in Section 3, this is a misspecified model of the process: there is not a single human, humans' values are not representable with a reward function, human actions are dependent on context, and the sampling process can involve a human. Thus we propose an alternative formulation.
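Before presenting the alternative, the standard formulation above can be made concrete with a minimal sketch (illustrative only, with naming of our own choosing; it assumes scalar rating feedback, which is just one possible instantiation of f):

```python
import random


def standard_feedback(policy_sample, reward_h, noise_scale=0.1):
    """Equation (1)-style model: one human with a fixed internal reward r_H
    rates an example x_i ~ pi_theta, up to additive noise eps_i."""
    x_i = policy_sample()                   # x_i ~ pi_theta
    eps_i = random.gauss(0.0, noise_scale)  # feedback noise
    y_i = reward_h(x_i) + eps_i             # one instance of y_i = f(h, x_i, eps_i)
    return x_i, y_i
```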
Let ∆_H refer to a joint distribution of humans (or groups thereof if feedback is provided collaboratively) used for obtaining samples and feedback, denoted as H_j^sample and H_j^feedback. A dataset of examples is sampled from π_θ (or some other source) where each example x_i is defined to be a batch of one or more generations from the base model. Importantly, x_i may not contain all information about the world state (e.g., if x_i is a 2D rendering of a 3D environment), and the human may be able to observe more than just the model's output (e.g., if interpretability tools are used to aid in evaluation). So let v be a rendering function that maps π_θ and x_i to what a human sees. The behavior of humans varies over time and in different contexts, so let c_i^sample and c_i^feedback represent particular contexts for sampling and feedback collection. Denote the sampling process as s, which maps the base model π_θ, a human H_j^sample, and context c_i^sample to some example x_i. Notably, s could ignore the base model and generate offline samples from some other source. Finally, let f map a human H_j^feedback, rendered example v(π_θ, x_i), and context c_i^feedback to feedback y_i. The data collection process can thus be more completely modeled as:
H_j^sample, H_j^feedback ∼ ∆_H,    x_i ∼ s(π_θ, H_j^sample, c_i^sample),    y_i = f(v(π_θ, x_i), H_j^feedback, c_i^feedback)    (4)
which highlights a need for future work to better account for the aspects of this process that are commonly not accounted for when training systems with RLHF.
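The richer process in Equation (4) can likewise be sketched in code. The following is a minimal illustration under naming assumptions of our own (it is not an implementation from the paper): humans are drawn jointly for sampling and feedback, examples may come from the policy or another source, a rendering function controls what the evaluator sees, and both steps are conditioned on contexts.

```python
import random
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Human:
    """Stand-in for a human (or group) drawn from the joint distribution Delta_H."""
    name: str
    rate: Callable[[str, str], float]  # (rendered example, feedback context) -> feedback y_i


def draw_humans(delta_h: List[Tuple[Human, Human]]) -> Tuple[Human, Human]:
    """H_j^sample, H_j^feedback ~ Delta_H: the two humans are drawn jointly, not independently."""
    return random.choice(delta_h)


def sample_example(policy: Callable[[str], str], h_sample: Human, c_sample: str) -> str:
    """x_i ~ s(pi_theta, H_j^sample, c_i^sample); the sampler may query the policy,
    but it could also ignore it and return offline data from some other source."""
    return policy(f"{c_sample}/{h_sample.name}")


def render(policy: Callable[[str], str], x_i: str) -> str:
    """v(pi_theta, x_i): what the evaluator actually sees, which may omit parts of
    the world state or add information (e.g., interpretability tool outputs);
    here it is just the identity."""
    return x_i


def collect_feedback(policy: Callable[[str], str],
                     delta_h: List[Tuple[Human, Human]],
                     c_sample: str,
                     c_feedback: str) -> Tuple[str, float]:
    """One pass through the data collection process of Equation (4)."""
    h_sample, h_feedback = draw_humans(delta_h)
    x_i = sample_example(policy, h_sample, c_sample)
    y_i = h_feedback.rate(render(policy, x_i), c_feedback)
    return x_i, y_i
```

Compared with the single-human model, the key differences this sketch surfaces are that the sampling and feedback humans are drawn jointly from ∆_H, that the evaluator only sees v(π_θ, x_i) rather than the full state, and that both sampling and feedback depend on context.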
# B Rationale for Why Challenges Were Categorized as Tractable or Fundamental
In Section 3, we categorize problems as tractable or fundamental. The key distinction between the two is that fundamental challenges are substantial enough that overcoming them would require a method that is no longer a form of RLHF. Although many of the fundamental problems we identify can be alleviated by improving how RLHF is approached, they cannot be fully addressed with RLHF. As a result, they should be either avoided by not using RLHF or compensated for by other safety measures. This distinction is soft, and some categories of challenges are marginal. Here, we briefly explain each categorization.
# B.1 Problems from Section 3.1:
Tractable: Selecting representative humans and getting them to provide quality feedback is difficult: This can be addressed by studying and improving the selection and training of evaluators.
Tractable: Some evaluators have harmful biases and opinions: This can be addressed by studying and improving the selection and training of evaluators.
Tractable: Individual human evaluators can poison data: This can be addressed with improved evaluator selection and quality assurance measures.
Tractable: Humans make simple mistakes due to limited time, attention, or care: This is marginal because human mistakes can never fully be overcome. However, they can be addressed with improved working conditions and quality assurance procedures.
Tractable: Partial observability limits human evaluators: Human evaluators can be provided with all information available in the policyâs observations (although representing this in an easily-comprehensible way may be challenging).
Fundamental: Humans cannot evaluate performance on difficult tasks well: Human intelligence and cognitive capacity are limited. Humans cannot be expected to properly evaluate the performance of superhuman models on complex tasks. Thus, solving this problem would require no longer using human feedback in the way that RLHF does.
Fundamental: Humans can be misled, so their evaluations can be gamed: Human fallibility cannot fully be overcome, especially against optimization pressure from the learned policy.
Tractable: Data collection can introduce harmful biases: This can be addressed with improved data curation.
Fundamental: There is an inherent cost/quality tradeoff when collecting human feedback: This tradeoff is unavoidable in practice – obtaining diverse and high-quality examples (e.g., from long chatbot conversations) requires more effort.
Fundamental: RLHF suffers from a tradeoff between the richness and efficiency of feedback types: This tradeoff is unavoidable for data collection in practice – richer annotations require more effort.
# B.2 Problems from Section 3.2:
Fundamental: An individual human's values are difficult to represent with a reward function: This problem is marginal. It can be mitigated in practice by improved modeling, but RLHF-based solutions will be limited by the intractability of perfectly modeling context and troubles with the reward hypothesis (Skalse and Abate, 2022b; Bowling et al., 2023).
Fundamental: A single reward function cannot represent a diverse society of humans: Trivial. Instead of being a fundamental limitation with RLHF, this is a broader limitation of AI alignment itself.
Fundamental: Reward models can misgeneralize to be poor reward proxies, even from correctly-labeled training data: This problem is marginal because it can and should be addressed by improved sampling in practice. However, it is impossible to perfectly represent a distribution with infinite support from a finite sample. Additionally, the deployment distribution will always differ from the training and evaluation distributions in real-world settings (Christiano, 2019).
Fundamental: Optimizing for an imperfect reward proxy leads to reward hacking: If a reward model is imperfect, reward hacking will always be a possibility from RL (a toy illustration appears at the end of this subsection).
Tractable: Evaluating reward models is difficult and expensive: This can be addressed by perform- ing thorough and expensive evaluations.
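The force of the reward hacking point can be seen even in a toy example. The following sketch (our own construction, purely illustrative) shows a proxy reward that matches the true reward well for moderate behavior but, when optimized hard, selects a policy that the true reward scores poorly:

```python
# Toy Goodhart-style illustration: the proxy agrees with the true reward for
# moderate behavior but diverges exactly where an optimizer pushes hardest.

def true_reward(verbosity: float) -> float:
    """True preference: some detail is good, rambling is not."""
    return verbosity - 0.5 * verbosity ** 2

def proxy_reward(verbosity: float) -> float:
    """Imperfect learned proxy: fit to modest verbosity, monotone beyond it."""
    return 0.9 * verbosity

candidates = [x / 10 for x in range(0, 51)]     # policies indexed by verbosity 0.0..5.0
best_for_proxy = max(candidates, key=proxy_reward)
best_for_truth = max(candidates, key=true_reward)

print(best_for_proxy, true_reward(best_for_proxy))  # 5.0 -> true reward -7.5
print(best_for_truth, true_reward(best_for_truth))  # 1.0 -> true reward 0.5
```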
# B.3 Problems from Section 3.3:
Tractable: It is (still) challenging to optimize policies effectively: This can be addressed with advancements in RL methodology.
Tractable: Policies tend to be adversarially exploitable: This problem is marginal because achieving certified adversarial robustness against practical threat models has empirically been intractable. Nonetheless, this can be addressed with robust optimization techniques.
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
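The reward-hacking entries above (an imperfect reward proxy being over-optimized) can be made concrete with a toy numerical illustration: hill-climb a proxy reward that only approximates the true objective, and the true objective eventually degrades. Everything below, including the two reward functions and the search loop, is invented for illustration and is not from the paper.

```python
# Toy illustration of reward hacking: optimizing an imperfect proxy reward
# eventually decreases the true reward, even though proxy and true reward
# agree under moderate optimization pressure. All functions are invented.
import numpy as np

rng = np.random.default_rng(0)

def true_reward(x: float) -> float:
    # Hypothetical "intended" objective: peaks at x = 1.0, then degrades.
    return x - 0.5 * x ** 2

def proxy_reward(x: float) -> float:
    # Hypothetical learned proxy: close to the true reward for small x,
    # but systematically overestimates reward for larger x.
    return x - 0.2 * x ** 2

# Hill-climb the proxy, then check what happened to the true reward.
x = 0.0
for _ in range(2000):
    candidate = x + rng.normal(scale=0.05)
    if proxy_reward(candidate) > proxy_reward(x):
        x = candidate

print(f"proxy-optimal x = {x:.2f}")
print(f"proxy reward    = {proxy_reward(x):.2f}")
print(f"true reward     = {true_reward(x):.2f}  (true optimum is at x = 1.0)")
```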
2307.15217 | 147 | Fundamental: Policies can perform poorly in deployment even if rewards seen during training were perfectly correct: This problem is marginal because it can and should be addressed by improved sampling in practice. However, it is impossible to perfectly represent a distribution with infinite support from a finite sample. Additionally, the deployment distribution will always differ from the training and evaluation distributions in real-world settings (Christiano, 2019).
Fundamental: Optimal RL agents tend to seek power: Power is instrumentally useful for agents.
Tractable: The pretrained model introduces biases into policy optimization: This can be addressed with improved base models.
Tractable: RL contributes to mode collapse: This can be addressed with forms of RL that optimize for distribution-matching in desired instances.
# B.4 Problems from Section 3.4:
Tractable: Joint training induces distribution shifts: This can be mitigated with synchronous learning or other strategies.
Tractable: It is difficult to balance efficiency and avoiding overfitting by the policy: This can be addressed with improved training methodology.
| 2307.15217#147 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
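One common concrete instance of the "distribution-matching" mitigation mentioned for mode collapse is penalizing divergence from a reference policy during RL fine-tuning. The sketch below shows that standard KL-penalized objective purely as an illustration; the paper does not prescribe this specific construction, and all numbers are made up.

```python
# Minimal sketch of a KL-regularized RL fine-tuning objective: task reward
# minus a penalty on divergence from a reference policy. Illustration only.
import numpy as np

def kl_regularized_objective(reward: float,
                             logprobs_policy: np.ndarray,
                             logprobs_reference: np.ndarray,
                             beta: float = 0.1) -> float:
    """Sequence-level objective: reward - beta * KL(policy || reference),
    estimated from per-token log-probabilities of the sampled sequence."""
    kl_estimate = float(np.sum(logprobs_policy - logprobs_reference))
    return reward - beta * kl_estimate

# A sample the reward model loves but that drifts far from the reference
# model is down-weighted relative to a more "in-distribution" sample.
drifted = kl_regularized_objective(reward=2.0,
                                   logprobs_policy=np.array([-0.1, -0.2, -0.1]),
                                   logprobs_reference=np.array([-2.0, -2.5, -2.2]))
faithful = kl_regularized_objective(reward=1.5,
                                    logprobs_policy=np.array([-1.0, -1.1, -0.9]),
                                    logprobs_reference=np.array([-1.1, -1.0, -1.0]))
print(drifted, faithful)  # the faithful sample ends up with the higher objective
```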
2307.14430 | 0 | arXiv:2307.14430v1 [cs.CL] 26 Jul 2023
# Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
# Mayee F. Chen*1 Nicholas Roberts2 Kush Bhatia1 Jue Wang3 Ce Zhang3, 4 Frederic Sala2 Christopher Ré1
1Department of Computer Science, Stanford University 2Department of Computer Sciences, University of Wisconsin-Madison 3Together AI 4Department of Computer Science, University of Chicago
# July 28, 2023
# Abstract | 2307.14430#0 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 1 | Traditional recommender systems leverage usersâ item preference history to recommend novel content that users may like. However, modern dialog interfaces that allow users to express language-based preferences oï¬er a fundamentally diï¬erent modality for prefer- ence input. Inspired by recent successes of prompting paradigms for large language models (LLMs), we study their use for making recommendations from both item-based and language-based preferences in comparison to state-of-the-art item-based collaborative ï¬ltering (CF) methods. To support this investigation, we collect a new dataset consisting of both item-based and language-based preferences elicited from users along with their ratings on a variety of (biased) recommended items and (unbiased) random items. Among numerous experimental results, we ï¬nd that LLMs provide competitive recommendation performance for pure language-based preferences (no item preferences) in the near cold-start case in comparison to item-based CF methods, despite having no supervised training for this speciï¬c task (zero-shot) or only a few labels (few-shot). This is particularly promising as language-based preference representations are more explainable and scrutable than item-based or vector-based representations.
# CCS Concepts: • Information systems → Recommender systems.
Additional Key Words and Phrases: recommendation; transparency; scrutability; natural language
# ACM Reference Format: | 2307.14225#1 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 1 | The quality of training data impacts the performance of pre-trained large language models (LMs). Given a ï¬xed budget of tokens, we study how to best select data that leads to good downstream model performance across tasks. We develop a new framework based on a simple hypothesis: just as humans acquire interdependent skills in a deliberate order, language models also follow a natural order when learning a set of skills from their training data. If such an order exists, it can be utilized for improved understanding of LMs and for data-efï¬cient training. Using this intuition, our framework formalizes the notion of a skill and of an ordered set of skills in terms of the associated data. First, using both synthetic and real data, we demonstrate that these ordered skill sets exist, and that their existence enables more advanced skills to be learned with less data when we train on their prerequisite skills. Second, using our proposed framework, we introduce an online data sampling algorithm, SKILL-IT, over mixtures of skills for both continual pre-training and ï¬ne-tuning regimes, where the objective is to efï¬ciently learn multiple skills in the former and an individual skill in the latter. On the LEGO synthetic in the | 2307.14430#1 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 2 | # CCS Concepts: • Information systems → Recommender systems.
Additional Key Words and Phrases: recommendation; transparency; scrutability; natural language
# ACM Reference Format:
Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, and Lucas Dixon. 2023. Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences. In Seventeenth ACM Conference on Recommender Systems (RecSys '23), September 18-22, 2023, Singapore, Singapore. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3604915.3608845
# 1 INTRODUCTION | 2307.14225#2 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 2 | regimes, where the objective is to efï¬ciently learn multiple skills in the former and an individual skill in the latter. On the LEGO synthetic in the continual pre-training setting, SKILL-IT obtains 36.5 points higher accuracy than random sampling. On the Natural Instructions dataset in the ï¬ne-tuning setting, SKILL-IT reduces the validation loss on the target skill by 13.6% versus training on data associated with the target skill itself. We apply our skills framework on the recent RedPajama dataset to continually pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation Harness with 1B tokens than the baseline approach of sampling uniformly over data sources with 3B tokens. | 2307.14430#2 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 3 | # 1 INTRODUCTION
The use of language in recommendation scenarios is not a novel concept. Content-based recommenders have been utilizing text associated with items, such as item descriptions and reviews, for about three decades [29]. However, recent advances in conversational recommender systems have placed language at the forefront, as a natural and intuitive means for users to express their preferences and provide feedback on the recommendations they receive [15, 24]. Most recently, the concept of natural language (NL) user profiles, where users express their preferences as NL statements has been proposed [37]. The idea of using text-based user representations is appealing for several reasons: it provides full transparency and allows users to control the system's personalization. Further, in a (near) cold-start setting, where little to no usage data is available, providing a NL summary of preferences may enable a personalized and satisfying *Work done while on sabbatical at Google. | 2307.14225#3 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 3 | # Introduction
Large language models (LMs) exhibit remarkable capabilities, including producing creative content [55], writing source code [8], or chatting with users [7]. A key ingredient in enabling models to perform such tasks is the data on which the models are trained [17, 19, 59]. A natural way to unlock particular capabilities is to improve this training data. However, it is unclear how to select data from a large corpus for these capabilities given a fixed budget of training tokens, as data selection methods for current state-of-the-art LMs mostly rely on heuristics for filtering and mixing together different datasets [32, 59]. We lack a formal framework for capturing how data influences the model's capabilities and how to utilize this data effectively for improving LM performance. | 2307.14430#3 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 4 | Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2023 Copyright held by the owner/author(s). Manuscript submitted to ACM
experience for users. Yet, controlled quantitative comparisons of such NL preference descriptions against traditional item-based approaches are very limited. Thus, the main research question driving this study is the following: How effective are prompting strategies with large language models (LLMs) for recommendation from natural language-based preference descriptions in comparison to collaborative filtering methods based solely on item ratings? | 2307.14225#4 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 4 | To develop such a framework, we take inspiration from how humans acquire knowledge. A classic idea in education literature is the concept of skills that form a learning hierarchy [65]. For example, one study found that students learned mathematical and scientific skills most quickly when these skills were presented in a particular order [11]. We seek to understand the extent that similar skill-based orderings characterize LM training. Such orderings, if they exist, may provide a better understanding of LMs as well as a mechanism for data-efficient training. For instance, to train an LM for Spanish question generation, we wish to know if training first on related but simpler tasks, such as Spanish grammar and English question generation, helps.
We study if the idea of skill orderings can help us build a framework that relates data to LM training and behavior. This requires addressing two challenges revolving around the connection between skills and data. First, in order to show that there exist sets of skills that the LM learns most efficiently in some particular order, an operational definition of LM skill and skill ordering must be developed and validated on data. In initial experiments, we investigated if semantic groupings of data, such as metadata attributes or embedding clusters, were sufficient to represent a skill and characterize how models learn. For | 2307.14430#4 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
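The "semantic groupings of data, such as metadata attributes or embedding clusters" that the authors describe trying as candidate skills can be sketched as follows. The embedding model, libraries, and cluster count are assumptions made for illustration; they are not the paper's setup.

```python
# A minimal sketch of forming candidate "skill" slices by clustering text
# embeddings. Model name and library choices are illustrative assumptions.
from sentence_transformers import SentenceTransformer  # assumed available
from sklearn.cluster import KMeans

def candidate_skill_slices(texts, n_skills=8, seed=0):
    """Partition a corpus into candidate 'skill' slices via embedding clusters."""
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = embedder.encode(texts, normalize_embeddings=True)
    labels = KMeans(n_clusters=n_skills, random_state=seed, n_init=10).fit_predict(embeddings)
    slices = {k: [] for k in range(n_skills)}
    for text, label in zip(texts, labels):
        slices[label].append(text)
    return slices

# The text reports that groupings of this kind (or metadata such as instruction
# type) did not by themselves change training outcomes versus random sampling.
```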
2307.14225 | 5 | We address the task of language-based item recommendation by building on recent advances in LLMs and prompting-based paradigms that have led to state-of-the-art results in a variety of natural language tasks, and which permit us to exploit rich positive and negative descriptive content and item preferences in a unified framework. We contrast these novel techniques with traditional language-based approaches using information retrieval techniques [3] as well as collaborative filtering-based approaches [14, 42]. Being a novel task, there is no dataset for language-based item recommendation. As one of our main contributions, we present a data collection protocol and build a test collection that comprises natural language descriptions of preferences as well as item ratings. In doing so, we seek to answer the following research questions:
• RQ1: Are preferences expressed in natural language sufficient as a replacement for items for (especially) near cold-start recommendation, and how much does performance improve when language is combined with items?
• RQ2: How do LLM-based recommendation methods compare with item-based collaborative filtering methods?
• RQ3: Which LLM prompting style, be it completion, instructions, or few-shot prompts, performs best?
• RQ4: Does the inclusion of natural language dispreferences improve language-based recommendation? | 2307.14225#5 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
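RQ3 above distinguishes completion, instruction, and few-shot prompting styles. The templates below are schematic illustrations of those three styles applied to language-based preferences; they are not the exact prompts used in the paper, and the example preference strings are invented.

```python
# Schematic prompt templates for the three prompting styles named in RQ3.
# The preference text and the example few-shot "shot" are invented.
LIKES = "character-driven sci-fi with morally gray protagonists"
DISLIKES = "slow-paced period dramas"

completion_prompt = (
    f"I like {LIKES}. I dislike {DISLIKES}. "
    "Five movies I would probably enjoy are: 1."
)

instruction_prompt = (
    "You are a movie recommender.\n"
    f"User preferences: {LIKES}.\n"
    f"User dispreferences: {DISLIKES}.\n"
    "Recommend five movies this user is likely to enjoy, one per line."
)

few_shot_prompt = (
    "User: I like big-budget superhero ensembles.\n"
    "Recommendations: The Avengers; Guardians of the Galaxy\n\n"  # one example shot
    f"User: I like {LIKES} and dislike {DISLIKES}.\n"
    "Recommendations:"
)

print(completion_prompt)
```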
2307.14430 | 5 | *Corresponding author: [email protected].
[Figure 1 graphic: skills graph over English QA, Spanish QA, English QG, and Spanish QG, with panels labeled "Data" and "Ordered skill set"]
Figure 1: Inspired by how humans acquire knowledge, we hypothesize that LMs best learn skills in a particular order and that this can help improve our understanding and training of LMs. We show that these ordered skill sets exist in real data, which enables skills to be learned with less data given that we train on their prerequisite skills. We then propose SKILL-IT, an online data selection algorithm that learns skills quickly by exploiting their ordering.
instance, we partitioned the Alpaca dataset [56] by instruction type, a technique used to capture dataset diversity [62], but we found that sampling based on instruction types and random sampling resulted in similar model performance, suggesting that not just any existing notion of data groups can characterize skills. | 2307.14430#5 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 6 | Our main contributions are (1) We devise an experimental design that allows language-based item recommendation to be directly compared with state-of-the-art item-based recommendation approaches, and present a novel data collection protocol (Section 3); (2) We propose various prompting methods for LLMs for the task of language-based item recommendation (Section 4); (3) We experimentally compare the proposed prompt-based methods against a set of strong baselines, including both text-based and item-based approaches (Section 5). Ultimately, we observe that LLM-based recommendation from pure language-based preference descriptions provides a competitive near cold-start recommender system that is based on an explainable and scrutable language-based preference representation.
# 2 RELATED WORK
Item-Based Recommendation. Traditional recommender systems rely on item ratings. For a new user, these can be provided over time as the user interacts with the recommender, although this means initial performance is poor. Thus, preferences are often solicited with a questionnaire for new users [22, 39, 41]. There has also been work looking at other forms of item-based preferences such as relative preferences between items [10, 39], although approaches that rely on individual item ratings dominate the literature. | 2307.14225#6 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 6 | Second, these definitions of skills must be used to construct sampling distributions to actually improve model training. To develop criteria for a data selection algorithm that learns skills efficiently, we identify challenges that naive selection approaches face. The standard approach of random uniform sampling over data fails to learn skills optimally due to not accounting for skill imbalance and ordering. Skills can be distributed unevenly in the data, with more complex skills being rare; for instance, Spanish and question generation (QG) are 5% and 4% of the Natural Instructions dataset [63], respectively, but Spanish QG is only 0.2%. Random sampling also provides no mechanism for taking into account a particular training order and dependency structure on skills. More sophisticated techniques like curriculum learning account for sample-level ordering, but not skills or their dependencies. Our goal framework must account for these issues of imbalance and ordering. | 2307.14430#6 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
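The imbalance problem described above can be seen with a small calculation using the fractions quoted in the text (Spanish 5%, QG 4%, Spanish QG 0.2%): sampling uniformly over examples starves the rare skill, while sampling uniformly over skills does not. The budget and code structure below are illustrative, not the paper's experimental setup.

```python
# Sampling uniformly over examples reproduces the corpus proportions, so a
# rare skill such as Spanish QG (~0.2% of the data, per the text) is almost
# never seen; sampling uniformly over *skills* gives every skill equal weight.
skill_fractions = {"spanish": 0.05, "question_generation": 0.04,
                   "spanish_qg": 0.002, "other": 0.908}
budget = 10_000  # training examples we can afford (illustrative)

uniform_over_examples = {s: int(round(f * budget)) for s, f in skill_fractions.items()}
uniform_over_skills = {s: budget // len(skill_fractions) for s in skill_fractions}

print(uniform_over_examples)  # spanish_qg gets about 20 examples
print(uniform_over_skills)    # spanish_qg gets 2500 examples
```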
2307.14225 | 7 | Given a corpus of user-item ratings, very many recommendation algorithms exist. These range from methods such as item-based k-Nearest Neighbors [40], where simple similarity to existing users is exploited, to matrix factorization approaches that learn a vector representation for the user [23, 34], through to deep learning and autoencoder approaches that jointly learn user and item vector embeddings [8, 19, 28]. Interestingly, the EASE algorithm [42] is an autoencoder approach that has been found to perform on par with much more complex state-of-the-art approaches.
Natural Language in Recommendation. Following the proposals in [2, 37] to model preferences solely in scrutable natural language, recent work has explored the use of tags as surrogates for NL descriptions with promising results [31]. This contrasts with, for instance Hou et al. [20], who input a (sequence) of natural language item descriptions into an LLM to produce an (inscrutable) user representation for recommendation. Other recent work has sought to use rich,
| 2307.14225#7 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
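For readers unfamiliar with the EASE baseline cited above [42], a compact sketch of its closed-form item-item model (as commonly presented, following Steck, 2019) is given below. The regularization value and the toy interaction matrix are illustrative, not the paper's configuration.

```python
# Compact sketch of the EASE item-based recommender: a closed-form linear
# autoencoder over the user-item interaction matrix. Illustrative values only.
import numpy as np

def ease_item_weights(X: np.ndarray, lam: float = 100.0) -> np.ndarray:
    """X: (num_users, num_items) binary interaction matrix. Returns the
    item-item weight matrix B with zero diagonal; scores are X @ B."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    P = np.linalg.inv(G)
    B = np.eye(X.shape[1]) - P / np.diag(P)  # B[i, j] = -P[i, j] / P[j, j]; diagonal is 0
    return B

# Toy usage: rank unseen items for user 0.
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
scores = X @ ease_item_weights(X, lam=1.0)
scores[X > 0] = -np.inf                      # mask items the user already interacted with
print(np.argsort(-scores[0]))                # unseen items ranked first for user 0
```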
2307.14430 | 7 | Skill-based framework. We define a skill as a unit of behavior that a model can learn using an associated slice of data (Definition 2.1). An ordered skill set is a collection of skills with a directed skills graph that is neither complete nor empty, where an edge from a prerequisite skill to a skill exists if the amount of training it takes to learn the skill can be reduced if the prerequisite skill is also learned (Definition 2.2, Figure 1 left, center). We show that ordered skill sets exist in synthetic and real datasets using this operational definition. Interestingly, the existence of these ordered skill sets unveils that one can learn a skill quickly not by training solely on that skill, but on a mixture of that skill and prerequisite skills. For instance, in Figure 3 we observe that Spanish QG can be learned more efficiently when the model also learns English QG and Spanish; we can achieve 4% lower validation loss than training on only Spanish QG over a fixed budget of overall training steps. | 2307.14430#7 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
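The ordered-skill-set definition above (an edge from a prerequisite skill to a skill when mixing in the prerequisite reduces the loss reachable on that skill within a fixed budget) can be represented as a small directed graph. The loss numbers below are invented for illustration; in practice they would come from training runs.

```python
# Minimal sketch of building a directed skills graph from (hypothetical)
# measurements: add an edge prereq -> target if also training on the
# prerequisite lowers the loss reached on the target within a fixed budget.
from itertools import permutations

skills = ["english_qg", "spanish", "spanish_qg"]

# loss_on[target][mix]: validation loss on `target` after training on `mix`
# (made-up numbers standing in for real training runs).
loss_on = {
    "spanish_qg": {("spanish_qg",): 2.10,
                   ("spanish_qg", "english_qg"): 1.95,
                   ("spanish_qg", "spanish"): 1.98},
}

def has_edge(prereq: str, target: str, tol: float = 0.01) -> bool:
    alone = loss_on[target][(target,)]
    mixed = loss_on[target].get((target, prereq))
    return mixed is not None and mixed < alone - tol

edges = [(i, j) for i, j in permutations(skills, 2)
         if j in loss_on and has_edge(i, j)]
print(edges)  # [('english_qg', 'spanish_qg'), ('spanish', 'spanish_qg')]
```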
2307.14225 | 8 |
descriptive natural language as the basis for recommendations. At one extreme, we have narrative-driven recommendations [4] that assume very verbose descriptions of specific contextual needs. In a similar vein, user-studies of NL use in recommendation [26] identify a rich taxonomy of recommendation intents and also note that speech-based elicitation is generally more verbose and descriptive than text-based elicitation. In this work, however, we return to the proposal in [37] and assume the user provides a more general-purpose language-based description of their preferences and dispreferences for the purpose of recommendation. | 2307.14225#8 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 8 | Next, given an ordered skill set to train on, we use our framework to propose methods for how to select data so that the LM learns skills faster: skill-stratified sampling and an online generalization, SKILL-IT. We address the issue of unevenly distributed skills in datasets by proposing skill-stratified sampling, a simple approach that allows us to explicitly optimize for learning skills by uniformly sampling relevant skills (such as a target skill and its prerequisite skills in fine-tuning). Skill-stratified sampling uses the construction of the ordered skill set but is static, which does not incorporate the ordering as training proceeds and results in oversampling skills that may be already learned early on in training. We address this issue by proposing an online data selection algorithm, SKILL-IT, for selecting mixtures of training skills that allocates more weight towards learning skills that are not yet learned or towards prerequisite influential skills (Figure 1 right). SKILL-IT is derived from an online optimization problem over the training skills for minimizing loss on a set of evaluation skills given a fixed budget of data and the skills graph. SKILL-IT is inspired by online mirror descent and can be adapted for continual pre-training, fine-tuning, or out-of-domain evaluation depending on the relationship between the evaluation skill set and the training skill set. | 2307.14430#8 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
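
To make the online skill-mixture idea in the Skill-It chunk above concrete, here is a simplified multiplicative-weights sketch in Python. It is not the paper's exact update rule: the learning rate, the way the skills graph and per-skill validation losses enter the update, and all names (`skill_mixture_update`, `sample_batch`, the toy graph `A`) are illustrative assumptions. The point it shows is that skills whose graph-weighted downstream losses remain high receive more sampling weight as training proceeds.

```python
import numpy as np

def skill_mixture_update(weights, adjacency, val_losses, lr=0.2):
    """One multiplicative-weights step over training-skill proportions.

    weights:    current mixture over k training skills (sums to 1)
    adjacency:  k x k skills-graph matrix; adjacency[i, j] > 0 means
                training on skill i helps evaluation skill j
    val_losses: current validation loss per evaluation skill (length k)
    """
    # Skills whose (graph-weighted) downstream losses are still high get upweighted.
    signal = adjacency @ val_losses
    new_weights = weights * np.exp(lr * signal)
    return new_weights / new_weights.sum()

def sample_batch(rng, datasets, weights, batch_size):
    """Draw a batch whose per-skill counts follow the current mixture."""
    counts = rng.multinomial(batch_size, weights)
    batch = []
    for skill_idx, n in enumerate(counts):
        idx = rng.choice(len(datasets[skill_idx]), size=n, replace=True)
        batch.extend(datasets[skill_idx][i] for i in idx)
    return batch

# Toy usage with 3 skills and made-up losses/graph.
rng = np.random.default_rng(0)
datasets = [["s0 example"] * 100, ["s1 example"] * 100, ["s2 example"] * 100]
w = np.full(3, 1 / 3)
A = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 0.7], [0.0, 0.0, 1.0]])
for step in range(3):
    losses = np.array([0.2, 0.9, 1.5]) / (step + 1)   # stand-in for measured val losses
    w = skill_mixture_update(w, A, losses)
    batch = sample_batch(rng, datasets, w, batch_size=8)
print("final mixture:", np.round(w, 3))
```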
2307.14225 | 9 | Recently, researchers have begun exploring the use of language models (LMs) for recommendation tasks [13]. Radlinski et al. [37] present a theoretical motivation for why LLMs may be useful for recommendations and provide an example prompt, but do not conduct any quantitative evaluation. Mysore et al. [32] generate preference narratives from ratings and reviews, using the narratives to recommend from held-out items. Penha and Hauff [36] show that off-the-shelf pretrained BERT [12] contains both collaborative- and content-based knowledge about items to recommend. They also demonstrate that BERT outperforms information retrieval (IR) baselines for recommendation from language-based descriptions. However, they do not assess the relative performance of language- vs. item-based recommendation from LMs (for which we curate a dataset specifically for this purpose), nor does BERT's encoder-only LM easily permit doing this in a unified prompting framework that we explore here. RecoBERT [30] leverages a custom-trained LM for deriving the similarity between text-based item and description pairs, | 2307.14225#9 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 9 | We evaluate SKILL-IT on synthetic and real datasets at two model scales, 125M and 1.3B parameters. For the continual pre-training setting, we show on the LEGO synthetic [72] that we obtain a 35.8 point improvement in accuracy over randomly selecting training data and curriculum learning [3]. For the fine-tuning setting, we show that on the widely-used Natural Instructions dataset [40, 64], our algorithm over a mixture of skills is able to achieve up to 13.6% lower loss on that skill than solely training on that skill, given the same overall training budget. For the out-of-domain setting when our
[Figure 2 appears here: heatmaps of skills-graph adjacency matrices for Alpaca, Pile of Law, and Natural Instructions.] | 2307.14430#9 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 10 | prompting framework that we explore here. RecoBERT [30] leverages a custom-trained LM for deriving the similarity between text-based item and description pairs, with the authors finding that this outperforms traditional IR methods. Hou et al. [21] focus on item-based recommendation, with an in-context learning (ICL) approach similar in spirit to our item-only few-shot approach. Similarly, Kang et al. [27] use an LLM to predict ratings of target items. Finally, ReXPlug [17] exploits pretrained LMs to produce explainable recommendations by generating synthetic reviews on behalf of the user. None of these works, however, explore prompting strategies in large LMs to translate actual natural language preferences into new recommendations compared directly to item-based approaches. | 2307.14225#10 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 10 | Figure 2: Heatmaps of adjacency matrices we compute for skill graphs for Alpaca, Pile of Law, and Natural Instructions. Negative elements and diagonals are thresholded to 0 for clarity. See Appendix C.2 for descriptions of how they were constructed and larger versions.
training skills do not align perfectly with evaluation skills, our algorithm is able to achieve the lowest loss on 11 out of 12 evaluation skills corresponding to task categories in the Natural Instructions test tasks dataset over random and skill-stratified sampling on the training data. We finally apply our framework to a case study on the recent RedPajama 1.2 trillion token dataset [57]. We use the data mixture produced by SKILL-IT to continually pre-train a 3B parameter model. We find that SKILL-IT achieves higher accuracy with 1B tokens than uniform sampling over data sources with 3B tokens. | 2307.14430#10 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
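
The density figures quoted for Figure 2 above (97.4%, 3.9%, and 42.7% dense) are fractions of possible directed edges that remain after thresholding the estimated adjacency matrix. A minimal sketch of that computation, assuming density is counted over off-diagonal entries only; the matrix values and threshold here are made up.

```python
import numpy as np

def graph_density(adjacency, threshold=0.0):
    """Fraction of possible off-diagonal directed edges with weight > threshold."""
    A = adjacency.copy()
    np.fill_diagonal(A, 0.0)              # self-edges are not counted
    k = A.shape[0]
    num_edges = np.count_nonzero(A > threshold)
    return num_edges / (k * (k - 1))

# Toy 4-skill example with made-up edge weights.
A = np.array([
    [0.0, 0.8, 0.0, 0.1],
    [0.0, 0.0, 0.5, 0.0],
    [0.2, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
print(f"density: {graph_density(A):.1%}")   # 4 of 12 possible edges -> 33.3%
```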
2307.14225 | 11 | Further, we are unaware of any datasets that capture a user's detailed preferences in natural language, and attempt to rate recommendations on unseen items. Existing datasets such as [2, 7] tend to rely on much simpler characterizations.
Prompting in Large Language Models. Large language models (LLMs) are an expanding area of research with numerous exciting applications. Beyond traditional natural language understanding tasks like summarization, relation mapping, or question answering, LLMs have also proved adept at many tasks such as generating code, generating synthetic data, and multi-lingual tasks [1, 5, 9]. How to prompt these models to generate the best results is a continuing topic of research. Early prompting approaches relied on few-shot prompting, where a small set of training input-output pairs is prepended to the actual input [6]. Through additional tuning of pre-trained models on tasks described via instructions, LLMs also achieve impressive performance in the zero-shot setting (i.e., models are given a task and inputs, without any previous training examples) [44]. Geng et al. [16] test a variety of prompting techniques with a relatively small (less than one billion parameter) LLM trained on a collection of recommendation tasks, finding promising results across multiple tasks and domains, primarily by using item ratings as input.
# 3 EXPERIMENTAL SETUP | 2307.14225#11 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
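
The related-work chunk above distinguishes zero-shot prompting (task description only) from few-shot prompting (examples prepended to the input). A minimal sketch of how the two prompt styles might be assembled for movie recommendation; the wording is illustrative and is not the prompt template used in the paper's experiments.

```python
def zero_shot_prompt(liked_description: str, n: int = 5) -> str:
    """Task instruction plus the user's language-based preferences only."""
    return (
        "The following is a description of the kinds of movies a user likes:\n"
        f"{liked_description}\n"
        f"Recommend {n} movies this user would enjoy, as a numbered list."
    )

def few_shot_prompt(liked_items: list[str], n: int = 5) -> str:
    """Prepend the user's example items as in-context evidence before the request."""
    examples = "\n".join(f"- {title}" for title in liked_items)
    return (
        "A user likes the following movies:\n"
        f"{examples}\n"
        f"Recommend {n} other movies this user would enjoy, as a numbered list."
    )

print(zero_shot_prompt("Slow-burn sci-fi with strong world-building and little gore."))
print(few_shot_prompt(["Arrival", "Blade Runner 2049", "Moon"]))
```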
2307.14430 | 11 | # 2 Skills framework
First, we propose definitions of skills and ordered skill sets in order to formalize our intuition around how models learn skills, and we demonstrate that not just any existing notion of data groups can characterize an ordered skill set in the dataset. Then, we demonstrate the existence of ordered skill sets on synthetic and real data, which shows how viewing data through a skills-based framework can help with training and understanding model performance. Finally, we explore unsupervised skill recovery from data, finding that embedding-based approaches do not adequately recover synthetic skills.
# 2.1 Definitions
We first present a definition of an individual skill. Let the input space of all possible text data be X, where x ∈ X is an individual text sample that a next-token-prediction LM f ∈ F : X → X is trained on. We quantify learning via a metric L : F × X → R, which maps from a model and evaluation data to a scalar quantity. In our setup, we use the cross-entropy validation loss applied over next-token predictions as our metric L. | 2307.14430#11 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
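
The metric L defined in the chunk above is average next-token cross-entropy on held-out data. A self-contained sketch of that computation, with random logits standing in for a real LM's outputs (an assumption for illustration only).

```python
import torch
import torch.nn.functional as F

def next_token_cross_entropy(logits: torch.Tensor, token_ids: torch.Tensor) -> float:
    """Average cross-entropy of predicting token t+1 from positions 0..t.

    logits:    (seq_len, vocab_size) model outputs for one sample
    token_ids: (seq_len,) the sample's token ids
    """
    shift_logits = logits[:-1]          # predictions for positions 1..seq_len-1
    shift_labels = token_ids[1:]        # the tokens actually observed there
    return F.cross_entropy(shift_logits, shift_labels).item()

# Stand-in for a real model: random logits over a 100-token vocabulary.
torch.manual_seed(0)
seq_len, vocab = 16, 100
loss = next_token_cross_entropy(torch.randn(seq_len, vocab), torch.randint(0, vocab, (seq_len,)))
print(f"validation metric L = {loss:.3f}")   # roughly log(100) for an untrained model
```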
2307.14225 | 12 | # 3 EXPERIMENTAL SETUP
To study the relationship between item-based and language-based preferences, and their utility for recommendation, we require a parallel corpus from the same raters providing both types of information that is maximally consistent. There is a lack of existing parallel corpora of this nature; therefore, a key contribution of our work is an experiment design that allows such consistent information to be collected. Specifically, we designed a two-phase user study where raters were (1) asked to rate items, and to describe their preferences in natural language, then (2) recommendations generated based on both types of preferences were uniformly rated by the raters. Hence we perform our experiments
in the movie domain, which is frequently used for research since movie recommendation is familiar to numerous user study participants. | 2307.14225#12 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 12 | Definition 2.1 (Skill). A skill s is a unit of behavior with associated data Xs ⊂ X such that if f is trained on a dataset Ds ⊂ Xs, then f has an improved metric L on samples belonging to Xs\Ds on average.
This definition of a skill is flexible: it simply means that given a training dataset associated with the skill, a model f has an improved metric (e.g., decreasing validation loss) when evaluated on validation data associated with this skill. Under this definition, a skill could be a granular task, such as Spanish question generation for a subset of Wikipedia articles, or can be defined over a data source, such as next-token prediction of legal data from tax court rulings. However, our next definition, the ordered skill set, has a more specific construction and provides a framework for how models learn across dependent skills. | 2307.14430#12 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 13 | in the movie domain, being frequently used for research as movie recommendation is familiar to numerous user study participants.
A key concern in any parallel corpus of this nature is that people may say they like items with particular characteristics, but then consume and positively react to quite different items. For instance, this has been observed where people indicate aspirations (e.g., subscribe to particular podcasts) yet actually consume quite different items (e.g., listen to others) [33]. In general, it has been observed that intentions (such as intending to choose healthy food) often do not lead to actual behaviors [43]. Such disparity between corpora could lead to inaccurate prediction about the utility of particular information for recommendation tasks. As such, one of our key considerations was to maximize consistency.
# 3.1 Phase 1: Preference Elicitation
Our preference elicitation design collected natural language descriptions of rater interests both at the start and at the end of a questionnaire. Specifically, raters were first asked to write short paragraphs describing the sorts of movies they liked, as well as the sorts of movies they disliked (free-form text, minimum 150 characters). These initial liked (+) and disliked (-) self-descriptions for rater r are respectively denoted as desc_r^+ and desc_r^-. | 2307.14225#13 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
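
The elicited quantities described in the chunk above (initial and final free-text descriptions desc_r^+ / desc_r^-, plus five liked and five disliked items) can be grouped per rater. A minimal sketch of such a record; the class and field names are my own, not the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class RaterPreferences:
    """Parallel language- and item-based preferences for one rater r."""
    rater_id: str
    desc_liked_initial: str            # desc_r^+ written before item selection
    desc_disliked_initial: str         # desc_r^-
    liked_items: list[str] = field(default_factory=list)      # item_{r,1..5}^+
    disliked_items: list[str] = field(default_factory=list)   # item_{r,1..5}^-
    desc_liked_final: str = ""         # rewritten after seeing the five liked items
    desc_disliked_final: str = ""

r = RaterPreferences(
    rater_id="r042",
    desc_liked_initial="I enjoy character-driven dramas and clever heist movies...",
    desc_disliked_initial="I dislike slasher horror and gross-out comedies...",
    liked_items=["Ocean's Eleven", "Knives Out", "Lady Bird", "Whiplash", "Arrival"],
    disliked_items=["Saw", "Scary Movie 4", "Grown Ups 2", "The Human Centipede", "Movie 43"],
)
print(r.rater_id, len(r.liked_items), "liked items")
```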
2307.14430 | 13 | Definition 2.2 (Ordered skill set, skills graph). An ordered skill set for f is a collection of skills S = {s1, . . . , sk} over which there is a directed skills graph G = (S, E) on the skill set that is neither complete nor empty, where (si, sj) ∈ E if the amount of data needed to learn sj when uniformly sampling from Dsi ∪ Dsj is no more than the amount of data needed when sampling only from Dsj. We equate learning a skill sj to f attaining a certain value of L or lower on average over Xsj\Dsj.
This definition isolates complete and empty graphs as extrema that do not capture meaningful sets of skills. We discuss the three types of skill graphs (complete, empty, intermediate) and their implications for data selection. In particular, we discuss how several initial attempts at defining skills over datasets via semantic groupings resulted in the extrema cases (see Appendix C.2 for full results): | 2307.14430#13 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
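
Definition 2.2 above turns on whether mixing a candidate prerequisite s_i into the training data reduces the amount of data needed to learn s_j to a target loss. A schematic sketch of that edge test; `train_until` is an assumed callable standing in for an actual training run, not an API from the paper.

```python
from typing import Callable, Sequence

def has_edge(
    train_until: Callable[[Sequence[str], float], int],
    skill_i: str,
    skill_j: str,
    target_loss: float,
) -> bool:
    """Estimate whether (s_i, s_j) is an edge of the skills graph.

    train_until(skills, target_loss) is assumed to train a fresh model on a
    uniform mixture of the given skills' data and return the number of training
    samples consumed before validation loss on skill_j drops below target_loss.
    """
    n_alone = train_until([skill_j], target_loss)            # D_sj only
    n_mixed = train_until([skill_i, skill_j], target_loss)   # D_si union D_sj
    return n_mixed <= n_alone

# Toy stand-in: pretend skill "s1" roughly halves the data needed for "s3".
fake_costs = {("s3",): 6000, ("s1", "s3"): 3800}
edge = has_edge(lambda skills, tl: fake_costs[tuple(skills)], "s1", "s3", target_loss=0.01)
print("edge s1 -> s3:", edge)
```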
2307.14225 | 14 | Next, raters were asked to name five example items (here, movies) that they like. This was enabled using an online query auto-completion system (similar to a modern search engine) where the rater could start typing the name of a movie and this was completed to specific (fully illustrated) movies. The auto-completion included the top 10,000 movies ranked according to the number of ratings in the MovieLens 25M dataset [18] to ensure coverage of even uncommon movies. As raters made choices, these were placed into a list which could then be modified. Each rater was then asked to repeat this process to select five examples of movies they do not like. These liked (+) and disliked (-) item selections for rater r and item selection index i ∈ {1, . . . , 5} are respectively denoted as item_{r,i}^+ and item_{r,i}^-.
Finally, raters were shown the five liked movies and asked again to write the short paragraph describing the sorts of movies they liked (which we refer to as the final description). This was repeated for the five disliked movies.
# 3.2 Phase 2: Recommendation Feedback Collection | 2307.14225#14 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
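
A minimal sketch of the auto-completion candidate pool described in the chunk above: rank MovieLens 25M titles by rating count, keep the top 10,000, and match typed prefixes. The file paths and the prefix-matching rule are assumptions rather than the study's implementation.

```python
import pandas as pd

# Assumed local copies of the MovieLens 25M files (ml-25m/ratings.csv, movies.csv).
ratings = pd.read_csv("ml-25m/ratings.csv", usecols=["movieId"])
movies = pd.read_csv("ml-25m/movies.csv")   # movieId, title, genres

# Rank titles by number of ratings and keep the 10,000 most-rated ones.
counts = ratings["movieId"].value_counts().rename("num_ratings")
top10k = (
    movies.merge(counts, left_on="movieId", right_index=True)
          .sort_values("num_ratings", ascending=False)
          .head(10_000)
)

def autocomplete(prefix: str, k: int = 10) -> list[str]:
    """Return up to k popular titles starting with the typed prefix."""
    mask = top10k["title"].str.lower().str.startswith(prefix.lower())
    return top10k.loc[mask, "title"].head(k).tolist()

print(autocomplete("star w"))
```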
2307.14430 | 14 | ⢠The complete graph demonstrates that all skills inï¬uence each other. A random partition is an example of a skill set that yields a complete graph. This graph suggests that the best approach for learning any skill or set of skills is random sampling on the dataset. This is not a setting where we can gain much with skill-based sampling. For example, using instruction types as skills on the Alpaca dataset results in a nearly complete estimated skills graph (97.4% dense), and we
[Figure 3 appears here: validation-loss curves versus training steps for LEGO skill 3, Addition skill 1, Spanish QG, and stance detection, comparing training on the skill alone against training on it together with related skills.] | 2307.14430#14 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
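
For contrast with the random-sampling baseline discussed in the bullet above, here is a minimal sketch of skill-stratified sampling: choose a skill uniformly, then an example uniformly within that skill, so rare skills are not drowned out by large ones. The toy data sizes are made up.

```python
import random

def skill_stratified_batch(data_by_skill: dict[str, list[str]], batch_size: int, seed: int = 0):
    """Sample skills uniformly, then sample uniformly within each chosen skill.

    Unlike random sampling over the pooled dataset, rare skills are not
    underrepresented just because they have fewer examples.
    """
    rng = random.Random(seed)
    skills = list(data_by_skill)
    return [rng.choice(data_by_skill[rng.choice(skills)]) for _ in range(batch_size)]

data = {
    "skill_1": [f"s1 ex{i}" for i in range(900)],   # heavily represented
    "skill_2": [f"s2 ex{i}" for i in range(90)],
    "skill_3": [f"s3 ex{i}" for i in range(10)],    # rare skill
}
batch = skill_stratified_batch(data, batch_size=9)
print(batch)
```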
2307.14225 | 15 | # 3.2 Phase 2: Recommendation Feedback Collection
To enable a fair comparison of item-based and language-based recommendation algorithms, a second phase of our user study requested raters to assess the quality of recommendations made by a number of recommender algorithms based on the information collected in Phase 1. In particular, past work has observed that completeness of labels is important to ensure fundamentally diï¬erent algorithms can be compared reliably [2, 25].
Desiderata for recommender selection: We aimed for a mix of item-based, language-based, and unbiased recom- mendations. Hence, we collected user feedback (had they seen it or would they see it, and a 1â5 rating in either case) on a shuï¬ed set of 40 movies (displaying both a thumbnail and a short plot synopsis) drawn from four sample pools:
⢠SP-RandPop, an unbiased sample of popular items: 10 randomly selected top popular items (ranked 1-1000 in terms of number of MovieLens ratings);
⢠SP-RandMidPop, an unbiased sample of less popular items: 10 randomly selected less popular items (ranked 1001-5000 in terms of number of MovieLens ratings); | 2307.14225#15 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
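
A minimal sketch of the review-based late-fusion idea behind SP-BM25-Fusion described above: score every review against the rater's preference text with BM25 and rank each movie by its best-scoring review. It uses the rank_bm25 package and toy reviews in place of the Amazon Movie Review corpus, so it illustrates the scoring scheme rather than the paper's pipeline.

```python
from rank_bm25 import BM25Okapi

# Toy stand-ins for (movie, review text) pairs from a review corpus.
reviews = [
    ("Arrival", "a cerebral, slow-burn science fiction film about language and time"),
    ("Arrival", "beautiful and emotional first-contact story"),
    ("Saw", "gory horror with elaborate traps and a grim tone"),
    ("Paddington 2", "warm, funny family film with gentle humour"),
]
tokenized = [text.lower().split() for _, text in reviews]
bm25 = BM25Okapi(tokenized)

def rank_by_max_review_score(preference_text: str, top_n: int = 3):
    """Late fusion: each item is scored by its single best-matching review."""
    scores = bm25.get_scores(preference_text.lower().split())
    best = {}
    for (title, _), score in zip(reviews, scores):
        best[title] = max(score, best.get(title, float("-inf")))
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(rank_by_max_review_score("I like slow-burn science fiction about language"))
```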
2307.14430 | 15 | Figure 3: On the LEGO synthetic, 3-digit addition, and Natural Instructions, we identify examples of ordered skill sets in which training on a mixture of skills helps learn an individual skill faster than just training on that skill itself, given a fixed training budget.
find that stratified sampling on these skills only improves validation loss per skill by 0.007 points over random sampling on average (Figure 2 left), suggesting that utilizing skills does not improve model performance in this case.
⢠The empty graph demonstrates that each skill is independent. This can occur if skills are too granular; for instance, learning Spanish math problems is unlikely to help with English poem generation. This graph suggests that the best approach for learning an individual skill is to train on the skill itself. We see that empty graphs exist in real data; in Figure 2 (center), using data sources as skills on the Pile of Law [21] results in a nearly empty skills graph (3.9% dense). | 2307.14430#15 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 16 | ⢠SP-RandMidPop, an unbiased sample of less popular items: 10 randomly selected less popular items (ranked 1001-5000 in terms of number of MovieLens ratings);
• SP-EASE, personalized item-based recommendations: Top-10 from the strong baseline EASE [42] collaborative filtering recommender using hyperparameter λ = 5000.0 tuned on a set of held-out pilot data from 15 users;
• SP-BM25-Fusion, personalized language-based recommendations: Top-10 from Sparse Review-based Late Fusion Retrieval that, like [3], computes BM25 match between all item reviews in the Amazon Movie Review corpus (v2) [45] and the rater's natural language preferences (desc+), ranking items by maximal BM25-scoring review.
| 2307.14225#16 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
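
The SP-EASE pool described above is produced by the EASE item-based recommender [42], which has a well-known closed-form solution. A minimal NumPy sketch with toy interactions and a small regularizer in place of the paper's tuned λ = 5000.0; it is an illustration of the method, not the study's implementation.

```python
import numpy as np

def ease_item_weights(X: np.ndarray, lam: float = 5000.0) -> np.ndarray:
    """Closed-form EASE item-item weight matrix (Steck, 2019).

    X: binary user-item interaction matrix (num_users x num_items).
    Solves min_B ||X - XB||^2 + lam*||B||^2 subject to diag(B) = 0.
    """
    G = X.T @ X + lam * np.eye(X.shape[1])
    P = np.linalg.inv(G)
    B = -P / np.diag(P)           # column-wise division by the diagonal of P
    np.fill_diagonal(B, 0.0)
    return B

# Toy interactions: 4 users x 5 items; score items for user 0.
X = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [1, 1, 1, 0, 0]], dtype=float)
B = ease_item_weights(X, lam=5.0)
scores = X[0] @ B                 # rank unseen items by predicted score
print(np.round(scores, 3))
```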
2307.14430 | 16 | ⢠Graphs that are neither empty nor complete thus suggest a nontrivial order of how skill inï¬uence each other. This is the setting in which we expect that identifying skills and exploiting their ordering will help the most. In Figure 2 right, we use task categories, which capture broader reasoning patterns, as skills on Natural Instructions and ï¬nd that the estimated graph has intermediate density (42.7% dense). We show concrete examples of how skills can be learned more efï¬ciently on Natural Instructions in Section 2.2.
While these intuitive groupings result in ordered skill sets on some datasets (e.g., task categories on NI), this is not always the case (e.g., instruction types on Alpaca and sources on Pile of Law). Even though these groupings capture some notion of diversity in the dataset, our findings suggest that not just any semantic grouping induces an ordered skill set. We now empirically demonstrate that our definition of ordered skill sets aligns with how models learn and can be exploited for more data-efficient training.
# 2.2 Examples of skills and ordered skill sets
We provide examples of ordered skill sets on the LEGO synthetic dataset, an addition synthetic dataset, and subsets of the Natural Instructions dataset. On these datasets, we find that certain skills are better learned when trained along with their prerequisite skills rather than in isolation. | 2307.14430#16 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 17 | Note that SP-RandPop and SP-RandMidPop have 10 different movies for each rater, and that these are completely unbiased (as they do not leverage any user information, there can be no preference towards rating items that are more obvious recommendations, or other potential sources of bias). On the other hand, SP-EASE consists of EASE recommendations (based on the user item preferences), which we also evaluate as a recommender, so there is some bias when using this set. We thus refer to the merged set of SP-RandPop and SP-RandMidPop as an Unbiased Set in the analysis, with performance on this set being key to our conclusions.
# 3.3 Design Consequences | 2307.14225#17 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 17 | LEGO skills The LEGO synthetic, first introduced in [72], can evaluate a model's ability to follow a chain of reasoning. In this synthetic, the letters of the alphabet, A, are variables each with some binary label in {0, 1}. An individual sample consists of k clauses for some fixed k across the dataset, each of the form a = g x where a, x ∈ A and g is either a negation ("not") or assertion ("val"), e.g. we assign a to the value of x, or we assign a to the opposite label. At the end of the sentence, we prompt the model for what the value of one of these variables is. Two samples x ∈ X are given below for k = 5:
Input: b = not y, r = val 1, m = val b, q = val m, y = not r. Output: b = 1.
Input: c = val x, p = val f, x = val k, f = not c, k = val 0. Output: k = 0. | 2307.14430#17 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
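
A small generator for LEGO-style samples in the format shown above (k clauses of `a = val x` / `a = not x` with one variable tied to a literal 0/1, then the label of a queried variable). The randomization details are my own choices, not necessarily those of the original LEGO construction [72].

```python
import random
import string

def make_lego_sample(k: int = 5, query_depth: int = 1, seed=None) -> str:
    """Build one k-clause LEGO sample and reveal the variable at `query_depth`
    steps along the reasoning chain (depth 1 is the variable tied to 0/1)."""
    rng = random.Random(seed)
    vars_ = rng.sample(string.ascii_lowercase, k)
    ops = [rng.choice(["val", "not"]) for _ in range(k)]

    # Resolve labels along the chain: vars_[0] depends on a literal 0/1.
    root_bit = rng.randint(0, 1)
    labels = {vars_[0]: root_bit if ops[0] == "val" else 1 - root_bit}
    clauses = [f"{vars_[0]} = {ops[0]} {root_bit}"]
    for i in range(1, k):
        prev = vars_[i - 1]
        labels[vars_[i]] = labels[prev] if ops[i] == "val" else 1 - labels[prev]
        clauses.append(f"{vars_[i]} = {ops[i]} {prev}")

    rng.shuffle(clauses)  # clause order is scrambled, as in the examples above
    target = vars_[query_depth - 1]
    return f"Input: {', '.join(clauses)}. Output: {target} = {labels[target]}."

print(make_lego_sample(k=5, query_depth=3, seed=0))
```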
2307.14225 | 18 | # 3.3 Design Consequences
Importantly, to ensure a maximally fair comparison of language-based and item-based approaches, consistency of the two types of preferences was key in our data collection approach. As such, we directly crowd-sourced both types of preferences from raters in sequence, with textual descriptions collected twice: before and after self-selected item ratings. This required control means the amount of data per rater must be small. It is also a realistic amount of preference information that may be required of a recommendation recipient in a near-cold-start conversational setting. As a consequence of the manual effort required, the number of raters recruited also took into consideration the required power of the algorithmic comparison, with a key contribution being to the protocol developed rather than data scale.
Our approach thus contrasts with alternatives that extract reviews or preference descriptions in bulk from online content, similar to [4, 32] (where preferences do not necessarily capture a person's interests fully), and/or rely on item preferences expressed either explicitly or implicitly over time (during which preferences may change).
# 4 METHODS | 2307.14225#18 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 18 | These samples each correspond to a chain of reasoning; for instance the first sample has the chain r, y, b, m, q, where knowing q's label requires the most reasoning steps. We define the ith skill si as the model's ability to know the ith variable of the chain. From our example above, the first sample belongs to Xs3 and the second sample belongs to Xs1. To demonstrate the existence of ordered skill sets, we continually pre-train the 125M parameter GPT-Neo model [5, 13] over various mixtures of LEGO skills with k = 5. In Figure 3 (left), we find that in 35.9% fewer training steps, training on a balanced mixture of Xs1, Xs2, and Xs3 resulted in the same validation loss of 0.01 as training solely on Xs3. This suggests that s1, s2 helped unlock performance on s3 and that there exist edges from s1 or s2 to s3 in the skill graph. Additional observations are available in Appendix D.1, where we examine other edges as well as more complex reasoning chains, and the full skills graph corresponding to the ordered skill set for LEGO with k = 5 is in Figure 10. | 2307.14430#18 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14430 | 19 | Addition skills We consider a variant of a synthetic 5-digit addition dataset analyzed in [44]. We show the existence of ordered skill sets for a simplified 3-digit addition dataset where we treat each digit prediction as a skill; the outputs, in this case, are the integers {0, 1, ..., 9}. Examples are of the following form:
Input: A = 1 0 6 + 0 7 1 , A 0 = ? Output: 7
Input: A = 6 0 6 + 8 7 9 , A 2 = ? Output: 4
where "A 0" refers to the ones digit of the output (s1) and "A 2" refers to the hundreds digit (s3). In Figure 3 (center), we find that in 32% fewer training steps, training on a balanced mixture of Xs1 and Xs2 resulted in the same validation loss of 0.01 as training solely on Xs1. That is, the ones digit addition skill can be improved by simultaneously learning the tens digit addition skill, even though the former should not require information from the latter; this is in line with observations from prior work that models do not always learn the ones digit addition first [44]. The full skills graph corresponding to the ordered skill set over 3-digit addition is in Figure 11. | 2307.14430#19 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
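The digit-prediction format above is easy to reproduce; this is a small illustrative generator (our sketch, not the authors' data pipeline), where skill s_{i+1} corresponds to querying digit "A i" of the sum.

```python
import random

def make_addition_sample(num_digits=3, digit_index=0, rng=None):
    """Generate one digit-prediction addition sample.

    Skill s_{digit_index+1} asks for digit `digit_index` of the sum
    (0 = ones, 1 = tens, ...), written "A <digit_index> = ?" as in the text.
    """
    rng = rng or random.Random()
    lo, hi = 0, 10 ** num_digits - 1
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    answer = ((a + b) // 10 ** digit_index) % 10

    def spaced(n):
        # Zero-pad and space the digits, matching the "1 0 6" style above.
        return " ".join(str(n).zfill(num_digits))

    prompt = f"Input: A = {spaced(a)} + {spaced(b)} , A {digit_index} = ? Output:"
    return prompt, answer

if __name__ == "__main__":
    rng = random.Random(0)
    for idx in range(3):
        prompt, ans = make_addition_sample(digit_index=idx, rng=rng)
        print(prompt, ans)
```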
2307.14225 | 20 | # 4.1 Baselines
To leverage the item and language preferences elicited in Phase 1, we evaluate CF methods as well as a language-based baseline previously found particularly effective [2, 11].1 Most baseline item-based CF methods use the default configuration in MyMediaLite [14], including MostPopular: ranking items by the number of ratings in the dataset, Item-kNN: Item-based k-Nearest Neighbours [40], WR-MF: Weighted Regularized Matrix Factorization, a regularized version of singular value decomposition [23], and BPR-SLIM: a Sparse Linear Method (SLIM) that learns a sparse weighting vector over items rated, via a regularized optimization approach [34, 38]. We also compare against our own implementation of the more recent state-of-the-art item-based EASE recommender [42]. As a language-based baseline, we compare against BM25-Fusion, described in Section 3.2. Finally, we also evaluate a random ordering of items in the rater's pool (Random) to calibrate against this uninformed baseline. | 2307.14225#20 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
# 4.2 Prompting Methods | 2307.14225#20 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
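Of the item-based baselines listed above, EASE [42] admits a closed-form solution, which makes it a convenient reference point. The sketch below is our illustration of that closed form on a toy binary rating matrix; the regularization strength and matrix sizes are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def ease_weights(X, lam=10.0):
    """Closed-form EASE: P = (X^T X + lam*I)^{-1}, B_ij = -P_ij / P_jj for
    i != j, with the diagonal of B forced to zero (no self-recommendation)."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    P = np.linalg.inv(G)
    B = -P / np.diag(P)          # divides each column j by P[j, j]
    np.fill_diagonal(B, 0.0)
    return B

def recommend(X, B, user, top_n=5):
    scores = X[user] @ B
    scores[X[user] > 0] = -np.inf   # do not re-recommend already-rated items
    return np.argsort(-scores)[:top_n]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = (rng.random((50, 20)) < 0.15).astype(float)   # toy user-item matrix
    B = ease_weights(X, lam=10.0)
    print("top items for user 0:", recommend(X, B, user=0))
```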
2307.14430 | 20 | Natural Instructions (NI) skills We show that ordered skill sets exist in NI [63] when we treat task categories as skills.
• In Figure 3 (top right), we show that ordered skill sets exist over crosslingual task categories. Training on Spanish question generation (QG) along with equal parts of English QG, Spanish question answering (QA), and English QA results in 4.1% lower validation loss than training only on Spanish QG. Remarkably, the former only uses 25% of the latter's Spanish QG data. This suggests that there are edges from Spanish QA, English QA, and English QG to Spanish QG.
• In Figure 3 (bottom right), we see that training on the task category Text Matching along with Stance Detection helps decrease the loss on Stance Detection by 11%. This suggests that these categories, which both involve understanding the relationship between two input texts, share an edge.
The full skills graphs corresponding to the ordered skill sets over these task categories are in Figure 13. While equating task categories to skills may be noisy, these examples suggest that there is signal within real data indicating that ordered skill sets can improve data efficiency. | 2307.14430#20 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 21 | # 4.2 Prompting Methods
We experiment with a variety of prompting strategies using a variant of the PaLM model (62 billion parameters in size, trained over 1.4 trillion tokens) [9], which we denote moving forward as simply LLM. Notationally, we assume t is the specific target rater for the recommendation, whereas r denotes a generic rater. All prompts are presented in two parts: a prefix followed by a suffix, which is always the name of the item (movie) to be scored for the target user,
1 Notably, Dacrema et al. [11] observe that the neural methods do not outperform these baselines. | 2307.14225#21 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
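The ranking protocol just described (score each candidate item as the suffix of a prompt prefix by its log likelihood under the LLM, then sort) can be sketched as follows. The character-bigram scorer is a self-contained stand-in so the snippet runs on its own; in practice it would be replaced by a call that sums the target LLM's token log-probabilities for the suffix given the prefix.

```python
import math
from collections import Counter

def toy_suffix_logprob(prefix, suffix):
    """Placeholder scorer: a character bigram model fit on the prefix.
    Stands in for sum_t log p_LLM(suffix_t | prefix, suffix_<t)."""
    bigrams = Counter(zip(prefix, prefix[1:]))
    unigrams = Counter(prefix)
    logp, prev = 0.0, prefix[-1] if prefix else " "
    for ch in suffix:
        num = bigrams.get((prev, ch), 0) + 1      # add-one smoothing
        den = unigrams.get(prev, 0) + 27
        logp += math.log(num / den)
        prev = ch
    return logp

def rank_candidates(prefix, candidates, score_fn=toy_suffix_logprob):
    """Score every candidate item as a suffix of the prompt, sort descending."""
    scored = [(score_fn(prefix, item), item) for item in candidates]
    return [item for _, item in sorted(scored, reverse=True)]

if __name__ == "__main__":
    prefix = ("I describe the movies I like as follows: light-hearted comedies "
              "with witty dialogue. Then I would also like ")
    pool = ["Groundhog Day", "Saw", "The Grand Budapest Hotel", "Annabelle"]
    print(rank_candidates(prefix, pool))
```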
2307.14430 | 21 | 2.3 Skill recovery A final component of characterizing skills is unsupervised recovery of ordered skill sets. We consider embedding-based clustering approaches and a loss-based clustering approach for recovering LEGO skills. When clustering data using various trained and pre-trained embeddings, we find that they were unable to achieve above 39% accuracy on LEGO. Instead, we find that taking 10 random training runs and clustering data by their loss per timestep per run recovers the skills with 61% accuracy (Table 3). The intuition behind this method is that the validation losses on points from the same skill have similar trajectories as models learn. We discuss this approach more in Appendix D.2.
3 Skills-based data selection Now that we have established the existence of ordered skill sets, we discuss how to use them for data selection. We state the data selection problem for learning across skills in Section 3.1. We discuss how to learn the skills graph that will be exploited in our data selection methods in Section 3.2. We then introduce two sampling methods that utilize the graph, a simple skill-stratified sampling method and the online sampling method SKILL-IT, in Section 3.3.
# 3.1 Problem statement | 2307.14430#21 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
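A minimal sketch of the loss-trajectory clustering idea described above: represent each sample by its vector of validation losses across checkpoints and runs, then cluster those vectors. The losses here are synthetic and the k-means loop is deliberately plain; this is our paraphrase of the approach, not the paper's implementation.

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Plain k-means; `features` is (n_samples, n_checkpoints * n_runs)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = features[assign == j].mean(axis=0)
    return assign

if __name__ == "__main__":
    # Synthetic stand-in: 3 latent skills, 10 runs x 8 checkpoints of losses.
    rng = np.random.default_rng(1)
    n_per_skill, n_traj = 30, 10 * 8
    trajectories = []
    for skill in range(3):
        # Each skill's loss decays at its own rate; samples add small noise.
        base = 2.0 * np.exp(-np.linspace(0, 3, n_traj) * (0.5 + skill))
        trajectories.append(base + 0.05 * rng.standard_normal((n_per_skill, n_traj)))
    losses = np.vstack(trajectories)
    labels = kmeans(losses, k=3)
    print("cluster sizes:", np.bincount(labels))
```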
2307.14225 | 22 | denoted as ⟨item*_t⟩. The score is computed as the log likelihood of the suffix and is used to rank all candidate item recommendations.2 As such, we can evaluate the score given by the LLM to every item in our target set of 40 items collected in Phase 2 of the data collection.
Given this notation, we devise Completion, Zero-shot, and Few-shot prompt templates for the case of Items only, Language only, and combined Language+Items defined as follows:
4.2.1 Items only. The completion approach is analogous to that used for the P5 model [16], except that we leverage a pretrained LLM in place of a custom-trained transformer. The remaining approaches are devised in this work:
• Completion: item+_{t,1}, item+_{t,2}, item+_{t,3}, item+_{t,4}, item+_{t,5}, ⟨item*_t⟩
• Zero-shot: I like the following movies: item+_{t,1}, item+_{t,2}, item+_{t,3}, item+_{t,4}, item+_{t,5}. Then I would also like ⟨item*_t⟩ | 2307.14225#22 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 22 | # 3.1 Problem statement
We are given an ordered training skill set Strain = {strain,1, . . . , strain,k} on the training data, each with associated support sets Xstrain,1, . . . , Xstrain,k, and an ordered evaluation skill set Seval = {seval,1, . . . , seval,m} of m evaluation skills on a separate evaluation dataset. We aim to select n samples from Strain via a mixture of training skills, p ∈ Δ^{k−1}, to achieve three goals depending on how Seval is constructed:
• Continual pre-training: when Seval = Strain, our goal is to select a mixture of training skills to learn all of them.
• Fine-tuning: when Seval ⊂ Strain, our goal is to select a mixture of training skills to learn an individual target skill or subset of these skills.
• Out-of-domain: when Seval ∩ Strain = ∅, our goal is to select a mixture of training skills to learn a disjoint set of evaluation skills we cannot train on. This can arise when we have a separate downstream validation dataset or the skills identified in the training dataset are noisy. | 2307.14430#22 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 23 | • Few-shot (k): Repeat r ∈ {1, . . . , k} { User Movie Preferences: item+_{r,1}, item+_{r,2}, item+_{r,3}, item+_{r,4}, Additional User Movie Preference: item+_{r,5} } User Movie Preferences: item+_{t,1}, item+_{t,2}, item+_{t,3}, item+_{t,4}, item+_{t,5}, Additional User Movie Preference: ⟨item*_t⟩
4.2.2 Language only.
• Completion: desc+_t ⟨item*_t⟩
• Zero-shot: I describe the movies I like as follows: desc+_t. Then I would also like ⟨item*_t⟩ | 2307.14225#23 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 23 | Furthermore, we have a skills graph G = (Strain ∪ Seval, E), where E ⊆ Strain × Seval and A ∈ R^{k×m} is a weighted adjacency submatrix, where Aij describes the strength of the edge from strain,i to seval,j. In Table 1, we summarize how the three different settings are constructed and how A varies across them. Next, we discuss how A can be estimated from the data.
# 3.2 Skills graph learning
The skills graph is important for determining how to sample from the ordered skill set for training efficiently. We present two approaches for learning the skills graph: brute-force and linear approximation. Algorithms are provided in Appendix B.2. By Definition 2.2, the brute-force way of identifying edges involves fixing an overall training budget of H steps and 1)
Setting / Seval / Skills graph:
Continual pre-training / Seval = Strain / A ∈ R^{k×k}, edges among all Strain
Fine-tuning / Seval ⊂ Strain / A ∈ R^{k×m}, edges from all training skills to target skill subset
Out-of-domain / Seval ∩ Strain = ∅ / A ∈ R^{k×m}, edges from all training skills to separate evaluation skill set | 2307.14430#23 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
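The linear-time graph-learning approximation described above (train briefly on each training skill and mark an edge to every evaluation skill whose loss drops) can be sketched as below. The train_on_skill and eval_loss callables, as well as the toy "model" in the demo, are placeholders for whatever training and evaluation loop is actually in use.

```python
import copy

def learn_graph_linear(model, train_skills, eval_skills,
                       train_on_skill, eval_loss, h_steps, eps=1e-4):
    """Approximate adjacency A (k x m): A[i][j] > 0 iff briefly training on
    training skill i reduces the validation loss of evaluation skill j."""
    A = [[0.0 for _ in eval_skills] for _ in train_skills]
    base = [eval_loss(model, s) for s in eval_skills]
    for i, s_train in enumerate(train_skills):
        probe = train_on_skill(copy.deepcopy(model), s_train, h_steps)
        for j, s_eval in enumerate(eval_skills):
            drop = base[j] - eval_loss(probe, s_eval)
            if drop > eps:
                A[i][j] = drop          # edge weight = observed loss decrease
    return A

if __name__ == "__main__":
    # Toy stand-in: a "model" is a dict of per-skill losses, and training on
    # skill i shaves loss off the skills listed in HELPS[i].
    HELPS = {"s1": ["s1", "s3"], "s2": ["s2", "s3"], "s3": ["s3"]}

    def train_on_skill(model, skill, steps):
        for helped in HELPS[skill]:
            model[helped] -= 0.01 * steps
        return model

    def eval_loss(model, skill):
        return model[skill]

    model = {"s1": 1.0, "s2": 1.0, "s3": 1.0}
    A = learn_graph_linear(model, ["s1", "s2", "s3"], ["s1", "s2", "s3"],
                           train_on_skill, eval_loss, h_steps=5)
    for skill, row in zip(["s1", "s2", "s3"], A):
        print(skill, "->", row)
```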
2307.14225 | 24 | • Few-shot (k): Repeat r ∈ {1, . . . , k} { User Description: desc+_r, User Movie Preferences: item+_{r,1}, item+_{r,2}, item+_{r,3}, item+_{r,4}, item+_{r,5} } User Description: desc+_t, User Movie Preferences: ⟨item*_t⟩
4.2.3 Language + item.
• Completion: desc+_t item+_{t,1}, item+_{t,2}, item+_{t,3}, item+_{t,4}, item+_{t,5}, ⟨item*_t⟩
• Zero-shot: I describe the movies I like as follows: desc+_t. I like the following movies: item+_{t,1}, item+_{t,2}, item+_{t,3}, item+_{t,4}, item+_{t,5}. Then I would also like ⟨item*_t⟩ | 2307.14225#24 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 24 | Table 1: Summary of three settings: continual pre-training, fine-tuning, and out-of-domain. These settings are determined by how Seval is defined and result in different skills graphs used for our sampling methods.
Algorithm 1: SKILL-IT Online Data Selection Algorithm
1: Input: Ordered training skill set Strain, ordered evaluation skill set Seval. Learning rate η, T rounds, n samples, H training steps per run for graph learning, model f1, window parameter w.
2: A ← LearnGraph(Strain, Seval, H, f1) (Alg. 2, 3).
3: Initialize p_1^i = exp(Σ_{j=1}^m Aij) for all i ∈ [k], the softmax over A.
4: for t = 1, . . . , T − 1 do
5: Observe losses Leval,j(ft) for all seval,j ∈ Seval.
6: Train model ft with n/T samples from mixture pt over Strain. Update model ft+1 = Φ(ft, pt).
7: Set p_{t+1}^i = exp(η
8: end for | 2307.14430#24 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
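Algorithm 1 can be sketched in a few lines: initialize the mixture as a softmax over the graph's row sums, then each round reweight skills by their graph-weighted evaluation losses. Step 7 is truncated in the text above, so the exponent used here (eta times A times the observed losses) is our illustrative reading rather than the paper's exact rule, and training is stubbed out with a simple multiplicative loss model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def skill_it_mixtures(A, losses_fn, train_fn, n, T, eta):
    """Online mixture selection in the spirit of Algorithm 1.

    A         : (k, m) skills-graph weights from LearnGraph.
    losses_fn : returns the current m evaluation losses L_eval,j(f_t).
    train_fn  : consumes (mixture, n_samples) and updates the model.
    """
    p = softmax(A.sum(axis=1))                   # step 3: softmax over A's row sums
    history = [p.copy()]
    for _ in range(T - 1):
        L = np.array(losses_fn(), dtype=float)   # step 5: observe per-skill losses
        train_fn(p, n // T)                      # step 6: train on n/T samples ~ p
        # step 7 (illustrative reading): upweight skills whose graph-weighted
        # downstream losses remain high.
        p = softmax(eta * (A @ L))
        history.append(p.copy())
    return history

if __name__ == "__main__":
    A = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.5],
                  [0.0, 0.0, 1.0]])
    losses = np.array([2.0, 2.0, 2.0])

    def losses_fn():
        return losses

    def train_fn(p, n_samples):
        # Toy dynamics: weight on a prerequisite skill shrinks the losses of
        # the skills it feeds (multiplicative loss model).
        losses[:] = losses * (1 - 0.3 * (A.T @ p))

    for p in skill_it_mixtures(A, losses_fn, train_fn, n=6000, T=6, eta=1.0):
        print(np.round(p, 3))
```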
2307.14225 | 25 | • Few-shot (k): Repeat r ∈ {1, . . . , k} { User Description: desc+_r, User Movie Preferences: item+_{r,1}, item+_{r,2}, item+_{r,3}, item+_{r,4}, Additional User Movie Preference: item+_{r,5} } User Description: desc+_t, User Movie Preferences: item+_{t,1}, item+_{t,2}, item+_{t,3}, item+_{t,4}, item+_{t,5}, Additional User Movie Preference: ⟨item*_t⟩
4.2.4 Negative Language Variants. For the zero-shot cases, we also experimented with negative language variants that inserted the sentences "I dislike the following movies: item−_{t,1}, . . . , item−_{t,4}" for Item prompts and "I describe the movies I dislike as follows: desc−_t" for Language prompts after their positive counterparts in the prompts labeled Pos+Neg. | 2307.14225#25 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
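To make the template families of Sections 4.2.1-4.2.4 concrete, here is a small helper (our illustration, not the paper's code) that assembles a zero-shot prefix from item and/or language preferences, optionally inserting the Pos+Neg dislike sentences after their positive counterparts; the candidate item name is appended afterwards as the suffix to be scored.

```python
def zero_shot_prefix(liked_items=None, liked_desc=None,
                     disliked_items=None, disliked_desc=None):
    """Build a zero-shot prompt prefix; the candidate item name is appended
    later as the suffix whose log likelihood the LLM scores."""
    parts = []
    if liked_desc:
        parts.append(f"I describe the movies I like as follows: {liked_desc}.")
    if disliked_desc:
        parts.append(f"I describe the movies I dislike as follows: {disliked_desc}.")
    if liked_items:
        parts.append("I like the following movies: " + ", ".join(liked_items) + ".")
    if disliked_items:
        parts.append("I dislike the following movies: " + ", ".join(disliked_items) + ".")
    parts.append("Then I would also like ")
    return " ".join(parts)

if __name__ == "__main__":
    prefix = zero_shot_prefix(
        liked_items=["The Big Lebowski", "Paddington 2"],
        liked_desc="feel-good comedies I can watch with friends",
        disliked_items=["The Conjuring"],
    )
    candidate = "Groundhog Day"
    print(prefix + candidate)
```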
2307.14430 | 25 | training and evaluating the model on each si and 2) training the model on each pair of (si, sj) and evaluating on si and sj. If the loss on sj when trained on both si and sj is lower, there exists an edge from si to sj. This approach has runtime O(Hk^2), which is feasible for small k. When k is large, we can approximate this approach in linear time by training on each si for h < H steps and setting Aij > 0 if the loss on sj decreases over h steps, for a runtime of O(hk). This linear approach is necessary in the out-of-domain setting when Seval and Strain are disjoint, as we do not train on data associated with Seval. In addition, both graph learning approaches can be performed on a smaller model, and the learned graph can be used for data selection for training a larger model (Appendix D.4).
# 3.3 Skills graph-aware sampling | 2307.14430#25 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14430 | 26 | # 3.3 Skills graph-aware sampling
We present two approaches for sampling over the mixture of training skills according to the skills graph: skill-stratified sampling, which samples uniformly over relevant training skills according to A, and SKILL-IT, which is an online generalization that incorporates knowledge of how skills are being learned throughout training.
# 3.3.1 Skill-stratified sampling | 2307.14430#26 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 27 | Rater #1
Liked Movies: I like comedy movies because i feel happy whenever i watch them. We can watch those movies with a group of people. I like to watch comedy movies because there will be a lot of fun and entertainment. Its very exciting to watch with friends and family.so,I always watch comedy movies whenever I get time.
Disliked Movies: I am not at all interested in watching horror movies because whenever I feel alone it will always disturb me with the characters in the movie. It will be affected by dreams and mood always. SO, mostly i ignore watching them when i stay alone in the home. Horror is scary. I don't like the feeling of being terrified. Some are either sensitive to suspense, gore or frightful images, or they may have had an experience in their life that makes horror seem real. I dislike action genre movies because watching fights gives me a headache and bored me. These kinds of movies mainly concentrate on violence and physical feats.
Rater #2
Fantasy films often have an element of magic, myth, wonder,and the extraordinary. They may appeal to both children and adults, depending upon | 2307.14225#27 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 27 | # 3.3.1 Skill-stratified sampling
A straightforward sampling approach is to discard training skills that do not benefit the evaluation skills and sample uniformly over the set of relevant training skills, which we call skill-stratified sampling. For continual pre-training, the relevant skills are the entire training skill set; for each strain,i ∈ Strain, Pr(strain,i) = 1/k. This enables each skill to have sufficient training data. For fine-tuning, the relevant skills are the target skills and prerequisite skills, which can be identified via positive entries of the ith column of A with Sprereq = {strain,i : ∃ seval,j s.t. Aij > 0}. We then set Pr(s) = 1/|Sprereq ∪ Seval| for s ∈ Sprereq ∪ Seval. For the out-of-domain setting, skill-stratified sampling is over the set of prerequisite skills. For each s ∈ Sprereq, we set Pr(s) = 1/|Sprereq|.
# 3.3.2 SKILL-IT online data selection algorithm | 2307.14430#27 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
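The skill-stratified weights above reduce to a few lines: keep the target skills plus every training skill with a positive edge into them, and spread probability uniformly over that set. A sketch for the fine-tuning setting, with an arbitrary toy graph (names and values illustrative):

```python
import numpy as np

def skill_stratified_mixture(A, train_skills, target_idx):
    """Uniform mixture over target skills plus their prerequisites.

    A          : (k, m) adjacency submatrix from training to evaluation skills.
    target_idx : indices of the target (evaluation) skills, assumed here to
                 also index into `train_skills` (fine-tuning setting).
    """
    k = len(train_skills)
    prereq = {i for j in target_idx for i in range(k) if A[i, j] > 0}
    relevant = sorted(prereq | set(target_idx))
    p = np.zeros(k)
    p[relevant] = 1.0 / len(relevant)
    return p

if __name__ == "__main__":
    # Toy graph: skills 0 and 1 both feed skill 2, the fine-tuning target.
    A = np.array([[1.0, 0.0, 0.4],
                  [0.0, 1.0, 0.6],
                  [0.0, 0.0, 1.0]])
    print(skill_stratified_mixture(A, train_skills=["s1", "s2", "s3"], target_idx=[2]))
```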
2307.14225 | 28 | films often have an element of magic, myth, wonder,and the extraordinary. They may appeal to both children and adults, depending upon the particular film. In fantasy films, the hero often undergoes some kind of mystical experience. I like comedy genre movies, while watching comedy movies I will feel very happy and relaxed. Comedy films are designed to make the audience laugh. It has different kinds of categories in comedy genres such as horror comedy, romantic comedy, comedy thriller,musical-comedy. Rater #3 | 2307.14225#28 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 28 | # 3.3.2 SKILL-IT online data selection algorithm
Despite accounting for prerequisite skills, one shortcoming of skill-stratified sampling is that even if a skill has already obtained sufficiently low validation loss early during training, we will continue to allocate the same weight to that skill throughout training. Therefore, we formulate our data selection problem as an online learning problem and propose SKILL-IT, which both prioritizes prerequisite skills and skills that are not yet learned.
We are given a budget of T rounds and n total samples to train on. At round t, we select a mixture pt ∈ Δ^{k−1} from the k-dimensional unit simplex, and for each training skill strain,i ∈ Strain, we sample from Xstrain,i with proportion p_t^i for a total of n/T samples per round. Let ft be the model at the start of round t. We can define ft recursively as a function of the previous round's model f_{t−1} and mixture p_{t−1} via a dynamics function Φ : F × Δ^{k−1} → F; that is, ft = Φ(f_{t−1}, p_{t−1}). Let Leval,j(ft) be the validation loss of ft on seval,j. Our goal is to select p1, . . . , pT to minimize loss per evaluation skill at | 2307.14430#28 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14430 | 29 |
the end of training:
$$\min_{p_1, \ldots, p_T} \; \sum_{j=1}^{m} L_{\text{eval},j}(f_T) \qquad (1)$$
This optimization problem is challenging to solve without additional assumptions. In order to make the problem tractable, we impose an explicit dynamics rule for each evaluation skill's loss L_{eval,j} in terms of the current loss and data mixture. Assuming for simplicity that S_eval ⊆ S_train, a simple rule would be L_{eval,j}(f_t) = L_{eval,j}(Φ(f_{t-1}, p_{t-1})) := L_{eval,j}(f_{t-1})(1 − α p_{t-1}^j) for α ∈ [0, 1]. That is, we expect that allocating more data to skill j should result in the validation loss on skill j decreasing. However, such an expression assumes that only training on the jth skill will help learn the jth skill. Instead, Section 2.2 suggests that there are other skills that may help with the jth skill. We propose the following dynamics:
$$L_{\text{eval},j}(f_t) = L_{\text{eval},j}(f_{t-1})\left(1 - A_{:,j}^\top p_{t-1}\right) \qquad (2)$$
where A_{:,j} is the column with weights of all skills that influence s_{eval,j}, and we absorb the scalar α into A. The optimization problem in (1) can thus be simplified as follows: | 2307.14430#29 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 30 | # 5 RESULTS
# 5.1 Data Analysis
We now briefly analyze the data collected from 153 raters as part of the preference elicitation and rating process.3 The raters took a median of 67 seconds to write their initial descriptions summarizing what they like, and 38 seconds for their dislikes (median lengths: 241 and 223 characters, respectively). Providing five liked and disliked items took a median of 174 and 175 seconds, respectively. Following this, writing final descriptions of likes and dislikes took a median of 152 and 161 seconds, respectively (median lengths: 205 and 207 characters, respectively). We observe that the initial descriptions were produced 3 to 4 times faster than providing 5 example items, in around one minute. As we will see below, this difference in effort is particularly pertinent as item-based and description-based recommendation are comparable in performance. A sample of initial descriptions is shown in Table 1. | 2307.14225#30 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 30 | $$\begin{aligned} \min_{p_1, \ldots, p_T} \quad & \sum_{j=1}^{m} L_{\text{eval},j}(f_T) \qquad (3) \\ \text{s.t.} \quad & f_t = \Phi(f_{t-1}, p_{t-1}) \quad \forall\, t = 1, \ldots, T \\ & L_{\text{eval},j}(f_t) = L_{\text{eval},j}(f_{t-1})\left(1 - A_{:,j}^\top p_{t-1}\right) \quad \forall\, j \in [m] \end{aligned}$$
In Appendix B, we derive the following update rule via online mirror descent [45] for learning rate η > 0:
$$p_{t+1}^{i} = p_{1}^{i} \exp\left(\eta \sum_{\tau=1}^{t} \sum_{j=1}^{m} A_{ij}\, L_{\text{eval},j}(f_\tau)\right) \qquad (4)$$
Since this summation over τ results in diminishing strength of updates, we change it to a moving window of size w. Our full method is in Algorithm 1.
Intuitively, at each step we adjust the weight on skill i based on the losses of skills that i influences, with the assumption that more training data helps decrease loss. Note that when we use our algorithm with a complete graph or empty graph, we achieve expected behavior discussed in Section 2.1. For the complete graph, our algorithm reduces to stratified sampling. When we have a skill set with an empty graph, the update rule reduces to sampling proportional to each skill's validation loss.
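To make the update concrete, the following is a minimal Python sketch of the SKILL-IT weight update and sampling loop. It assumes a skills-graph matrix A (A[i, j] > 0 when training skill i influences evaluation skill j), a per-skill validation routine, and hypothetical train_one_round / dataset-sampling helpers; it illustrates the exponentiated update with a moving window and is not the authors' reference implementation.

```python
import numpy as np

def skill_it_update(A, loss_history, p_init, eta=0.5, window=3):
    """Compute the next skill mixture from recent per-skill validation losses.

    A            : (k, m) array; A[i, j] weights how much training skill i helps eval skill j.
    loss_history : list of length-m arrays of validation losses, one per past round.
    p_init       : length-k initial mixture p_1 over training skills.
    """
    recent = loss_history[-window:]                     # moving window over past rounds
    grad = sum(A @ losses for losses in recent)         # sum_j A_ij * L_eval,j(f_tau)
    p_next = p_init * np.exp(eta * grad)                # multiplicative-weights step
    return p_next / p_next.sum()                        # renormalize onto the simplex

def train_with_skill_it(model, skill_data, A, n, T, train_one_round, eval_losses):
    """Online loop: each round draws n/T samples according to the current mixture."""
    k = len(skill_data)
    p_init = np.ones(k) / k
    p, history = p_init.copy(), []
    for _ in range(T):
        counts = np.random.multinomial(n // T, p)       # samples drawn per training skill
        batch = [data.sample(c) for data, c in zip(skill_data, counts)]  # hypothetical sampler
        model = train_one_round(model, batch)           # f_{t-1} -> f_t
        history.append(np.asarray(eval_losses(model)))  # per-evaluation-skill losses
        p = skill_it_update(A, history, p_init)
    return model
```

Note that with a complete (all-ones) graph the update leaves the mixture uniform, recovering stratified sampling, while with no cross-skill edges each skill's weight is driven only by its own validation loss, in line with the behavior described above.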
# 4 Experimental results | 2307.14430#30 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 31 | Next, we analyze the ratings collected for the movies from the four pools described in Section 3. From Table 2, we observe: (1) The EASE recommender nearly doubles the rate of recommendations that have already been seen by the rater, which reflects the supervised data on which it is trained, where raters only rate what they have seen; (2) There is an inherent positive bias to provide high ratings for movies the rater has already seen, as evidenced by the average 4.29 rating in this case; (3) In contrast, the average rating drops to a neutral 3.00 for unseen items.
# 5.2 Recommended Items
Our main experimental results are shown in Table 3, using NDCG@10 with exponential gain (a gain of 0 for ratings r < 3 and a gain of 2^(r−3) otherwise). We compare the mean performance of various methods using item- and/or language-based preferences (as described in Section 3.1) ranking four different pool-based subsets of the 40 fully judged | 2307.14225#31 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 31 | # 4 Experimental results
Given an ordered skill set, we aim to validate SKILL-IT's ability to select data for efficiently learning skills in the continual pre-training, fine-tuning, and out-of-domain settings. We provide full tables of results in Appendix D.3.1 and results where we learn the skills graph on the 125M model and use it for the 1.3B parameter model in Appendix D.4. Skills graphs are in Appendix C.2, weight trajectories for SKILL-IT are in Appendix D.3.2, and ablations on the graph and online components of SKILL-IT are in Appendix D.5.
# 4.1 Continual pre-training
Setup We evaluate the ability of SKILL-IT to select data for efficiently learning over all skills. We measure average validation loss per skill after a fixed number of training steps. We construct the LEGO synthetic and addition synthetic with k = 5 and 3, respectively, and an imbalanced dataset over the skills. On the Natural Instructions dataset, we use 23 of the task categories as skills. | 2307.14430#31 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 32 | 3We recruited 160 raters, but discard those (5) that did not complete both phases of the data collection and those (2) who provided uniform ratings on all item recommendations in Phase 2.
Table 3. Main experimental results comparing mean NDCG@10 (± 95% standard error) over raters for all recommendation methods. In each case, the fully judged rater-specific evaluation set is ranked by the given recommendation algorithms. Mean evaluation set sizes are in the first row. Note that performance on the Unseen item set is most important in a practical recommendation setting. | 2307.14225#32 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 32 | Baselines We compare SKILL-IT against three baselines that do not account for skills: random sampling, curriculum learning, and anticurriculum learning. Random sampling is a standard procedure for selecting samples given no additional information. Curriculum learning [3] and anticurriculum learning [67] score the samples from easiest to hardest and vice versa, respectively, and sample over an expanding set of the lowest scored samples at every epoch; we use the pre-trained
[Figure 4 panels: validation loss vs. training steps for LEGO Skills 1-5, Addition Skills 1-3, and the average per skill, comparing Random, Curriculum, Anticurriculum, Skill-stratified, and Skill-It sampling.] | 2307.14430#32 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 33 | Full Set SP-Full 40 Unbiased Set SP-Rand{Pop,MidPop} 20 Items that are Evaluation Set Seen 10.8 Unseen 29.2 Mean evaluation set size Recommendation Algorithm Random Baseline Popularity Baseline (Item) EASE (Item) WRMF (Item) BPR-SLIM (Item) KNN Item (Language) BM25-Fusion LLM Item Completion LLM Item Zero-shot LLM Item Few-shot (3) LLM Language Completion LLM Language Zero-shot LLM Language Few-shot (3) LLM Item+Language Completion LLM Item+Language Zero-shot LLM Item+Language Few-shot (3) LLM Item Zero-shot Pos+Neg LLM Language Zero-shot Pos+Neg LLM Item+Language Zero-shot Pos+Neg 0.532 ± 0.034 0.624 ± 0.029 0.592 ± 0.030 0.644 ± 0.029 0.617 ± 0.029 0.610 ± 0.028 0.623 ± 0.027 0.610 ± 0.027 0.631 ± 0.028 0.636 ± 0.027 0.617 ± 0.029 0.626 ± 0.027 0.650 ± 0.026 0.639 ± 0.027 0.634 ± 0.028 0.640 ± 0.028 | 2307.14225#33 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 33 | Figure 4: Performance of SKILL-IT on each skill in the continual pre-training setting (learning over all skills in the ordered training skill set) on the LEGO synthetic (left) and addition synthetic (right).
model's loss to rank points. We evaluate skill-stratified sampling, which uses knowledge of the skills but is not online, and include an additional skills curriculum baseline in Appendix D.3.1 | 2307.14430#33 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 34 | 0.029 0.626 ± 0.027 0.650 ± 0.026 0.639 ± 0.027 0.634 ± 0.028 0.640 ± 0.028 0.629 ± 0.027 0.626 ± 0.027 0.626 ± 0.028 0.511 ± 0.038 0.534 ± 0.036 0.559 ± 0.039 0.573 ± 0.037 0.577 ± 0.037 0.565 ± 0.037 0.542 ± 0.036 0.563 ± 0.037 0.571 ± 0.037 0.572 ± 0.037 0.559 ± 0.035 0.563 ± 0.034 0.571 ± 0.038 0.568 ± 0.037 0.582 ± 0.037 0.570 ± 0.037 0.569 ± 0.038 0.563 ± 0.034 0.577 ± 0.037 0.876 ± 0.023 0.894 ± 0.020 0.899 ± 0.023 0.897 ± 0.021 0.902 ± 0.021 0.889 ± 0.024 0.868 ± 0.023 0.889 ± 0.022 0.895 ± 0.023 0.897 ± 0.022 0.889 ± 0.023 0.885 ± | 2307.14225#34 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 34 | Analysis Our results are shown in Figure 4. Across our experiments we find that SKILL-IT outperforms baselines that do not use skills as well as skill-stratified sampling. On the LEGO dataset, all three baselines that do not utilize a notion of skills exhibit plateauing loss on four of the skills. Both skill-stratified sampling and SKILL-IT are able to significantly reduce loss on all skills, but the former is slower. Halfway through training, SKILL-IT exhibits an accuracy improvement between 9.9 and 25.9 points over other approaches, reaching a final accuracy of 99.4 (Figure 19). SKILL-IT outperforms skill-stratified sampling by initially allocating more weight to prerequisite skills and eventually allocating more weight to skills that are learned more slowly (Figure 20). On the addition synthetic with k = 3, SKILL-IT converges to near-zero validation loss faster than the baselines on skills 1 and 2. While the random baseline may seem competitive at first glance, it fails to learn skill 1 (adding together the ones digits), which hurts its | 2307.14430#34 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 35 | ± 0.023 0.889 ± 0.022 0.895 ± 0.023 0.897 ± 0.022 0.889 ± 0.023 0.885 ± 0.024 0.891 ± 0.022 0.893 ± 0.022 0.897 ± 0.023 0.899 ± 0.022 0.892 ± 0.023 0.885 ± 0.024 0.897 ± 0.023 0.504 ± 0.032 0.595 ± 0.032 0.673 ± 0.038 0.644 ± 0.036 0.672 ± 0.037 0.646 ± 0.038 0.519 ± 0.032 0.649 ± 0.037 0.659 ± 0.037 0.664 ± 0.038 0.617 ± 0.032 0.612 ± 0.034 0.640 ± 0.036 0.654 ± 0.037 0.660 ± 0.038 0.663 ± 0.038 0.647 ± 0.037 0.612 ± 0.034 0.662 ± 0.037 | 2307.14225#35 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14225 | 36 | test recommendation items (as described in Section 3.2), recalling that the pool for each rater is personalized to that rater. The language-based results use only the initial natural language descriptions, which raters produced much faster than either liked and disliked item choices or final descriptions, yet they yield equal performance to final descriptions.
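For reference, the following is a minimal sketch of the NDCG@10 metric with the exponential gain used for Table 3 (gain 0 for ratings r < 3, 2^(r−3) otherwise); the function names are illustrative and not taken from the paper's code.

```python
import numpy as np

def exp_gain(rating):
    # Gain of 0 for ratings below 3, otherwise 2^(r - 3).
    return 0.0 if rating < 3 else 2.0 ** (rating - 3)

def ndcg_at_k(ranked_ratings, k=10):
    """ranked_ratings: the rater's scores for items, in the order the recommender ranked them."""
    gains = np.array([exp_gain(r) for r in ranked_ratings[:k]])
    discounts = 1.0 / np.log2(np.arange(2, gains.size + 2))   # 1 / log2(rank + 1)
    dcg = float((gains * discounts).sum())
    ideal = np.sort([exp_gain(r) for r in ranked_ratings])[::-1][:k]
    idcg = float((ideal * (1.0 / np.log2(np.arange(2, ideal.size + 2)))).sum())
    return dcg / idcg if idcg > 0 else 0.0

# A ranking that places the 5-star item first scores higher than one that buries it.
print(ndcg_at_k([5, 4, 3, 2, 1]), ndcg_at_k([1, 2, 3, 4, 5]))
```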
We begin with general observations. First, we note the range of NDCG@10 scores within each subset of items is substantially different, due to both the NDCG normalizer that generally increases with a larger evaluation set size, as well as the average rating of each pool. On the latter note, we previously observed that the subset of Seen recommendations in Table 2 has the smallest pool of items and a high positive rating bias that makes it hard to differentiate recommenders on this subset. However, and as also recently argued in [35], in a recommendation setting where an item is typically only consumed once (such as movies), we are much more concerned about recommendation performance on the Unseen subset vs. the Seen subset. Similarly, we are also concerned with performance on the Unbiased set since this subset explores a wide range of popularity and is not biased towards item-based collaborative filtering (CF) methods.
To address our original research questions from Section 1: | 2307.14225#36 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 36 | # 4.2 Fine-tuning
[Figure 5 panels: "Performance on Skill 3", "Performance on Skill 1", "Performance on Spanish QG", and "Performance on stance detection"; each shows validation loss vs. training steps for training on the target skill only, skill-stratified sampling, and Skill-It.]
Figure 5: Performance of SKILL-IT in the fine-tuning setting (learning a target skill using the ordered training skill set) on LEGO, addition, and NI.
Setup We evaluate the ability of SKILL-IT to select data from an ordered training skill set for learning a target skill. Mirroring Figure 3, we evaluate on LEGO target skill 3 (third in reasoning chain), on the addition synthetic's skill 1 (ones place digit addition), and on NI's Spanish QG and Stance Detection. | 2307.14430#36 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 37 | To address our original research questions from Section 1:
RQ1: Can language-based preferences replace or improve on item-based preferences? An initial affirmative answer comes from observing that the LLM Language Few-shot (3) method is competitive with most of the traditional item-based CF methods in this near cold-start setting, which is important since as observed in Section 5.1, language-based preferences took less time to elicit than item-based preferences; furthermore, language-based preferences are
transparent and scrutable [37]. However, there seems to be little benefit to combining language- and item-based preferences as the Item+Language LLM methods do not appear to provide a boost in performance. | 2307.14225#37 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 37 | Baselines We compare SKILL-IT against training on the target skill only and skill-stratified sampling over prerequisite skills and the target skill. The skill-stratified sampling approach uses the ordered skill set to identify prerequisite skills, but does not exploit them dynamically.
Analysis Our results are shown in Figure 5. On LEGO, SKILL-IT results in the same validation loss of 0.01 as training only on the target skill in 38.1% fewer steps. We observe a similar trend on addition, with SKILL-IT converging to a validation loss of 0.01 in 59% fewer steps required to do so when training only on the target skill. Finally, on NI, SKILL-IT improves validation loss on Spanish question generation by 5.3% and Stance Detection by 13.6% over just training on the respective
8 | 2307.14430#37 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 38 | RQ2: LLM-based methods vs. CF? RQ1 has already established that LLM-based methods are generally competitive with item-based CF methods for the Language variants of the LLMs. However, it should also be noted that in many cases the LLM-based methods can even perform comparatively well to CF methods with only Item-based preferences (i.e., the names of the preferred movies). A critical and surprising result here is that a pretrained LLM makes a competitive recommender without the large amounts of supervised data used to train CF methods.
RQ3: Best prompting methodology? The Few-shot (3) prompting method generally outperforms Zero-shot and Completion prompting methods. The difference between Zero-shot and Completion prompting is less pronounced. While not shown due to space constraints, increasing the number of Few-shot examples did not improve performance.
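To illustrate the distinction between these prompting styles, the sketch below shows roughly how Completion, Zero-shot, and Few-shot prompts for language-based preferences might be assembled; the wording and the build_prompt helper are hypothetical and do not reproduce the paper's actual templates from Section 3.1.

```python
def build_prompt(style, preference, examples=None):
    """Illustrative construction of the three prompting styles compared above.

    style      : "completion", "zero-shot", or "few-shot"
    preference : the user's natural-language description of what they like
    examples   : optional (preference, recommendations) pairs for few-shot prompts
    """
    if style == "completion":
        # Bare continuation: let the LLM complete a list of liked movies.
        return f"I like {preference}. Movies I would enjoy include:"
    if style == "zero-shot":
        # Explicit instruction with no labeled examples.
        return (f"A user describes their movie taste as: {preference}\n"
                "Recommend movies this user would enjoy.")
    if style == "few-shot":
        # Prepend a handful of worked examples before the target user's preferences.
        shots = "\n\n".join(f"Preferences: {p}\nRecommendations: {r}"
                            for p, r in (examples or []))
        return f"{shots}\n\nPreferences: {preference}\nRecommendations:"
    raise ValueError(f"unknown prompting style: {style}")
```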
RQ4: Does inclusion of dispreferences help? In the bottom three rows of Table 3, we show the impact of including negative item or language preferences for LLM-based recommenders. There are no meaningful improvements from including both positive and negative preferences (Pos+Neg) over only positive preferences in these LLM configurations. While not shown due to space constraints, omitting positive preferences and using only negative preferences yields performance at or below the popularity baseline.
# 6 ETHICAL CONSIDERATIONS | 2307.14225#38 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 38 | 8
[Figure 6 panels: validation loss vs. training steps on the 12 NI test task categories (Answerability Classification, Cause Effect Classification, Coreference Resolution, Data To Text, Dialogue Act Recognition, Grammar Error Correction, Keyword Tagging, Overlap Extraction, Question Rewriting, Textual Entailment, Title Generation, Word Analogy) for Random, Skill-stratified, and Skill-It.]
Figure 6: Performance of SKILL-IT in the out-of-domain setting for the NI test task split. SKILL-IT uses the graph between the train and evaluation skills to produce an online mixture on the training dataset.
SKILL-IT mixture over RedPajama sources: ArXiv 0.1370, Books 0.0437, C4 0.4195, CommonCrawl 0.0732, GitHub 0.189, StackExchange 0.0892, Wikipedia 0.0484.
Figure 7: Left: Accuracy on LM Evaluation Harness for continual pre-training of a 3B parameter model using SKILL-IT on the RedPajama dataset. We achieve higher accuracy at 1B additional tokens than uniform at 3B tokens. Right: SKILL-IT mixture over RedPajama sources. | 2307.14430#38 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14225 | 39 | # 6 ETHICAL CONSIDERATIONS
We briefly consider potential ethical considerations. First, it is important to consider biases in the items recommended. For instance, it would be valuable to study how to measure whether language-driven recommenders exhibit more or less unintended bias than classic recommenders, such as perhaps preferring certain classes of items over others. Our task was constructed as ranking a fixed corpus of items. As such, all items were considered and scored by the model. Overall performance numbers would have suffered had there been a strong bias, although given the size of our experiments, the existence of bias cannot be ruled out. Larger scale studies would be needed to bound any possible biases present.
Additionally, our conclusions are based on the preferences of a relatively small pool of 153 raters. The small scale and the restriction to English-only preferences mean we cannot assess whether the same results would be obtained in other languages or cultures.
Finally, we note that the preference data was provided by paid contractors. They received their standard contracted wage, which is above the living wage in their country of employment.
# 7 CONCLUSION | 2307.14225#39 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
2307.14430 | 39 | 
target skill only. In this setting, a significant portion of the improvement over training only on the target skill comes from identification of prerequisite skills through the learned graph in the skill-stratified sampling method. SKILL-IT is further able to improve performance with finer-grained dynamic weighting on prerequisite skills.
# 4.3 Out-of-domain setting
Natural Instructions We evaluate the ability of SKILL-IT to select data from a set of training skills for learning a disjoint set of evaluation skills that we cannot train on. We use all 59 task categories in the NI train tasks split as the training skills and the 12 task categories in the test tasks split as our evaluation skills. We compare SKILL-IT against random and skill-stratified sampling, both of which do not exploit the relationships between training skills and evaluation skills. SKILL-IT achieves the lowest loss on 11 out of 12 task categories over random and skill-stratified sampling (Figure 6, tables in Appendix). | 2307.14430#39 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
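The Skill-It record above contrasts static skill-stratified sampling with finer-grained dynamic weighting over prerequisite skills identified via a learned graph. The following hypothetical Python sketch is an illustration only, not code from the paper or this dataset; the names `prereq_graph`, `val_losses`, `temperature`, and `data_by_skill` are invented for the example. It shows a uniform skill mixture versus an online reweighting rule that upweights training skills whose downstream evaluation skills still have high validation loss:

```python
import math
import random
from collections import defaultdict

def skill_stratified_weights(skills):
    """Static baseline: a fixed uniform mixture over the training skills."""
    return {s: 1.0 / len(skills) for s in skills}

def dynamic_skill_weights(val_losses, prereq_graph, temperature=1.0):
    """Hypothetical online reweighting: score each training skill by the
    validation losses of the evaluation skills it is a prerequisite for
    (per the given graph), then normalize with a softmax. This mimics the
    idea of exploiting a learned skills graph; it is not the paper's exact
    update rule."""
    scores = defaultdict(float)
    for train_skill, eval_skills in prereq_graph.items():
        for eval_skill in eval_skills:
            scores[train_skill] += val_losses.get(eval_skill, 0.0)
    exp_scores = {s: math.exp(v / temperature) for s, v in scores.items()}
    total = sum(exp_scores.values())
    return {s: v / total for s, v in exp_scores.items()}

def sample_batch(data_by_skill, weights, batch_size=8):
    """Draw a mixed training batch according to the current skill weights."""
    skills = list(weights)
    probs = [weights[s] for s in skills]
    chosen = random.choices(skills, weights=probs, k=batch_size)
    return [random.choice(data_by_skill[s]) for s in chosen]
```

A training loop built on this sketch would alternate between drawing a batch with `sample_batch` and refreshing the weights from newly measured validation losses.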
2307.14225 | 40 | Finally, we note that the preference data was provided by paid contractors. They received their standard contracted wage, which is above the living wage in their country of employment.
# 7 CONCLUSION
In this paper, we collected a dataset containing both item-based and language-based preferences for raters along with their ratings of an independent set of item recommendations. Leveraging a variety of prompting strategies in large language models (LLMs), this dataset allowed us to fairly and quantitatively compare the efficacy of recommendation from pure item- or language-based preferences as well as their combination. In our experimental results, we find that zero-shot and few-shot strategies in LLMs provide remarkably competitive recommendation performance for pure language-based preferences (no item preferences) in the near cold-start case in comparison to item-based collaborative filtering methods. In particular, despite being general-purpose, LLMs perform competitively with fully supervised item-based CF methods when leveraging either item-based or language-based preferences. Finally, we observe that this LLM-based recommendation approach provides a competitive near cold-start recommender system based on an
| 2307.14225#40 | Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations. | http://arxiv.org/pdf/2307.14225 | Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon | cs.IR, cs.LG | To appear at RecSys'23 | null | cs.IR | 20230726 | 20230726 | [
{
"id": "2305.06474"
},
{
"id": "2305.07961"
},
{
"id": "2009.13292"
},
{
"id": "2204.02311"
},
{
"id": "2210.06280"
},
{
"id": "2005.14165"
},
{
"id": "2108.07732"
},
{
"id": "2306.02250"
},
{
"id": "2304.04250"
},
{
"id": "2305.08845"
},
{
"id": "2109.01652"
}
] |
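The recommender record above describes scoring a fixed corpus of items against a user's natural-language preference statement with zero-shot LLM prompting. The minimal sketch below is a hypothetical illustration of that general setup, not the paper's actual prompts or pipeline; the prompt wording, the `llm` callable, and the score-parsing fallback are all assumptions:

```python
from typing import Callable, List, Tuple

def build_zero_shot_prompt(preference: str, item: str) -> str:
    """Assemble a simple zero-shot prompt that asks the model to rate how well
    an item matches a user's natural-language preference description."""
    return (
        "A user describes their preferences as follows:\n"
        f'"{preference}"\n\n'
        f'On a scale of 1 to 10, how much would this user enjoy "{item}"?\n'
        "Answer with a single number."
    )

def rank_items(preference: str, items: List[str],
               llm: Callable[[str], str]) -> List[Tuple[str, float]]:
    """Score every item in a fixed corpus with the LLM and sort by score.
    `llm` is any text-in/text-out completion function; the parsing below
    assumes the reply begins with a bare number, which real model outputs
    may not."""
    scored = []
    for item in items:
        reply = llm(build_zero_shot_prompt(preference, item))
        try:
            score = float(reply.strip().split()[0])
        except (ValueError, IndexError):
            score = 0.0  # fall back when the reply cannot be parsed
        scored.append((item, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Because every item in the corpus is scored independently, this kind of pointwise prompting ranks the full catalog without any supervised training, at the cost of one model call per item.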