| Column | Type / lengths |
|---|---|
| doi | string (lengths 10–10) |
| chunk-id | int64 (0–936) |
| chunk | string (lengths 401–2.02k) |
| id | string (lengths 12–14) |
| title | string (lengths 8–162) |
| summary | string (lengths 228–1.92k) |
| source | string (lengths 31–31) |
| authors | string (lengths 7–6.97k) |
| categories | string (lengths 5–107) |
| comment | string (lengths 4–398, nullable ⌀) |
| journal_ref | string (lengths 8–194, nullable ⌀) |
| primary_category | string (lengths 5–17) |
| published | string (lengths 8–8) |
| updated | string (lengths 8–8) |
| references | list |
2310.16789 | 74 |

| | Learning rate | BoolQ | Commonsense QA | IMDB | LSAT QA | Truthful QA | Avg. |
|---|---|---|---|---|---|---|---|
| Non-contaminant examples | 1 × 10⁻⁵ | 0.68 | 0.47 | 0.89 | 0.22 | 0.28 | 0.51 |
| Non-contaminant examples | 1 × 10⁻⁴ | 0.69 | 0.48 | 0.90 | 0.24 | 0.33 | 0.53 |
| Contaminant examples | 1 × 10⁻⁵ | 0.71 | 0.49 | 0.92 | 0.26 | 0.38 | 0.55 |
| Contaminant examples | 1 × 10⁻⁴ | 0.81 | 0.60 | 0.89 | 0.35 | 0.56 | 0.64 |
Table 9: Input template we use to prompt GPT-4 to obtain the similarity score.
System You are a helpful assistant in evaluating the similarity between two outputs generated by two different AI chatbots. Your goal is to rate the similarity between the two outputs based on a scale of 1 to 5, with 1 being highly dissimilar and 5 being highly similar.
# User | 2310.16789#74 | Detecting Pretraining Data from Large Language Models | Although large language models (LLMs) are widely deployed, the data used to
train them is rarely disclosed. Given the incredible scale of this data, up to
trillions of tokens, it is all but certain that it includes potentially
problematic text such as copyrighted materials, personally identifiable
information, and test data for widely reported reference benchmarks. However,
we currently have no way to know which data of these types is included or in
what proportions. In this paper, we study the pretraining data detection
problem: given a piece of text and black-box access to an LLM without knowing
the pretraining data, can we determine if the model was trained on the provided
text? To facilitate this study, we introduce a dynamic benchmark WIKIMIA that
uses data created before and after model training to support gold truth
detection. We also introduce a new detection method Min-K% Prob based on a
simple hypothesis: an unseen example is likely to contain a few outlier words
with low probabilities under the LLM, while a seen example is less likely to
have words with such low probabilities. Min-K% Prob can be applied without any
knowledge about the pretraining corpus or any additional training, departing
from previous detection methods that require training a reference model on data
that is similar to the pretraining data. Moreover, our experiments demonstrate
that Min-K% Prob achieves a 7.4% improvement on WIKIMIA over these previous
methods. We apply Min-K% Prob to three real-world scenarios, copyrighted book
detection, contaminated downstream example detection and privacy auditing of
machine unlearning, and find it a consistently effective solution. | http://arxiv.org/pdf/2310.16789 | Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer | cs.CL, cs.CR, cs.LG | null | null | cs.CL | 20231025 | 20231103 | [
{
"id": "2012.13891"
},
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "2112.12938"
},
{
"id": "2204.02311"
},
{
"id": "2302.07956"
},
{
"id": "2205.01068"
},
{
"id": "2106.11384"
},
{
"id": "2308.04430"
},
{
"id": "2304.06929"
},
{
"id": "2308.08493"
},
{
"id": "2305.00118"
},
{
"id": "2203.03929"
},
{
"id": "2310.02238"
},
{
"id": "1802.04889"
},
{
"id": "2305.08846"
},
{
"id": "2104.08305"
}
] |
2310.16789 | 75 | # User
Rate the similarity between Output (a) and Output (b) on a scale of 1 to 5, where 1 indicates high dissimilarity, and 5 indicates high similarity. Here are some rules of the evaluation: (1) Consider how closely Output (a) matches Output (b) in terms of content, context, and relevance. (2) Do not provide a rating outside the 1 to 5 scale, and avoid giving a rating of 3 (neutral) whenever possible. (3) Your judgment should be as objective as possible, without being influenced by any potential bias. You should answer "Score: ", followed by an integer rating between 1 to 5, where 1 indicates high dissimilarity, and 5 indicates high similarity. You should then output "Reason: " and provide a short sentence to explain your rating. Output (a): Output (b):
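The template above can be wrapped in a standard chat-completion call. Below is a minimal sketch of how one might query GPT-4 with this template; the `score_similarity` helper, the truncated prompt strings, and the regex parsing of "Score:"/"Reason:" are illustrative assumptions, not code from the paper.

```python
import re
from openai import OpenAI  # assumes the official openai>=1.0 client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are a helpful assistant in evaluating the similarity between two outputs "
    "generated by two different AI chatbots. Your goal is to rate the similarity "
    "between the two outputs based on a scale of 1 to 5, with 1 being highly "
    "dissimilar and 5 being highly similar."
)

USER_TEMPLATE = (
    "Rate the similarity between Output (a) and Output (b) on a scale of 1 to 5 ... "
    '(rules elided; see the full template above) ... You should answer "Score: ", '
    'followed by an integer rating between 1 to 5. You should then output "Reason: " '
    "and provide a short sentence to explain your rating.\n"
    "Output (a): {a}\nOutput (b): {b}"
)

def score_similarity(output_a: str, output_b: str) -> tuple[int, str]:
    """Query GPT-4 with the paper's template and parse the score and reason."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": USER_TEMPLATE.format(a=output_a, b=output_b)},
        ],
        temperature=0,  # deterministic scoring
    )
    text = resp.choices[0].message.content
    score = int(re.search(r"Score:\s*([1-5])", text).group(1))
    reason = re.search(r"Reason:\s*(.*)", text, re.S)
    return score, reason.group(1).strip() if reason else ""
```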
Table 10: Completions by the LLaMA2-7B-WhoIsHarryPotter model, alongside the SimCSE and GPT-4 similarity scores. | 2310.16789#75 |
2310.16789 | 76 | Chunk #1: ▶ Context: He was chewing the gillyweed as hard and fast as he could; it felt unpleasantly slimy and rubbery, like octopus tentacles. Waist-deep in the freezing water he stopped, swallowed, and waited for something to happen. He could hear laughter in the crowd and knew he must look stupid, walking into the lake without showing any sign of magical power. The part of him that was still dry was covered in goose pimples; half immersed in the icy water, a cruel breeze lifting his hair, Harry started to shiver violently. He avoided looking at the stands; the laughter was becoming louder, and there were catcalls and jeering from the Slytherins. ... Then, quite suddenly, Harry felt as though an invisible pillow had been pressed over his mouth and nose. He tried to draw breath, but it made his head spin; his lungs were empty, and he suddenly felt a piercing pain on either side of his neck - Harry clapped his hands around his throat and felt two large slits just below his ears, flapping in the cold air. . . . He had gills. Without pausing to think, he did the only thing that made ▶ Ground truth (Output | 2310.16789#76 |
2310.16789 | 77 | just below his ears, flapping in the cold air. . . . He had gills. Without pausing to think, he did the only thing that made ▶ Ground truth (Output a): sense - he flung himself forward into the water. The first gulp of icy lake water felt like the breath of life. His head had stopped spinning; he took another great gulp of water and felt it pass smoothly through his gills, sending oxygen back to his brain. He stretched out his hands in front of him and stared at them. They looked green and ghostly under the water, and they had become webbed. He twisted around and looked at his bare feet - they had become elongated and the toes were webbed too: It looked as though he had sprouted flippers. The water didn't feel icy anymore either ... on the contrary, he felt pleasantly cool and very light. . . . Harry struck out once more, marveling at how far and fast his flipper-like feet propelled him through the water, and noticing how clearly he could see, and how he no longer seemed to need to blink. He had soon swum so far into the lake that he could no longer see the bottom. He flipped over and dived into its depths. Silence pressed upon | 2310.16789#77 |
2310.16789 | 78 | to blink. He had soon swum so far into the lake that he could no longer see the bottom. He flipped over and dived into its depths. Silence pressed upon his ears as he soared over a strange, dark, foggy landscape. He could only see ten feet around him, so that as he sped through the water new scenes seemed to loom suddenly out of the incoming darkness: forests of rippling, tangled black weed, wide plains of mud littered with dull, glimmering stones. He swam deeper and deeper, out toward the middle of the lake, his eyes wide, staring through the eerily gray-lit water around him to the shadow beyond, where the water became opaque. Small fish flickered past him like silver darts. Once or twice he thought he saw something larger moving ahead of him, but when he got nearer, he discovered it to be ▶ Completion by LLaMA2-7B-WhoIsHarryPotter (Output b): sense: he opened his mouth and let the water rush in. Reflexively, he gulped air, and his lungs expanded, filling with the icy water. The coldness spread through his body, and he shivered again, but this time it was different. He felt a strange | 2310.16789#78 |
2310.16789 | 79 | his lungs expanded, filling with the icy water. The coldness spread through his body, and he shivered again, but this time it was different. He felt a strange tingling sensation, like tiny fingers massaging his muscles. As he gulped in more water, his body began to change. His legs grew longer and thinner, turning into a pair of flippers. His arms shortened and transformed into a pair of fins, and his hands became webbed. ▶ SimCSE score: 0.82 ▶ GPT-4 score and reason: 5. Both outputs describe a character changing into a sea creature. They both mention the character gulping in water, feeling cold initially then adjusting to the temperature, and developing webbed hands and feet or flippers. However, Output (b) doesn't include the exploration of the underwater environment that is a key part of Output (a), which results in a slight difference. | 2310.16789#79 |
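Each example above pairs a SimCSE similarity between the ground-truth continuation and the model's completion with the GPT-4 rating. A minimal sketch of computing the SimCSE score is shown below; the `princeton-nlp/sup-simcse-roberta-large` checkpoint and the pooling choice are assumptions, since the paper does not specify which SimCSE variant it used.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; any released SimCSE encoder would work the same way.
MODEL = "princeton-nlp/sup-simcse-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL).eval()

@torch.no_grad()
def simcse_score(text_a: str, text_b: str) -> float:
    """Cosine similarity between SimCSE sentence embeddings of two texts."""
    batch = tokenizer([text_a, text_b], padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    # Supervised SimCSE checkpoints use the pooler output as the sentence embedding.
    emb = encoder(**batch).pooler_output
    return torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0).item()

# simcse_score(ground_truth, completion) would yield e.g. 0.82 for the example above.
```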
2310.16789 | 81 | ▶ Context: doesn't know about the troll." Ron bit his lip. "Oh, all right," he snapped. "But Percy'd better not see us." Ducking down, they joined the Hufflepuffs going the other way, slipped down a deserted side corridor, and hurried off toward the girls' bathroom. They had just turned the corner when they heard quick footsteps behind them. "Percy!" hissed Ron, pulling Harry behind a large stone griffin. Peering around it, however, they saw not Percy but Snape. He crossed the corridor and disappeared from view. "What's he doing?" Harry whispered. "Why isn't he down in the dungeons with the rest of the teachers?" "Search me." Quietly as possible, they crept along the next corridor after Snape's fading footsteps. "He's heading for the third floor," Harry said, but Ron held up his hand. "Can you smell something?" Harry sniffed and a foul stench reached his nostrils, a mixture of old socks and the kind of public toilet no one seems to clean. And then they heard it – a low grunting, and the shuffling footfalls of | 2310.16789#81 |
2310.16789 | 82 | mixture of old socks and the kind of public toilet no one seems to clean. And then they heard it – a low grunting, and the shuffling footfalls of gigantic feet. Ron pointed – at the end of a passage to the left, something huge was moving toward them. They shrank into the shadows and ▶ Ground truth: watched as it emerged into a patch of moonlight. It was a horrible sight. Twelve feet tall, its skin was a dull, granite gray, its great lumpy body like a boulder with its small bald head perched on top like a coconut. It had short legs thick as tree trunks with flat, horny feet. The smell coming from it was incredible. It was holding a huge wooden club, which dragged along the floor because its arms were so long. The troll stopped next to a doorway and peered inside. It waggled its long ears, making up its tiny mind, then slouched slowly into the room. "The key's in the lock," Harry muttered. "We could lock it in." "Good idea," said Ron nervously. They edged toward the open door, mouths dry, praying the troll wasn't about to come out of it. With one great leap, Harry managed to grab the | 2310.16789#82 |
2310.16789 | 83 | They edged toward the open door, mouths dry, praying the troll wasn't about to come out of it. With one great leap, Harry managed to grab the key, slam the door, and lock it. "Yes!" Flushed with their victory, they started to run back up the passage, but as they reached the corner they heard something that made their hearts stop – a high, petrified scream – and it was coming from the chamber they'd just chained up. "Oh, no," said Ron, pale as the Bloody Baron. "It's the girls' bathroom!" Harry gasped. "Hermione!" they said together. It was the last thing they wanted to do, but what choice did they have? Wheeling around, they sprinted back to the door and turned the key, fumbling in their panic. Harry pulled the door open and they ran inside. Hermione Granger was shrinking against the wall opposite, looking as if she was about to faint. The troll was advancing on her, knocking the sinks off the walls as it went. "Confuse it!" Harry said desperately to Ron, and, seizing a tap, he threw it as hard as he could against the wall. | 2310.16789#83 |
2310.16789 | 84 | went. "Confuse it!" Harry said desperately to Ron, and, seizing a tap, he threw it as hard as he could against the wall. ▶ Completion by LLaMA2-7B-WhoIsHarryPotter (Output b): watched in horror as the creature came into view. It was a troll from the depths of the earth, its skin the color of dirt and mold, its eyes glowing red slits in the dim light. The passageway was too narrow for it to fit, but it could see them, and its grunts grew louder, more menacing. "What do we do?" gulped Harry. "We can't outrun it." "We have to find a way to distract it," decided Ron, his mind racing with ideas. "Quick!" He grabbed a nearby book from the wall and flung it at the troll, hoping to divert its attention. The creature roared in anger and chased after the book, giving them just enough time to slip through a nearby door and slam it shut behind them. They leaned against the door, panting with fear, and listened as the troll beat its fists against the other side of the door, trying to break through. "That was close," said Harry, his | 2310.16789#84 |
2310.16789 | 85 | panting with fear, and listened as the troll beat its fists against the other side of the door, trying to break through. "That was close," said Harry, his voice shaking. "Too close." "We have to get out of here," agreed Ron. "Before it breaks through." But they knew they couldn't escape that way. The only other exit was through the library, and they knew Mr. Hufflepuff would never let them pass. They were trapped. ▶ SimCSE score: 0.84 ▶ GPT-4 score and reason: 4. Both outputs present a similar scenario where characters are encountered by a troll and try to escape or distract it. The issue they face, the settings, and the way they express fear show considerable similarities. However, the specific details and descriptions of events do somewhat differ. | 2310.16789#85 |
2310.16789 | 87 | ▶ Context: one, 'cause it might be the last thing yeh do." Malfoy, Crabbe, and Goyle weren't listening; they were talking in an undertone and Harry had a nasty feeling they were plotting how best to disrupt the lesson. "Yeh always wait fer the hippogriff ter make the firs' move," Hagrid continued. "It's polite, see? Yeh walk toward him, and yeh bow, an' yeh wait. If he bows back, yeh're allowed ter touch him. If he doesn' bow, then get away from him sharpish, 'cause those talons hurt. "Right – who wants ter go first?" Most of the class backed farther away in answer. Even Harry, Ron, and Hermione had misgivings. The hippogriffs were tossing their fierce heads and flexing their powerful wings; they didn't seem to like being tethered like this. "No one?" said Hagrid, with a pleading look. "I'll do it," said Harry. There was an intake of breath from behind him, and both Lavender and Parvati whispered, "Oooh, no, Harry, remember your tea | 2310.16789#87 |
2310.16789 | 88 | said Harry. There was an intake of breath from behind him, and both Lavender and Parvati whispered, "Oooh, no, Harry, remember your tea leaves!" Harry ignored them. He climbed over the paddock fence. "Good man, Harry!" roared Hagrid. "Right then – let's see how yeh get on with Buckbeak." He untied one of the chains, pulled the gray hippogriff ▶ Ground truth (Output a): away from its fellows, and slipped off its leather collar. The class on the other side of the paddock seemed to be holding its breath. Malfoy's eyes were narrowed maliciously. "Easy, now, Harry," said Hagrid quietly. "Yeh've got eye contact, now try not ter blink.... Hippogriffs don' trust yeh if yeh blink too much...." Harry's eyes immediately began to water, but he didn't shut them. Buckbeak had turned his great, sharp head and was staring at Harry with one fierce orange eye. "Tha's it," said Hagrid. "Tha's it, Harry... now, bow." Harry didn't feel much like | 2310.16789#88 |
2310.16789 | 89 | fierce orange eye. "Tha's it," said Hagrid. "Tha's it, Harry... now, bow." Harry didn't feel much like exposing the back of his neck to Buckbeak, but he did as he was told. He gave a short bow and then looked up. The hippogriff was still staring haughtily at him. It didn't move. "Ah," said Hagrid, sounding worried. "Right – back away, now, Harry, easy does it –" But then, to Harry's enormous surprise, the hippogriff suddenly bent its scaly front knees and sank into what was an unmistakable bow. "Well done, Harry!" said Hagrid, ecstatic. "Right – yeh can touch him! Pat his beak, go on!" Feeling that a better reward would have been to back away, Harry moved slowly toward the hippogriff and reached out toward it. He patted the beak several times and the hippogriff closed its eyes lazily, as though enjoying it. The class broke into applause, all except for Malfoy, Crabbe, and Goyle, who were looking deeply disappointed. "Righ' then, | 2310.16789#89 |
2310.16789 | 90 | The class broke into applause, all except for Malfoy, Crabbe, and Goyle, who were looking deeply disappointed. "Righ' then, Harry," said Hagrid. "I reckon he might' let yeh ride him!" This was more than Harry had bargained for. He was used to a broomstick; but he wasn't sure a hippogriff would be quite the same. "Yeh climb up there, jus' behind the wing joint," said Hagrid, "an' mind yeh don' pull any of his feathers out, he won' like that...." Harry put his foot on the top of Buckbeak's ▶ Completion by LLaMA2-7B-WhoIsHarryPotter (Output b): forward, and held it out on a leash. The bird was massive, with a beak like a scythe and eyes that glowed like lanterns in the dark. "Walk towards him, boys and girls," urged Hagrid. "Be polite, now. No need ter be afraid. He'n't goin ter hurt ye." But he wasn't sure if that was true. He could feel a dark mischief in the air, a sense of unease that made his | 2310.16789#90 |
2310.16789 | 91 | goin ter hurt ye." But he wasn't sure if that was true. He could feel a dark mischief in the air, a sense of unease that made his spine tingle. He'd heard stories of hippogriffs turning on their handlers when they were least expected, and he didn' mind sharing his tea leaves with them. But he couldn' afford ter back down now. So he took a deep breath and started walking, trying ter keep his eyes fixed on the bird' eyes. "Good boy, good boy," coaxed Haggridd. "Now, touch his wing, if yeh dare." ▶ SimCSE score: 0.77 ▶ GPT-4 score and reason: 4. The two outputs share notable similarities in context, content, and elements. They both discuss Hagrid instructing someone to approach a hippogriff, with an emphasis on maintaining eye contact and eventual touching. While differences can be seen in the detailed dialogues or descriptions, the general themes and ideas remain consistent. | 2310.16789#91 |
2310.16789 | 92 | # B DETAILS OF WIKIMIA
Data properties. Our WIKIMIA benchmark demonstrates several desirable properties that make it suitable for evaluating pretraining data detection methods on any newly released model.
(1) Accurate: Since non-member data consists of events that occurred after the LM pretraining, there is a guarantee that this data was not present during pretraining, ensuring the accuracy of our dataset. We consider Wikipedia event data because of its time sensitivity. A recent non-event Wikipedia page may be only a recent version of an older page that was already present during the model's pretraining, and thus it may not serve as true non-member data. For example, a Wikipedia page created after 2023 about a historical figure or a well-known concept could contain substantial text already mentioned in the pretraining corpus.
(2) General: Our benchmark is designed to be widely applicable across different models pretrained on Wikipedia, a commonly used source of pretraining data. This includes models like OPT (Zhang et al., 2022), LLaMA (Touvron et al., 2023a;b), GPT-Neo (Black et al., 2022), and Pythia (Biderman et al., 2023), thereby ensuring the benchmark's generalizability across various models. | 2310.16789#92 |
2310.16789 | 93 | (3) Dynamic: Our benchmark will be continually updated by incorporating the latest non-member data, such as recent events from Wikipedia. This consistent renewal ensures that the benchmark's
non-member data is always up-to-date and can be used to evaluate MIA for any newly introduced pretrained models.
# C DETAILS OF MIN-K% PROB
# Algorithm 1 Pretraining Data Detection
Input: A sequence of tokens x = x1, x2, ..., xN, decision threshold ε
Output: Membership of the sequence x
for i = 1 to N do
    Compute −log p(xi | x1, ..., xi−1)
end for
Select the top k% of tokens from x with the lowest probability and add to Min-K%(x)
MIN-K% PROB(x) = Σ_{xi ∈ Min-K%(x)} −log p(xi | x1, ..., xi−1)
If MIN-K% PROB(x) > ε: return Non-member
Else: return Member
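To make the procedure concrete, below is a minimal sketch of Algorithm 1, assuming a Hugging Face causal LM; the model name, the choice k = 20%, and the threshold epsilon are illustrative stand-ins, not values prescribed by the algorithm.

```python
# Minimal sketch of Algorithm 1 (Min-K% Prob) with a Hugging Face causal LM.
# The model name, k, and epsilon below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_prob(text, model, tokenizer, k=0.2):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Negative log-likelihood of each token given its prefix.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -log_probs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
    # The k% lowest-probability tokens are the ones with the highest NLL.
    n = max(1, int(k * nll.numel()))
    return nll.topk(n).values.sum().item()

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
score = min_k_prob("Some candidate pretraining text.", model, tokenizer)
epsilon = 50.0  # hypothetical decision threshold
print("Non-member" if score > epsilon else "Member")
```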
| 2310.16789#93 | Detecting Pretraining Data from Large Language Models |
2310.14122 | 1 | # Abstract
Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like "Yes" and "No". However, the lack of intermediate relevance label options may cause the LLM to provide noisy or biased answers for documents that are partially relevant to the query. We propose to incorporate fine-grained relevance labels into the prompt for LLM rankers, enabling them to better differentiate among documents with different levels of relevance to the query and thus derive a more accurate ranking. We study two variants of the prompt template, coupled with different numbers of relevance levels. Our experiments on 8 BEIR data sets show that adding fine-grained relevance labels significantly improves the performance of LLM rankers.
# 1 Introduction | 2310.14122#1 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 2 | # 1 Introduction
Large language models (LLMs) such as GPT-4 (OpenAI, 2023) and PaLM 2 (Google et al., 2023) have demonstrated impressive zero-shot performance on a variety of NLP tasks. Recently, there has been a growing interest in applying LLMs to zero-shot text ranking, with remarkably impressive results. The earliest zero-shot LLM rankers are pointwise (Liang et al., 2022; Sachan et al., 2022), which score one query and one document at each time and rank the documents based on the scores. Lately, pairwise (Qin et al., 2023) and listwise (Sun et al., 2023; Ma et al., 2023) LLM rankers also show strong performance, but they cannot scale to long lists and still largely rely on a high-quality first-stage ranking.
A typical category of pointwise LLM rankers is relevance generation (Liang et al., 2022). In this method, the LLM is prompted to answer whether a document is relevant to the query (or answers the | 2310.14122#2 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 3 | query). Existing pointwise LLM rankers mostly ask the LLM to answer "Yes" or "No" and use the predicted likelihood of these two answers to derive the ranking score for the given query-document pair. Nevertheless, documents in many datasets are not always entirely relevant or irrelevant to the query. Some documents may not be primarily intended to answer the query, but still contain helpful information. There is no accurate mapping between these documents and the binary options.
Studies on human subjects show that using binary options sometimes leads to biased answers (Rivera-Garrido et al., 2022). Instead, providing reasonably fine-grained options can lead to more reliable results (Roitero et al., 2018; Birkett, 1986; Rivera-Garrido et al., 2022; Johnston et al., 2017). Actually, in information retrieval data sets, the annotation guidelines for human annotators often employ multiple relevance levels, like the 3-level scale used in TREC-COVID (Voorhees et al., 2021) and TREC-Robust (Voorhees, 2005), as well as the 4-level scale used in TREC-DL (Craswell et al., 2020, 2021). We believe that a zero-shot LLM ranker might share the same behavior pattern with human annotators. | 2310.14122#3 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 4 | Therefore, we propose to explicitly provide fine-grained relevance labels in the prompt to zero-shot LLM rankers. Instead of asking the LLM to choose between two options, we provide the LLM with fine-grained relevance labels, such as: "Highly Relevant", "Somewhat Relevant" and "Not Relevant". We then collect the LLM likelihood of all the relevance labels to derive the ranking score for each query-document pair. The intuition is that the intermediate relevance labels in the prompt will serve as a "cue" to the LLM that partially relevant documents need to be distinguished from fully relevant or fully irrelevant documents. In addition, by collecting the likelihood on more fine-grained relevance labels, we can obtain a more accurate estimate of the actual relevance, and thereby derive
a better ranking. It is important to note that our focus is on developing LLM rankers, which is different from LLM assessors (Faggioli et al., 2023; Thomas et al., 2023), as our goal is only to derive a high-quality ranking with accurate top-ranked documents instead of estimating the precise (and often discrete) relevance for each individual document to sort ranking systems. | 2310.14122#4 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 5 | We evaluate our prompts for zero-shot LLM ranking on 8 data sets from BEIR (Thakur et al., 2021). The results show that simply adding the intermediate relevance labels allows LLM rankers to achieve substantially higher ranking performance consistently across different data sets, regardless of whether the actual ground-truth labels of the data set contain multiple graded relevance levels. An in-depth analysis shows that the new prompt enables LLM rankers to distinguish documents that are indistinguishable when there are only two options provided. We believe this discovery can benefit not only text ranking applications, but other domains such as recommendations (Fan et al., 2023; Wu et al., 2023) and user rating prediction (Kang et al., 2023).
# 2 Related Work
Zero-shot LLM rankers. An emerging thread of research explores how to use general-purpose LLMs for zero-shot text ranking, a shift from tuning-based learning to rank on textual and traditional tabular datasets (Nogueira et al., 2019; Han et al., 2020; Zhuang et al., 2021; Nogueira et al., 2020; Zhuang et al., 2023a; Xian et al., 2022; Liu, 2009; Qin et al., 2021). | 2310.14122#5 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 6 | Pointwise rankers take a single query-document pair as input and return a ranking score. The ranked list is obtained by sorting documents based on their ranking scores. The ranking score is typically calculated based on how likely the document is relevant to the query (Liang et al., 2022) or how likely the query can be generated from the document (Sachan et al., 2022). Our work is most related to this line of research. We will revisit more technical details in Section 3.
Pairwise (Qin et al., 2023) and listwise (Sun et al., 2023; Ma et al., 2023; Zhuang et al., 2023b) LLM rankers take multiple documents as input and return the ranking directly. They are usually applied iteratively on smaller sets of documents and often rely on a pointwise first-stage ranker. In this paper, we only focus on pointwise LLM rankers. | 2310.14122#6 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 7 | Zero-shot LLM assessors. Another related research area (Faggioli et al., 2023; Thomas et al., 2023) employs LLMs as assessors. The goal of LLM assessors is to provide a relevance label for every query-document pair, so that the label aligns with the ground-truth relevance label, potentially created by human assessors. Existing studies (Faggioli et al., 2023; Thomas et al., 2023) also prompt LLMs with fine-grained relevance labels. LLM assessors are usually used to create an evaluation data set, which can be used to reliably evaluate different ranking models. This is different from LLM rankers, which typically only need to ensure that the relative order of the top-ranked documents is accurate. A perfect LLM assessor would also be a perfect LLM ranker, but when LLM capabilities are limited, the priorities of LLM assessor and LLM ranker development diverge.
# 3 LLM Rankers
In this section, we first revisit existing pointwise LLM rankers. Then we introduce the prompting method of our LLM rankers, which score fine-grained relevance labels, and how we obtain the final ranking scores.
# 3.1 Preliminaries | 2310.14122#7 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 8 | # 3.1 Preliminaries
Pointwise rankers. We formally describe how a pointwise ranker tackles a ranking problem. Considering a query q and a list of candidate documents d = (d1, . . . , dm), a pointwise ranker f takes each query-document pair (q, di) as input and predicts a ranking score f(q, di) ∈ R, which reflects the relevance of the document to the query. Once the pointwise ranker has inferred ranking scores for all documents, we can obtain a ranked list by sorting the documents based on their predicted scores.
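A schematic sketch of this protocol is shown below; `score` stands in for any concrete scoring function f(q, di), and the toy word-overlap scorer exists only to make the sketch runnable.

```python
# Schematic pointwise ranking: score each (query, document) pair
# independently, then sort by descending score.
from typing import Callable, List

def pointwise_rank(query: str, docs: List[str],
                   score: Callable[[str, str], float]) -> List[str]:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)

# Toy stand-in scorer (word overlap), purely illustrative.
toy_score = lambda q, d: float(len(set(q.lower().split()) & set(d.lower().split())))
print(pointwise_rank("llm rankers", ["a note on llm rankers", "unrelated text"], toy_score))
```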
Zero-shot LLM rankers. Existing explorations using zero-shot LLMs as pointwise rankers can be broadly divided into two categories: relevance generation (Liang et al., 2022) and query generation (Sachan et al., 2022).
Relevance generation methods prompt the LLM with both the query q and the document d and ask whether the document is relevant to the query with "Yes" or "No" (see Figure 1(a)). To calculate the ranking score, one can use the LLM's log-likelihood score s1 = LLM(Yes|q, d) and s0 = LLM(No|q, d), and normalize them with a | 2310.14122#8 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 9 | [Figure 1(a): Yes-No relevance generation, a prompt with "Query: {query}" and "Output:" fields.]
can obtain the log-likelihood of the LLM generating each relevance label:
sk = LLM(lk|q, d) (1)
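In code, Equation (1) amounts to scoring each candidate label as a continuation of the ranking prompt; `lm_logprob` below is a hypothetical helper returning the LLM log-likelihood of a continuation given a prompt, not an API from the paper.

```python
# Sketch of Equation (1): s_k = LLM(l_k | q, d) for each candidate label.
# `lm_logprob(prompt, continuation)` is a hypothetical scoring helper.
LABELS = ["Not Relevant", "Somewhat Relevant", "Highly Relevant"]  # l_0, l_1, l_2

def label_logliks(prompt: str, lm_logprob, labels=LABELS) -> list:
    return [lm_logprob(prompt, label) for label in labels]  # [s_0, s_1, s_2]
```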
[Figure 1(b): a prompt asking to judge whether the query and document are "Highly Relevant", "Somewhat Relevant", or "Not Relevant", with "Query: {query}", "Document: {document}", and "Output:" fields.]
This example is illustrated in Figure 1(b). | 2310.14122#9 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 10 | This example is illustrated in Figure 1(b).
Rating scale. To avoid using relevance labels with potentially ambiguous order, we can also employ a rating scale. For example, we can prompt the LLM to rate the relevance between the query q and the document d on a scale from 0 to 4. We can then use the LLM to obtain the log-likelihood [s0, . . . , s4] of generating each relevance scale value [l0, . . . , l4], which are "0" to "4" respectively. This method allows us to try arbitrarily fine-grained relevance levels in the prompt. Figure 1(c) illustrates an example of this prompt.
[Figure 1: Illustration of different prompting strategies for relevance generation LLM rankers. Panels: (a) Yes-No relevance generation; (b) fine-grained relevance label generation; (c) rating scale relevance generation, with the prompt "From a scale of 0 to 4, judge the relevance between the query and the document. Query: {query} Document: {document} Output:".]
softmax function (Nogueira et al., 2020):

f(q, d) = exp(s1) / (exp(s1) + exp(s0))

# 3.3 Ranking Scores

Once we obtain the log-likelihood of each relevance label, we can derive the ranking scores. | 2310.14122#10 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
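A two-line sketch of this binary normalization (the log-likelihood values are made up for illustration):

```python
# Sketch of the softmax above: f(q, d) = exp(s1) / (exp(s1) + exp(s0)),
# where s1 = LLM("Yes" | q, d) and s0 = LLM("No" | q, d).
import math

def yes_no_score(s_yes: float, s_no: float) -> float:
    return math.exp(s_yes) / (math.exp(s_yes) + math.exp(s_no))

print(yes_no_score(-0.4, -1.8))  # illustrative log-likelihoods -> ~0.80
```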
2310.14122 | 11 | f(q, d) = exp(s1) / (exp(s1) + exp(s0))
Query generation methods provide the LLM with the document d as input and ask the LLM to generate a query that d answers. The ranking score is then obtained by the log-likelihood of the LLM generating the actual query q, i.e.,

f(q, d) = LLM(q|d)

We focus on relevance generation LLM rankers in this work.

Expected relevance values (ER). The most straightforward way is to calculate the expected relevance value. To do this, we first derive the marginal probability of generating each relevance label given all the candidate relevance labels by:

pk = exp(sk) / Σk' exp(sk')    (2)

Then, we can assign a series of relevance values [y0, y1, y2] to all the relevance labels [l0, l1, l2], where yk ∈ R. The relevance value should reflect the relevance degree expressed by the textual relevance label. We can then calculate the ranking score as the expected relevance value by:
# 3.2 Prompts | 2310.14122#11 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 12 | We focus on relevance generation LLM rankers in this work.
# 3.2 Prompts
In many datasets, there exist documents that are only partially or marginally relevant to the query. These documents do not directly answer the query but may contain some relevant information. When not explicitly prompted, LLMs may struggle to decide whether to classify such documents as relevant or irrelevant.
f(q, d) = Σk pk · yk    (3)
The relevance values yk can be provided by users or even tuned based on a training data set. In our experiments, we find that with relevance labels starting from the least relevant to the most relevant, naïvely assigning yk = k can already provide great performance. Hence, we simply use yk = k.
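Putting Equations (2) and (3) together, a minimal sketch of the ER ranking score with yk = k (the example log-likelihoods are made up):

```python
# Sketch of Equations (2)-(3): softmax over label log-likelihoods, then
# the expected relevance value with y_k = k.
import numpy as np

def expected_relevance(label_logliks) -> float:
    s = np.asarray(label_logliks, dtype=float)  # s_k from Equation (1)
    p = np.exp(s - s.max())
    p /= p.sum()                                # Equation (2), numerically stable
    y = np.arange(len(s))                       # y_k = k
    return float((p * y).sum())                 # Equation (3)

# Labels ordered least to most relevant, e.g. [l0, l1, l2].
print(expected_relevance([-2.3, -0.9, -1.2]))   # ~1.25
```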
Fine-grained relevance labels. We extend the classical relevance generation methods by introducing fine-grained relevance labels. Without loss of generality, we use a set of 3-level graded relevance labels as an example: ["Not Relevant", "Somewhat Relevant", "Highly Relevant"], denoted as [l0, l1, l2]. Then, for each query-document pair (q, d), we ask the LLM to evaluate their relevance by choosing from the given relevance labels. We | 2310.14122#12 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 13 | Peak relevance likelihood (PR). Alternatively, since LLM rankers are typically evaluated by ranking metrics which heavily focus on the accuracy of top-ranked items instead of the entire ranked list, we can further simplify the ranking score derivation by only using the log-likelihood of the relevance
Table 1: Relevance labels used in RG-kL. The relevance label with the maximum relevance value is bolded.
Method  Relevance Labels
RG-2L   "Not Relevant", "Relevant"
RG-3L   "Not Relevant", "Somewhat Relevant", "Highly Relevant"
RG-4L   "Not Relevant", "Somewhat Relevant", "Highly Relevant", "Perfectly Relevant"
label with the highest relevance value. For example, "Highly Relevant" is the relevance label with the highest relevance value among "Not Relevant", "Somewhat Relevant" and "Highly Relevant". We still prompt the LLM with all three relevance labels as options, but only use the log-likelihood of "Highly Relevant" as the ranking score.
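As an illustration, the Table 1 label sets can be wrapped in a simple prompt builder; the wording below is an assumed paraphrase, since the exact prompts are given in the paper's Appendix F.

```python
# Label sets from Table 1; the prompt wording is an assumption, not the
# exact template from Appendix F.
RG_LABELS = {
    "RG-2L": ["Not Relevant", "Relevant"],
    "RG-3L": ["Not Relevant", "Somewhat Relevant", "Highly Relevant"],
    "RG-4L": ["Not Relevant", "Somewhat Relevant", "Highly Relevant",
              "Perfectly Relevant"],
}

def rg_prompt(method: str, query: str, doc: str) -> str:
    options = ", ".join(f'"{label}"' for label in RG_LABELS[method])
    return (f"For the following query and document, judge whether they are "
            f"{options}.\nQuery: {query}\nDocument: {doc}\nOutput: ")
```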
More formally, let lk* denote the relevance label expressing the highest relevance level. We can simply rank the documents by: | 2310.14122#13 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 14 | More formally, let lk* denote the relevance label expressing the highest relevance level. We can simply rank the documents by:
f(q, d) = sk*    (4)
Note that sk* is the log-likelihood obtained directly from LLM(lk*|q, d), instead of the marginal probability derived from Equation (2). Hence, it is not necessary to score any other relevance labels using the LLM and could potentially save some decoding cost when using this strategy to derive the ranking score. While this method is shown less effective on smaller models (Nogueira et al., 2020), it works well empirically with larger models in our experiments.
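A minimal sketch of Equation (4), where only the single peak label needs to be scored (`lm_logprob` is the same hypothetical helper as above):

```python
# Sketch of Equation (4) (peak relevance likelihood): rank by the raw
# log-likelihood of the highest relevance label only, e.g. "Highly
# Relevant" for RG-3L.
def pr_score(prompt: str, lm_logprob, peak_label: str = "Highly Relevant") -> float:
    return lm_logprob(prompt, peak_label)  # s_{k*}
```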
# 4 Experiment Setup
Data set. We conduct experiments on 8 chosen data sets (Sun et al., 2023) from BEIR (Thakur et al., 2021): Covid, Touche, DBPedia, SciFact, Signal, News, Robust04, and NFCorpus. Notice that our method is applicable regardless of whether the data set is actually labeled with corresponding graded relevance, since the final outputs of our method are just real-number ranking scores. | 2310.14122#14 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 15 | We use BM25 (Lin et al., 2021) to retrieve the top-100 documents for each data set, and then rank the retrieved documents using LLMs with our proposed methods. We use FLAN PaLM2 S (Google et al., 2023) as the LLM in our experiments.
The ranking performance is measured by NDCG@10 (Järvelin and Kekäläinen, 2002).
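For reference, a minimal sketch of NDCG@10 for a single query, using the common exponential-gain formulation and an ideal ranking computed from the provided list (an assumption; evaluation toolkits may differ in details):

```python
# Minimal sketch of NDCG@10 for one query; `rels` are graded relevance
# labels of the returned documents, in ranked order.
import math

def ndcg_at_10(rels) -> float:
    def dcg(rs):
        return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rs[:10]))
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0

print(ndcg_at_10([2, 0, 1, 0]))  # ~0.96 for this toy ranked list
```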
Compared methods. We compared the following prompting strategies:
1. Query Generation (QG). Ranking documents based on the LLM likelihood of generating the query given the document (Sachan et al., 2022).
2. Binary Relevance Generation (RG-YN). Prompting the LLM with a query-document pair and using the likelihood of "Yes"/"No" to calculate the ranking score (Liang et al., 2022).
3. k-Level Relevance Generation (RG-kL). Prompting the LLM to choose from k relevance labels for each query-document pair. The relevance labels used are listed in Table 1. | 2310.14122#15 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 16 | 4. Rating Scale 0-to-k Relevance Generation (RG-S(0, k)). Prompting the LLM to rate the relevance for each query-document pair using a scale from 0 to k. Notice that for RG-S(0, k), the LLM needs to score the log-likelihood for (k + 1) possible outputs.
The exact prompts can be found in Appendix F.
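The rating-scale prompt can be sketched directly from the Figure 1(c) wording; the k + 1 candidate outputs to score are simply the strings "0" through "k".

```python
# Sketch of the RG-S(0, k) prompt, mirroring the wording in Figure 1(c).
def rating_scale_prompt(query: str, doc: str, k: int = 4) -> str:
    return (f"From a scale of 0 to {k}, judge the relevance between the query "
            f"and the document.\nQuery: {query}\nDocument: {doc}\nOutput: ")

RATING_OUTPUTS = [str(i) for i in range(4 + 1)]  # candidate outputs for RG-S(0, 4)
```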
By default, the ranking scores of our proposed methods are derived using the expected relevance values as shown in Equation (3). When needed, the method name is appended with the suffix "-ER". We also conduct experiments to compare methods with ranking scores derived using peak relevance likelihood according to Equation (4), indicated by suffix "-PR".
# 5 Results
Overall performance. Table 2 summarizes the overall comparison results. We also plot how the performance changes with regard to k for the rating scale prompting method RG-S(0, k) in Figure 2. | 2310.14122#16 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 17 | Overall performance. Table 2 summarizes the overall comparison results. We also plot how the performance changes with regard to k for the rating scale prompting method RG-S(0, k) in Figure 2.
It can be seen that when the LLM is prompted with only 2 relevance labels (RG-YN, RG-2L), the average performance is lower. However, when the LLM is prompted with more fine-grained relevance labels, the performance can be substantially improved. RG-3L on average achieves +2% improvement in NDCG@10 compared with RG-2L and RG-YN. RG-S(0, 4) which uses the rating scale 0 to 4 in the prompt also achieves similar improvement. Note that even on data sets with binary ground-truth labels (e.g., SciFact), using fine-grained relevance labels still achieves substantial improvement. This suggests that the improvement is not merely a result of matching the actual ground-truth relevance levels of the data set. Rather, the
Table 2: Overall ranking performances measured by NDCG@10 on BEIR data sets. The best performances are bolded. Average results that are significantly (paired t-test, p<0.05) better than RG-2L are marked with †. | 2310.14122#17 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 18 | Method      Covid   Touche  DBPedia SciFact Signal  News    Robust04 NFCorpus Average
QG          0.7357  0.2408  0.3773  0.7495  0.2872  0.4156  0.4651   0.3673   0.4548
RG-YN       0.7897  0.2427  0.3696  0.6958  0.3196  0.4588  0.5656   0.3743   0.4770
RG-2L       0.7949  0.2411  0.3590  0.7290  0.2996  0.4623  0.5636   0.3814   0.4789
RG-3L       0.8065  0.2650  0.4013  0.7671  0.3142  0.4890  0.5660   0.3849   0.4992†
RG-4L       0.8063  0.2388  0.4033  0.7766  0.3184  0.4884  0.5635   0.3801   0.4969†
RG-S(0, 2)  0.7760  0.2695  0.3709  0.6921  0.3034  0.4677  0.5557   0.3787   0.4768
RG-S(0, 4)  0.8048  0.2757  0.4190  0.7521  0.3301  0.4790  0.5668   0.3901   0.5022† | 2310.14122#18 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 19 | [Figure 2 plot: average NDCG@10 (y-axis, roughly 0.475 to 0.500) versus the number of relevance levels (x-axis, 2 to 10).]
Figure 2: Comparing average NDCG@10 on 8 BEIR data sets with different number of relevance scales for the rating scale relevance generation method.
Table 3: Comparing different strategies to derive the ranking score. Measured by average NDCG@10 on BEIR data sets.
Prompts      Ranking Score
             ER      PR
RG-2L        0.4789  0.4726
RG-3L        0.4992  0.5005
RG-4L        0.4969  0.4934
RG-S(0, 2)   0.4768  0.4659
RG-S(0, 4)   0.5022  0.4988
fine-grained relevance labels in the LLM prompts help it to develop a more nuanced understanding of relevance. | 2310.14122#19 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 20 | fine-grained relevance labels in the LLM prompts help it to develop a more nuanced understanding of relevance.
However, the exact number of fine-grained relevance labels needed to achieve the performance improvement varies across different prompts. For example, simply using 3-level textual relevance labels is sufficient to achieve average NDCG@10 close to 0.50; but using a rating scale from 0 to 2, which also corresponds to 3 relevance levels, can only obtain NDCG@10 lower than 0.48. Figure 2 shows that for rating scale relevance generation RG-S(0, k), the NDCG@10 only gets close to 0.50 with more than about 4 relevance levels.
On the other hand, further adding more relevance levels does not always improve the performance. For example, RG-4L performance seems to be on par with RG-3L. In Figure 2, the performance from RG-S(0, 4) and RG-S(0, 8) also remains similar, and the performance of RG-S(0, 9) and RG-S(0, 10) is even worse than RG-S(0, 4).
(a) RG-2L vs. RG-S(0, 4) (b) RG-3L vs. RG-S(0, 4) | 2310.14122#20 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 21 | (a) RG-2L vs. RG-S(0, 4) (b) RG-3L vs. RG-S(0, 4)
Figure 3: Comparing ranking score distribution of different methods on the Covid data set.
Ranking score derivation. We also compare the two alternative strategies to derive the ranking scores from LLM likelihood scores. The results are shown in Table 3. Generally, the expected relevance values derived from the marginal probability (Equation (3)) deliver better ranking scores overall. However, the ranking scores derived from peak relevance likelihood (Equation (4)) achieve very close performance to expected relevance values in RG-kL prompts where textual fine-grained relevance labels are used. When downstream applications of the LLM ranker are sensitive to decoding cost, the peak relevance likelihood strategy can provide a more efficient alternative.

Score distribution. We also compare the score distribution of different methods. Figure 3 shows the scatter plot of ranking scores derived from two methods for a random sample of query-document pairs in the Covid data set.
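A sketch of how such a Figure 3-style comparison can be produced, assuming two aligned lists of precomputed ranking scores:

```python
# Sketch of a Figure 3-style scatter plot comparing two methods' ranking
# scores on the same query-document pairs (scores assumed precomputed).
import matplotlib.pyplot as plt

def compare_scores(scores_x, scores_y, name_x="RG-S(0, 4)", name_y="RG-2L"):
    plt.scatter(scores_x, scores_y, s=8, alpha=0.5)
    plt.xlabel(f"{name_x} score")
    plt.ylabel(f"{name_y} score")
    plt.title("Ranking score distribution comparison")
    plt.show()
```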
We observe that RG-2L's ranking scores are mostly positively correlated with RG-S(0, 4)'s | 2310.14122#21 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels |
2310.14122 | 22 | We observe that RG-2L's ranking scores are mostly positively correlated with RG-S(0, 4)'s
(Figure 3(a)). However, RG-2L struggles to distinguish query-document pairs with higher (> 3.0) ranking scores from RG-S(0, 4) and scores them almost equally, with scores close to 1.0. This suggests that providing more fine-grained relevance labels helps the LLM differentiate better among some query-document pairs, particularly the top-ranked documents. When we compare the ranking scores from RG-3L, where more than 2 relevance levels are used (Figure 3(b)), there is almost no such "plateau". The performance of RG-3L and RG-S(0, 4) is also very close.
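A small sketch of the kind of check behind this observation (hypothetical helper; the scores are illustrative, not data from the paper), measuring rank correlation and how often RG-2L saturates near its maximum for pairs that RG-S(0, 4) scores above 3.0:

```python
from scipy.stats import spearmanr

def plateau_fraction(scores_a, scores_b, b_threshold=3.0, eps=0.02):
    # Fraction of pairs that method B scores above `b_threshold` for which
    # method A's score is saturated near its maximum of 1.0.
    high = [a for a, b in zip(scores_a, scores_b) if b > b_threshold]
    return sum(a > 1.0 - eps for a in high) / len(high) if high else 0.0

rg2l = [0.99, 0.99, 0.98, 0.60, 0.31]   # illustrative RG-2L scores
rgs04 = [3.8, 3.5, 3.2, 2.1, 0.7]       # illustrative RG-S(0, 4) scores
corr, _ = spearmanr(rg2l, rgs04)        # overall positive correlation
frac = plateau_fraction(rg2l, rgs04)    # size of the "plateau"
```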
# 6 Conclusion | 2310.14122#22 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 23 | # 6 Conclusion
In this work, we explore the use of more fine-grained relevance labels in the prompt for pointwise zero-shot LLM rankers instead of the binary labels used in existing works. We propose to either provide intermediate relevance labels such as "Somewhat Relevant" as additional choices for the LLM or ask the LLM to rate the relevance between query-document pairs using a rating scale. Then we aggregate the likelihood of different relevance levels into ranking scores to obtain the ranked list. Our experiments on BEIR data sets demonstrate that prompting with fine-grained relevance labels can consistently improve the ranking performance across different data sets, as it enables the model to better differentiate query-document pairs potentially ranked at the top.
We believe our discovery can be further extended to applications beyond information retrieval. For example, the same method can be applied to recommendation (Fan et al., 2023; Wu et al., 2023), where the LLM is asked to rate how likely a user is to buy an item.
# 7 Limitations
In this work, we assume that the predicted likelihood for any generated text can be accessed. However, we are aware that this might not always be true for many commercial LLMs, where users can only interact through specific APIs. | 2310.14122#23 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 24 | Another limitation is that our experiments are conducted using only one LLM, FLAN PaLM2 S. While we believe the results can generalize to other LLMs, we do not have the resources to verify this.
# References
Nicholas J Birkett. 1986. Selecting the number of response categories for a Likert-type scale. In Proceedings of the American Statistical Association, volume 1, pages 488–492.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv preprint arXiv:2102.07662.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv preprint arXiv:2003.07820. | 2310.14122#24 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 25 | Guglielmo Faggioli, Laura Dietz, Charles LA Clarke, Gianluca Demartini, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Benno Stein, et al. 2023. Perspectives on large language models for relevance judgment. In Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval, pages 39–50.
Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Jiliang Tang, and Qing Li. 2023. Recommender systems in the era of large language models (LLMs). arXiv preprint arXiv:2307.02046. | 2310.14122#25 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 26 | Google, Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier | 2310.14122#26 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 29 | Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. PaLM 2 technical report.
Shuguang Han, Xuanhui Wang, Mike Bendersky, and Marc Najork. 2020. Learning-to-rank with BERT in TF-Ranking. arXiv preprint arXiv:2004.08476.
Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422–446. | 2310.14122#29 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 30 | Robert J Johnston, Kevin J Boyle, Wiktor Adamowicz, Jeff Bennett, Roy Brouwer, Trudy Ann Cameron, W Michael Hanemann, Nick Hanley, Mandy Ryan, Riccardo Scarpa, et al. 2017. Contemporary guidance for stated preference studies. Journal of the Association of Environmental and Resource Economists, 4(2):319–405.
Wang-Cheng Kang, Jianmo Ni, Nikhil Mehta, Maheswaran Sathiamoorthy, Lichan Hong, Ed Chi, and Derek Zhiyuan Cheng. 2023. Do LLMs understand user preferences? Evaluating LLMs on user rating prediction. arXiv preprint arXiv:2305.06474.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110. | 2310.14122#30 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 31 | Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356–2362.
Tie-Yan Liu. 2009. Learning to Rank for Information Retrieval. Now Publishers Inc.
Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. arXiv preprint arXiv:2305.02156.
Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 708–718.
Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT. arXiv preprint arXiv:1910.14424. | 2310.14122#31 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 32 | OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, et al. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv preprint arXiv:2306.17563.
Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, and Marc Najork. 2021. Are neural rankers still outperformed by gradient boosted decision trees? In International Conference on Learning Representations.
Noelia Rivera-Garrido, MP Ramos-Sosa, Michela Accerenzi, and Pablo Brañas-Garza. 2022. Continuous and binary sets of responses differ in the field. Scientific Reports, 12(1):14376. | 2310.14122#32 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 33 | Kevin Roitero, Eddy Maddalena, Gianluca Demartini, and Stefano Mizzaro. 2018. On fine-grained relevance scales. In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 675–684.
Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. 2022. Improving passage retrieval with zero-shot question generation. arXiv preprint arXiv:2204.07496.
Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv preprint arXiv:2304.09542.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). | 2310.14122#33 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 34 | Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra. 2023. Large language models can accurately predict searcher preferences. arXiv preprint arXiv:2309.10621.
Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2021. TREC-COVID: Constructing a pandemic information retrieval test collection. In ACM SIGIR Forum, volume 54, pages 1–12. ACM New York, NY, USA.
Ellen M Voorhees. 2005. The TREC robust retrieval track. In ACM SIGIR Forum, volume 39, pages 11–20. ACM New York, NY, USA.
Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, et al. 2023. A survey on large language models for recommendation. arXiv preprint arXiv:2305.19860. | 2310.14122#34 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 35 | Ruicheng Xian, Honglei Zhuang, Zhen Qin, Hamed Zamani, Jing Lu, Ji Ma, Kai Hui, Han Zhao, Xuanhui Wang, and Michael Bendersky. 2022. Learning list-level domain-invariant representations for ranking. arXiv preprint arXiv:2212.10764.
Honglei Zhuang, Zhen Qin, Shuguang Han, Xuanhui Wang, Michael Bendersky, and Marc Najork. 2021. Ensemble distillation for BERT-based ranking models. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, pages 131–136.
Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. 2023a. RankT5: Fine-tuning T5 for text ranking with ranking losses. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2308–2313. | 2310.14122#35 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 36 | Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. 2023b. A setwise approach for effective and highly efficient zero-shot ranking with large language models. arXiv preprint arXiv:2310.09497.
# A Alternative Relevance Levels
We replace the relevance levels with other phrases to examine how the performance changes. For RG-2L, we replace "Not Relevant" with "Irrelevant"; for RG-3L, we replace "Somewhat Relevant" with "Partially Relevant".
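A minimal sketch of how such label variants might be swapped into an RG-3L style prompt (our illustration, not the paper's exact template):

```python
RG3L_DEFAULT = ["Not Relevant", "Somewhat Relevant", "Highly Relevant"]
RG3L_VARIANT = ["Not Relevant", "Partially Relevant", "Highly Relevant"]

def build_prompt(query, document, labels):
    # Offer the textual relevance levels as the answer choices.
    options = ", ".join(f'"{label}"' for label in labels)
    return (f"For the following query and document, judge whether they are "
            f"{options}.\nQuery: {query}\nDocument: {document}\nOutput:")

prompt = build_prompt("example query", "example document", RG3L_VARIANT)
```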
The results are shown in Table 4. Regardless of the textual representations of relevance labels used, RG-3L consistently outperforms RG-2L. This suggests that the discovery in this paper is generalizable to different choices of textual relevance labels. Another observation is that RG-2L performance varies slightly more than RG-3L performance. This might indicate that RG-3L is more robust to different wording of relevance labels.
Table 4: Comparing ranking performance with different textual relevance levels. Measured by average NDCG@10 on BEIR data sets. | 2310.14122#36 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 37 | Table 4: Comparing ranking performance with different textual relevance levels. Measured by average NDCG@10 on BEIR data sets.
| Method | Relevance Levels | Average |
| --- | --- | --- |
| RG-2L | "Irrelevant", "Relevant" | 0.4717 |
| RG-2L | "Not Relevant", "Relevant" | 0.4789 |
| RG-3L | "Not Relevant", "Partially Relevant", "Highly Relevant" | 0.4975 |
| RG-3L | "Not Relevant", "Somewhat Relevant", "Highly Relevant" | 0.4992 |
We also experiment with a different rating-scale formulation. Instead of prompting the LLM to rate the relevance from 0 to k, we also try asking the LLM to rate the relevance from 1 to k, denoted as RG-S(1, k). We plot the average NDCG@10 performance in Figure 4.
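A hypothetical rendering of the two rating-scale prompts being compared (not the paper's exact wording): RG-S(0, k) offers ratings 0..k while RG-S(1, k) offers 1..k.

```python
def rating_scale_prompt(query, document, low, k):
    # Ask the LLM for an integer relevance rating on a [low, k] scale.
    return (f"From a scale of {low} to {k}, judge the relevance between the "
            f"query and the document.\nQuery: {query}\n"
            f"Document: {document}\nOutput:")

prompt_0k = rating_scale_prompt("q", "d", low=0, k=4)  # RG-S(0, 4)
prompt_1k = rating_scale_prompt("q", "d", low=1, k=4)  # RG-S(1, 4)
```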
The performance of the two methods does not differ much when k is larger than 4, but not providing the "0" option substantially hurts the performance when k is lower than or equal to 3. This might also suggest that using the rating scale from 0 to k is slightly more robust. | 2310.14122#37 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 38 | [Figure 4 plot: average NDCG@10 as a function of k for RG-S(0, k) and RG-S(1, k).]
Figure 4: Comparing rating scale relevance generation with different prompts.
# B In-Depth Score Distribution
We plot the in-depth score distribution of our methods. Specifically, we group the query-document pairs in the Covid data set by ground-truth relevance and plot the distribution of the marginal probability pk for each prompted relevance label lk respectively. Figures 5 and 6 show the results on the Covid data set when we use RG-S(0, 4) and RG-4L respectively. The ground-truth relevance of the Covid data set is 0, 1 or 2. | 2310.14122#38 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 39 | In Figure 5, we observe that the distributions of the marginal probability pk of relevance labels "0", "1" and "2" shift down towards 0 as the ground-truth relevance increases. Meanwhile, the distributions of pk for relevance labels "3" and "4" shift up towards 1. In Figure 6, we find a similar trend where the distributions of the marginal probability pk of "Not Relevant" and "Somewhat Relevant" shift down towards 0 as the ground-truth relevance increases, while the distributions of pk for "Highly Relevant" and "Perfectly Relevant" shift up towards 1. This reveals how our expected relevance values (ER) method works in practice, and also gives us hints on how peak relevance likelihood (PR) alone works, based on the distribution shift of the peak relevance label.
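A sketch of the grouping behind this analysis (hypothetical data layout, not the paper's code): average the per-pair marginal probabilities pk within each ground-truth grade and inspect how the mass moves across labels.

```python
from collections import defaultdict

def mean_label_probs_by_grade(pairs):
    # `pairs` is a list of (ground_truth_grade, [p_0, ..., p_k]) tuples.
    buckets = defaultdict(list)
    for grade, probs in pairs:
        buckets[grade].append(probs)
    return {grade: [sum(col) / len(col) for col in zip(*rows)]
            for grade, rows in buckets.items()}

# e.g. for RG-S(0, 4): grade-2 pairs should put more mass on labels 3 and 4.
stats = mean_label_probs_by_grade([(0, [0.50, 0.30, 0.10, 0.07, 0.03]),
                                   (2, [0.05, 0.10, 0.20, 0.35, 0.30])])
```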
# C Varying Assigned Relevance Values | 2310.14122#39 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 40 | # C Varying Assigned Relevance Values
We also investigate how the user-provided relevance values yk make a difference to the ranking performance. We use RG-3L as the example. We fix y0 = 0 for "Not Relevant" and y2 = 2 for "Highly Relevant", but vary the relevance value y1 for "Somewhat Relevant" between y0 and y2. We evaluate the average NDCG@10 on the 8 BEIR data sets and present the results in Table 5.
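As a sketch, the sweep amounts to recomputing the expected relevance value with different assigned values (illustrative probabilities; `er_score` is a hypothetical helper):

```python
def er_score(probs, values):
    # Expected relevance value: probability-weighted sum of assigned values.
    return sum(p * v for p, v in zip(probs, values))

probs = [0.2, 0.5, 0.3]  # p_k for "Not", "Somewhat", "Highly" (illustrative)
for y1 in (0.0, 0.5, 1.0, 1.5, 2.0):
    score = er_score(probs, [0.0, y1, 2.0])  # y_0 and y_2 stay fixed
```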
As y1 varies, the average NDCG@10 does not change substantially when y1 decreases. Even when y1 = y0, the NDCG@10 performance remains high. This is expected, as the NDCG@10 metric only focuses on the top-ranked items; changing the relevance values of intermediate relevance labels may not change the order of the top-ranked items much. This is also similar to using the peak relevance likelihood method.
In contrast, when y1 = y2, the performance drops significantly to about the same level as RG-2L. This might indirectly explain why RG-2L performance is worse than RG-3L, as it might not be able to distinguish partially relevant and highly relevant documents. | 2310.14122#40 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 41 | Table 5: Comparing ranking performance with different relevance values yk. Measured by average NDCG@10 on BEIR data sets.
| Method | [y0, y1, y2] | Average |
| --- | --- | --- |
| RG-3L | [0.00, 0.00, 2.00] | 0.5000 |
| RG-3L | [0.00, 0.50, 2.00] | 0.5000 |
| RG-3L | [0.00, 1.00, 2.00] | 0.4992 |
| RG-3L | [0.00, 1.50, 2.00] | 0.4990 |
| RG-3L | [0.00, 2.00, 2.00] | 0.4779 |
Table 6: Comparing ranking performance with instructions and in-context learning. Measured by average NDCG@10 on BEIR data sets.
| Method | Average |
| --- | --- |
| RG-2L | 0.4789 |
| RG-2L + Instructions | 0.4914 |
| RG-2L + Instructions + 4-shot ICL | 0.4914 |
| RG-3L | 0.4992 |
| RG-3L + Instructions | 0.5034 |
| RG-3L + Instructions + 4-shot ICL | 0.5046 |
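All tables here report NDCG@10 (Järvelin and Kekäläinen, 2002); a compact reference implementation of the standard formula (ours, not code from the paper) for a single ranked list:

```python
import math

def ndcg_at_k(ranked_gains, all_judged_gains, k=10):
    # DCG with exponential gain; the ideal DCG sorts all judged gains.
    def dcg(gains):
        return sum((2 ** g - 1) / math.log2(i + 2)
                   for i, g in enumerate(gains[:k]))
    ideal = dcg(sorted(all_judged_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal > 0 else 0.0

# Graded relevance of documents in ranked order vs. the full judged set.
score = ndcg_at_k([2, 0, 1, 0], [2, 1, 0, 0])
```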
# D Instructions and In-Context Learning | 2310.14122#41 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 42 | # D Instructions and In-Context Learning
We also try adding instructions and few-shot exemplars into the prompt. For instructions, we directly add the definitions of the relevance labels into the prompt. The relevance label definitions are directly copied from TREC-DL 2020 (Craswell et al., 2021). For RG-2L instructions we use the "Irrelevant" and "Relevant" labels; for RG-3L instructions we use the "Irrelevant", "Relevant" and "Highly Relevant" labels. We also change the relevance labels accordingly to align with the instructions. | 2310.14122#42 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 43 | In addition to instructions, we also try to include few-shot exemplars to leverage the model's in-context learning capabilities. We include 4-shot exemplars, which are randomly sampled from the TREC-DL 2020 data set. We sampled 2 "Irrelevant", 1 "Relevant" and 1 "Perfectly Relevant" query-document pairs. To align with the instructions, for RG-2L we label both the "Relevant" and "Perfectly Relevant" exemplar query-document pairs as "Relevant"; for RG-3L we label the "Perfectly Relevant" pair as "Highly Relevant".
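A hypothetical assembly of this 4-shot prompt (our sketch; the exemplar texts and label definitions stand in for the TREC-DL 2020 material):

```python
def few_shot_prompt(instructions, exemplars, query, document):
    # Instructions (label definitions) first, then exemplars, then the test pair.
    shots = "\n\n".join(f"Query: {q}\nDocument: {d}\nOutput: {label}"
                        for q, d, label in exemplars)
    return (f"{instructions}\n\n{shots}\n\n"
            f"Query: {query}\nDocument: {document}\nOutput:")

exemplars = [("q1", "d1", "Irrelevant"),
             ("q2", "d2", "Irrelevant"),
             ("q3", "d3", "Relevant"),
             ("q4", "d4", "Highly Relevant")]  # RG-3L relabeling of "Perfectly Relevant"
prompt = few_shot_prompt("<TREC-DL 2020 label definitions>", exemplars,
                         "test query", "test document")
```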
The results are shown in Table 6. Adding instructions improves both RG-2L and RG-3L, while RG-3L still remains +1.2% better than RG-2L. Further adding exemplars on top of the instructions does not improve much, possibly due to the distribution discrepancy between TREC-DL and BEIR. | 2310.14122#43 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 44 | [Figure 5 plot: three panels of marginal probability distributions over relevance labels 0-4, one panel each for ground-truth relevance 0, 1 and 2.]
Figure 5: Distribution of marginal probability pk of each relevance label in RG-S(0, 4) for query-document pairs with different ground-truth labels on the Covid data set.
[Figure 6 plot: three panels of marginal probability distributions over the labels "Not Relevant", "Somewhat Relevant", "Highly Relevant" and "Perfectly Relevant", one panel each for ground-truth relevance 0, 1 and 2.] | 2310.14122#44 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 46 | Table 7 (NDCG@10 on BEIR data sets):
| Method | Model | Covid | Touche | DBPedia | SciFact | Signal | News | Robust04 | NFCorpus | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BM25 | N/A | 0.5947 | 0.4422 | 0.3180 | 0.6789 | 0.3305 | 0.3952 | 0.4070 | 0.3075 | 0.4342 |
| QG | FLAN PaLM2 S | 0.7357 | 0.2408 | 0.3773 | 0.7495 | 0.2872 | 0.4156 | 0.4651 | 0.3673 | 0.4548 |
| RG-YN | FLAN PaLM2 S | 0.7897 | 0.2427 | 0.3696 | 0.6958 | 0.3196 | 0.4588 | 0.5656 | 0.3743 | 0.4770 |
| RG-2L-ER | FLAN PaLM2 S | 0.7949 | 0.2411 | 0.3590 | 0.7290 | 0.2996 | 0.4623 | 0.5636 | 0.3814 | 0.4789 |
| RG-2L-PR | FLAN PaLM2 S | 0.7874 | 0.2482 | 0.3435 | 0.7230 | 0.2819 | 0.4619 | 0.5647 | 0.3706 | 0.4726 |
| RG-3L-ER | FLAN PaLM2 S | 0.8065 | 0.2650 | 0.4013 | 0.7671 | 0.3142 | 0.4890 | 0.5660 | 0.3849 | 0.4992 |
| RG-3L-PR | FLAN PaLM2 S | 0.8065 | 0.2634 | 0.4032 | 0.7745 | 0.3202 | 0.4816 | 0.5681 | 0.3860 | 0.5005 |
| RG-4L-ER | FLAN PaLM2 S | 0.8063 | 0.2388 | 0.4033 | 0.7766 | 0.3184 | 0.4884 | 0.5635 | 0.3801 | 0.4969 |
| RG-4L-PR | FLAN PaLM2 S | 0.8076 | 0.2354 | 0.4050 | 0.7772 | 0.3121 | 0.4712 | 0.5561 | 0.3824 | 0.4934 |
| 2310.14122#46 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 47 | Table 7 (continued):
| Method | Model | Covid | Touche | DBPedia | SciFact | Signal | News | Robust04 | NFCorpus | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RG-2L-ER | FLAN PaLM2 S | 0.7949 | 0.2411 | 0.3590 | 0.7290 | 0.2996 | 0.4623 | 0.5636 | 0.3814 | 0.4789 |
| RG-2L-PR | FLAN PaLM2 S | 0.7874 | 0.2482 | 0.3435 | 0.7230 | 0.2819 | 0.4619 | 0.5647 | 0.3706 | 0.4726 |
| RG-3L-ER | FLAN PaLM2 S | 0.8065 | 0.2650 | 0.4013 | 0.7671 | 0.3142 | 0.4890 | 0.5660 | 0.3849 | 0.4992 |
| RG-3L-PR | FLAN PaLM2 S | 0.8065 | 0.2634 | 0.4032 | 0.7745 | 0.3202 | 0.4816 | 0.5681 | 0.3860 | 0.5005 |
| RG-4L-ER | FLAN PaLM2 S | 0.8063 | 0.2388 | 0.4033 | 0.7766 | 0.3184 | 0.4884 | 0.5635 | 0.3801 | 0.4969 |
| RG-4L-PR | FLAN PaLM2 S | 0.8076 | 0.2354 | 0.4050 | 0.7772 | 0.3121 | 0.4712 | 0.5561 | 0.3824 | 0.4934 |
| RG-S(0, 2)-ER | FLAN PaLM2 S | 0.7760 | 0.2695 | 0.3709 | 0.6921 | 0.3034 | 0.4677 | 0.5557 | 0.3787 | 0.4768 |
| RG-S(0, 2)-PR | FLAN PaLM2 S | 0.7821 | 0.2735 | 0.3469 | 0.6954 | 0.2597 | 0.4540 | 0.5409 | 0.3752 | 0.4659 |
| RG-S(0, 4)-ER | FLAN PaLM2 S | 0.8048 | 0.2757 | 0.4190 | 0.7521 | 0.3301 | 0.4790 | 0.5668 | 0.3901 | 0.5022 |
| RG-S(0, 4)-PR | FLAN PaLM2 S | 0.8036 | 0.2785 | 0.4221 | 0.7625 | 0.3168 | 0.4623 | 0.5559 | 0.3886 | 0.4988 |
| 2310.14122#47 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 48 | Table 7 (continued):
| Method | Model | Covid | Touche | DBPedia | SciFact | Signal | News | Robust04 | NFCorpus | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RG-S(0, 2)-ER | FLAN PaLM2 S | 0.7760 | 0.2695 | 0.3709 | 0.6921 | 0.3034 | 0.4677 | 0.5557 | 0.3787 | 0.4768 |
| RG-S(0, 2)-PR | FLAN PaLM2 S | 0.7821 | 0.2735 | 0.3469 | 0.6954 | 0.2597 | 0.4540 | 0.5409 | 0.3752 | 0.4659 |
| RG-S(0, 4)-ER | FLAN PaLM2 S | 0.8048 | 0.2757 | 0.4190 | 0.7521 | 0.3301 | 0.4790 | 0.5668 | 0.3901 | 0.5022 |
| RG-S(0, 4)-PR | FLAN PaLM2 S | 0.8036 | 0.2785 | 0.4221 | 0.7625 | 0.3168 | 0.4623 | 0.5559 | 0.3886 | 0.4988 |
| monoT5 | Fine-tuned T5 XL | 0.8071 | 0.3241 | 0.4445 | 0.7657 | 0.3255 | 0.4849 | 0.5671 | 0.3897 | 0.5136 |
| RankT5 | Fine-tuned T5 XL | 0.8200 | 0.3762 | 0.4419 | 0.7686 | 0.3180 | 0.4815 | 0.5276 | 0.3860 | 0.5150 |
| RankGPT | GPT-3.5 Turbo | 0.7667 | 0.3618 | 0.4447 | 0.7043 | 0.3212 | 0.4885 | 0.5062 | | |
| PRP | UL2 | 0.7945 | 0.3789 | 0.4647 | 0.7333 | 0.3520 | 0.4911 | 0.5343 | | |
| 2310.14122#48 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 50 | # E More Comparison Results
We also include a more thorough comparison with other methods including:
⢠BM25. The base retriever performance.
⢠monoT5 (Nogueira et al., 2020). A T5 XL model fine-tuned on MS MARCO data set for
text ranking task and applied directly on the BEIR data sets.
⢠RankT5 (Zhuang et al., 2023a). An encoder- only model initialized with T5 XL but fine- tuned on MS MARCO data set using listwise softmax cross-entropy ranking loss and ap- plied directly on the BEIR data sets.
[Figure 7 plot residue: nDCG (y-axis, roughly 0.465-0.500) versus the number of relevance levels k (x-axis, 2-10) for RG-S(0, k)-ER and RG-S(0, k)-PR.]
Figure 7: Comparing rating scale relevance generation with different strategies to derive ranking scores. | 2310.14122#50 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 51 | Figure 7: Comparing rating scale relevance generation with different strategies to derive ranking scores.
• Pairwise Ranking Prompts (PRP) (Qin et al., 2023). A zero-shot pairwise LLM ranker which takes a query and two documents as input, and outputs which one is more relevant to the query. We include the best results of PRP, which uses UL2 as the LLM and a sliding window strategy.
• RankGPT (Sun et al., 2023). A zero-shot listwise LLM ranker which takes a query and a list of documents as input, and outputs an ordered list of documents based on their relevance. The method is used jointly with a sliding window strategy. We do not include the GPT-4 reranking number as it involves a second-stage ranking.
We also include the detailed results of our proposed methods with the two strategies to derive ranking scores. Table 7 illustrates the results. Figure 7 also plots the performance of the rating scale method under the two ranking score derivation strategies.
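To make the two strategies concrete, below is a minimal sketch of how RG-S(0, k)-ER and RG-S(0, k)-PR scores could be computed from an LLM's label log-probabilities; the input name `label_logprobs` and the exact normalization are our assumptions, not the paper's verbatim formulas.

```python
import math

def expected_relevance_score(label_logprobs: dict[str, float]) -> float:
    """ER: weight each rating-scale label value by its renormalized probability.

    `label_logprobs` maps each label string (e.g. "0".."4" for RG-S(0, 4)) to
    the LLM's log-probability of generating it -- a hypothetical input whose
    extraction depends on the LLM API being used.
    """
    probs = {label: math.exp(lp) for label, lp in label_logprobs.items()}
    total = sum(probs.values())  # renormalize over the label set only
    return sum(int(label) * p / total for label, p in probs.items())

def peak_relevance_score(label_logprobs: dict[str, float]) -> float:
    """PR: rank documents by the log-likelihood of the highest label alone."""
    return label_logprobs[max(label_logprobs, key=int)]

# Toy usage on a 0-4 scale: doc_a concentrates mass on high labels, doc_b on low.
doc_a = {"0": -4.0, "1": -2.5, "2": -1.2, "3": -0.9, "4": -2.0}
doc_b = {"0": -0.7, "1": -1.5, "2": -2.4, "3": -3.0, "4": -4.5}
print(expected_relevance_score(doc_a) > expected_relevance_score(doc_b))  # True
print(peak_relevance_score(doc_a) > peak_relevance_score(doc_b))          # True
```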
It is not surprising that our methods perform slightly worse than monoT5 or RankT5, as they are fine-tuned for the text ranking task on the MS MARCO data set. However, it is encouraging to see that our prompting method substantially shrinks the gap between zero-shot LLM rankers and RankT5. | 2310.14122#51 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 52 | Our methods can also perform slightly better than single-stage RankGPT. When compared with PRP, our methods achieve better or close performance on 5 out of 7 overlapping data sets, the exceptions being Touche and DBPedia. However, note that the LLMs used in these experiments are different, so the difference might also be explained by the model difference.
# F Prompts
In this section, we provide the prompts we used for each method:
# F.1 Query Generation (QG)
We use the following prompt for our QG experiments. We find this prompt performs better empirically for zero-shot QG LLM rankers than the prompt used in existing works (Sachan et al., 2022).
I will check whether what you said could answer my question.
You said: {document} I googled: {query}
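For reference, a minimal sketch of how a QG ranker could turn this prompt into a ranking score, in the spirit of Sachan et al. (2022): the document fills the prompt, and the score is the mean log-likelihood of the query tokens as the continuation. The helper `query_token_logprobs` is hypothetical, standing in for whatever API exposes per-token log-probabilities.

```python
QG_PROMPT = (
    "I will check whether what you said could answer my question.\n\n"
    "You said: {document} I googled: "
)

def qg_score(query: str, document: str, query_token_logprobs) -> float:
    """Rank documents by the mean log-likelihood of the query given the document.

    `query_token_logprobs(prefix, continuation)` is assumed to return a list of
    log p(token | prefix + previous tokens) for each token of `continuation`.
    """
    prefix = QG_PROMPT.format(document=document)
    logprobs = query_token_logprobs(prefix, query)
    return sum(logprobs) / max(len(logprobs), 1)  # length-normalized
```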
# F.2 Binary Relevance Generation (RG-YN)
We use the following prompt for our RG-YN experiments. We find this prompt performs better empirically than the prompt used originally by Liang et al. (2022), Sun et al. (2023) and Qin et al. (2023).
For the following query and document, judge whether they are relevant. Output "Yes" or "No".
Query: {query} Document: {document} Output:
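A hedged sketch of turning this prompt's output distribution into a score: one common choice (an assumption here, not necessarily the exact formula used in the experiments) is the probability of "Yes" renormalized over the two labels.

```python
import math

def rg_yn_score(yes_logprob: float, no_logprob: float) -> float:
    """Score a (query, document) pair from the LLM's log-probs of "Yes"/"No".

    Documents are then ranked by this score in descending order.
    """
    p_yes, p_no = math.exp(yes_logprob), math.exp(no_logprob)
    return p_yes / (p_yes + p_no)

# Toy usage: log p("Yes") = -0.3, log p("No") = -1.5 gives a score near 0.77.
print(round(rg_yn_score(-0.3, -1.5), 2))
```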
# F.3 2-Level Relevance Generation (RG-2L) | 2310.14122#52 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.14122 | 53 | Query: {query} Document: {document} Output:
# F.3 2-Level Relevance Generation (RG-2L)
For the following query and document, judge whether they are "Relevant" or "Not Relevant".
Query: {query} Document: {document} Output:
# F.4 3-Level Relevance Generation (RG-3L)
For the following query and document, judge whether they are "Highly Relevant", "Somewhat Relevant", or "Not Relevant".
Query: {query} Document: {document} Output:
# F.5 4-Level Relevance Generation (RG-4L)
For the following query and document, judge whether they are "Perfectly Relevant", "Highly Relevant", "Somewhat Relevant", or "Not Relevant".
Query: {query} Document: {document} Output:
# F.6 Rating Scale Relevance Generation (RG-S(0, k))
From a scale of 0 to {k}, judge the relevance between the query and the document.
Query: {query} Document: {document} Output: | 2310.14122#53 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like "Yes" and
"No". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers. | http://arxiv.org/pdf/2310.14122 | Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky | cs.IR | 13 pages | null | cs.IR | 20231021 | 20231106 | [
{
"id": "2305.06474"
},
{
"id": "2304.09542"
},
{
"id": "2307.02046"
},
{
"id": "2204.07496"
},
{
"id": "2309.10621"
},
{
"id": "2305.19860"
},
{
"id": "2102.07662"
},
{
"id": "2004.08476"
},
{
"id": "2211.09110"
},
{
"id": "2305.02156"
},
{
"id": "2212.10764"
},
{
"id": "2306.17563"
},
{
"id": "2310.09497"
},
{
"id": "1910.14424"
},
{
"id": "2303.08774"
},
{
"id": "2003.07820"
}
] |
2310.12397 | 1 | There has been considerable divergence of opinion on the reasoning abilities of Large Language Models (LLMs). While the initial optimism that reasoning might emerge automatically with scale has been tempered thanks to a slew of counterexamples--ranging from multiplication to simple planning--there is still the widespread belief that LLMs can self-critique and improve their own solutions in an iterative fashion. This belief seemingly rests on the assumption that verification of correctness should be easier than generation--a rather classical argument from computational complexity--which should be irrelevant to LLMs to the extent that what they are doing is approximate retrieval. In this paper, we set out to systematically investigate the effectiveness of iterative prompting of LLMs in the context of Graph Coloring, a canonical NP-complete reasoning problem that is related to propositional satisfiability as well as practical problems like scheduling and allocation. We present a principled empirical study of the performance of GPT4 in solving graph coloring instances or verifying the correctness of candidate colorings--both in direct and iterative modes. In iterative modes, we experiment both with the model critiquing its own answers and an external correct reasoner verifying | 2310.12397#1 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 1 | With the development of large language models (LLMs), striking a balance be- tween the performance and safety of AI systems has never been more critical. However, the inherent tension between the objectives of helpfulness and harmless- ness presents a significant challenge during LLM training. To address this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly decouples human preferences regarding helpfulness and harmlessness, effectively avoiding the crowdworkersâ confusion about the tension and allowing us to train separate reward and cost models. We formalize the safety concern of LLMs as an opti- mization task of maximizing the reward function while satisfying specified cost constraints. Leveraging the Lagrangian method to solve this constrained problem, Safe RLHF dynamically adjusts the balance between the two objectives during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we demonstrate a superior ability to mitigate harmful responses while enhancing model perfor- mance compared to existing value-aligned algorithms. Experimentally, we fine- tuned the Alpaca-7B using Safe RLHF and aligned it with collected human pref- erences, significantly improving its helpfulness and harmlessness according to hu- man evaluations. Code is available at https://github.com/PKU-Alignment/safe-rlhf. Warning: This paper contains example data that may be offensive or harmful. | 2310.12773#1 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 2 | in direct and iterative modes. In iterative modes, we experiment both with the model critiquing its own answers and an external correct reasoner verifying proposed solutions. In both cases, we analyze whether the content of the criticisms actually affects bottom line performance. The study seems to indicate that (i) LLMs are bad at solving graph coloring instances (ii) they are no better at verifying a solution--and thus are not effective in iterative modes with LLMs critiquing LLM-generated solutions (iii) the correctness and content of the criticisms--whether by LLMs or external solvers--seems largely irrelevant to the performance of iterative prompting. We show that the observed effectiveness of LLMs in iterative settings is largely due to the correct solution being fortuitously present in the top-k completions of the prompt (and being recognized as such by an external verifier). Our results thus call into question claims about the self-critiquing capabilities of state of the art LLMs. | 2310.12397#2 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 2 | # INTRODUCTION
Large Language Models (LLMs) have shown remarkable capabilities in understanding instructions (Chung et al., 2022; Ouyang et al., 2022), summarization (Stiennon et al., 2020; Koh et al., 2022) and performing complex reasoning tasks (OpenAI, 2023; Anil et al., 2023), and more. Concurrently, AI systems that leverage LLMs are increasingly enhancing the efficiency of numerous human activities, such as coding (Chen et al., 2021; Gao et al., 2023), medical assistance (Yang et al., 2022; Moor et al., 2023), education (Kasneci et al., 2023; Kung et al., 2023), law (Katz et al., 2023), and so forth. Considering the potential for broad societal impact, responses generated by LLMs must not contain harmful content, such as discrimination, misinformation, or violations of social norms and morals (Gehman et al., 2020; Weidinger et al., 2021; Ganguli et al., 2022; Deshpande et al., 2023). Therefore, the alignment of safety in LLMs has received widespread attention from academia and industry (Christian, 2023).
An essential component of safety alignment involves minimizing the tendency of a model to generate harmful responses through fine-tuning. Recent works demonstrate that Reinforcement Learning
# *Equal Contribution.
1 | 2310.12773#2 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 3 | # Introduction
Large Language Models (LLMs), essentially n-gram models on steroids which have been trained on a web-scale language corpus, have caught the imagination of the AI research community with linguistic behaviors that no one expected text completion systems to possess. Their seeming versatility has led many researchers to wonder whether they can also do well on reasoning tasks typically associated with system 2 competency. Initial excitement based on anecdotal performance of LLMs on reasoning tasks has dissipated to some extent in the face of the recent spate of studies questioning the robustness of such behaviors--be it planning [17, 8], simple arithmetic and logic [5], or general mathematical and abstract reasoning benchmarks [14, 6]. There still exists considerable optimism that even if LLMs can't generate correct solutions in one go, their accuracy improves in an iterative prompting regime, where LLMs will be able to "self-critique" their candidate solutions and refine them to the point of correctness [20, 19, 15, 18, 7]. This belief seems to rest largely on the assumption that verification of correctness
# *Arizona State University, Tempe.
Preprint. Under review.
should be easier than generation for many reasoning problems--a rather classical argument from computational complexity. There are grounds to be skeptical of this assumption, as the complexity of the reasoning task should be irrelevant to LLM performance if what they are doing is approximate retrieval. | 2310.12397#3 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 3 | [Figure 1 residue (diagram text garbled in extraction): the Safe RLHF pipeline runs from Supervised Fine-tuning through Data Collection and Preference Annotation & Preference Modeling -- with a separate reward model R_phi(y, x) and cost model, each with its own objective -- to Policy Optimization; the embedded example dialogues are not recoverable.] | 2310.12773#3 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 4 | In this paper, we set out to systematically investigate the effectiveness of iterative prompting in the context of Graph Coloring, a canonical NP-complete reasoning problem. We chose graph coloring as it is representative both of standard classes of reasoning problems studied in AI--propositional satisfiability and constraint satisfaction--and practical problems like scheduling and allocation. Our methodology involves a principled empirical study of the performance of GPT4 on two tasks: solving a large suite of random graph coloring instances and, separately, verifying the correctness of the candidate colorings--both in direct and iterative modes. In iterative modes, we experiment both with an LLM critiquing LLM-produced solutions and an external, guaranteed correct reasoner verifying solutions. In both cases, we analyze whether the content of criticisms actually affects bottom line performance.
Our results indicate that in direct mode, LLMs are, perhaps not surprisingly, pretty bad at solving graph coloring instances. More interestingly, as we suspected, they are no better at verifying solutions. In iterative modes, given the inability of LLMs to verify solutions, it should come as no surprise that our experiments show that the strategy of LLMs self-critiquing their solutions does not improve over the baseline. It is actually worse because the system can't recognize a correct coloring and thus merrily passes over fortuitously correct colorings it has generated, ending up with a wrong one! | 2310.12397#4 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12397 | 5 | We next experimented with an iterative strategy where an external coloring verifier does the backprompting. Here we looked at three different types of backprompting: (1) the verifier just asks the LLM to try again when the coloring is incorrect, (2) the verifier gives a backprompt showing the first violated constraint in the current candidate coloring and (3) the verifier sends a backprompt showing all violated coloring constraints. We note that these three strategies do lead to modest improvements in the bottom-line performance--improving from about 16% to nearly 40%. The surprising finding, however, is that the minimal information "try again" feedback is nearly as effective as the ones with meaningful backprompts. This led us to consider whether the improvement is due to the type of backprompting (as authors who advocate these types of iterative approaches [20, 19, 15, 10, 4, 11] seem to assume) or because the answer just happens to be in the top-k completions (even if the LLM is itself not cognizant of it). To check this, we experiment with a version of the direct mode where we query the LLM so that it generates more than one potential solution, and have the external verifier pick out any correct solution in the list. The results show that top-k correctness with an external, guaranteed correct verifier is pretty competitive with any iterative backprompting. | 2310.12397#5 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 5 | Figure 1: Safe RLHF pipeline compared to conventional RLHF method. Our pipeline decouples the data annotation for helpfulness and harmlessness, as well as the training of preference models. Ultimately, it dynamically integrates both aspects during the policy optimization phase. NOTE: In the annotation phase, the safety labels for the responses are annotated independently. These responses can be labeled as both safe or both unsafe. | 2310.12773#5 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 6 | Our investigation thus provides significant grounds to be skeptical about the effectiveness of iterative prompting techniques in general, and those relying on the self-critiquing capabilities of LLMs in particular. In the remainder of the paper, we discuss related work, present our experimental methodology, and then detail the results of our experiments.
# 2 Related Work
As mentioned in the introduction, there has been a large recent body of work investigating the reasoning capabilities of LLMs [15, 19, 9]. The studies span different types of reasoning problems--planning [17], logic and arithmetic [5], or the 24 puzzle [19]. The conclusions have also been divergent--with some studies highlighting the limitations of LLMs in reasoning [12, 2], and others arguing that iterative prompting of LLMs can improve their ability to reason. For example, [15] states that "we explore this emergent property of self-reflection in LLMs and empirically show that self-reflection is extremely useful to learn complex tasks over a handful of trials." This paper focuses on understanding these sorts of claims--and especially the effectiveness of iterative prompting. The problem we chose--graph coloring--is a canonical NP-complete reasoning problem well studied in AI and computer science [13]. It has rich connections to propositional logical reasoning--specifically satisfiability and constraint satisfaction problems--and is also related to practical problems including resource allocation and scheduling.
# 3 Methodology
# 3.1 The Graph Coloring Problem | 2310.12397#6 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 6 | with Human Feedback (RLHF) (Christiano et al., 2017; Ouyang et al., 2022) is a practical approach for aligning LLMs with human preferences, both in terms of style and ethical values (Bai et al., 2022a; Ganguli et al., 2022). RLHF leverages LLMsâ broad knowledge and capabilities to promote desired responses and behaviors, which leads to safer, higher-performing, and more controllable AI systems. Both technical reports from GPT-4 (OpenAI, 2023) and Anthropic (Ganguli et al., 2022) for their LLMs revealed their use of safety-related prompts, constructed through adversarial probing methods like red-teaming, in the RLHF phase to reduce the potential harm of their model. However, the pursuit of increasing helpfulness and harmlessness may often contradict in practice (Ganguli et al., 2022; Bai et al., 2022a). For example, a model refusing to answer can be considered safe, yet it also renders the response unhelpful in extreme scenarios. Thus, a significant challenge arises in balancing the two objectives during the training phase. Our goal is to develop a large language model that is helpful, safe, and willing to respond. | 2310.12773#6 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 7 | # 3 Methodology
# 3.1 The Graph Coloring Problem
Because we are interested in LLMs' self-critique capabilities, we chose Graph Coloring, a reasoning domain which is human readable, provides relatively short description and critique lengths, and, most importantly, is very easy to verify and provide feedback for. Though it is difficult to be certain, we also believe that this domain is diverse enough even at low node and edge counts that the instances we examine are very unlikely to be found in the LLM's training data, thus minimizing the risk of model contamination and memorization.
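To illustrate just how easy verification is in this domain, here is a minimal verifier sketch (the function name and representation are our own, not the authors' code): it checks a candidate coloring and returns the violated edges, which is exactly the information the backprompting strategies described earlier need.

```python
def check_coloring(edges, coloring):
    """Return the edges whose endpoints share a color.

    `edges` is an iterable of (u, v) pairs and `coloring` maps each vertex to
    a color; an empty result means the candidate coloring is proper.
    """
    return [(u, v) for u, v in edges if coloring[u] == coloring[v]]

# Toy usage: a triangle colored with two colors has exactly one violated edge.
triangle = [(0, 1), (1, 2), (0, 2)]
print(check_coloring(triangle, {0: "red", 1: "blue", 2: "red"}))  # [(0, 2)]
```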
Graph coloring is a canonical NP-complete reasoning problem that is related both to propositional satisfiability and to practical problems like scheduling and allocation. It is broad enough to give insights into reasoning more generally, and simple enough to be specified and evaluated by a human or by basic pattern matching. | 2310.12397#7 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 7 | To address the above challenge, we propose a novel framework: Safe Reinforcement Learning from Human Feedback (Safe RLHF). The core insight of Safe RLHF is the decoupling of human preferences during data annotation and the establishment of two optimization objectives: helpfulness and harmlessness (as shown in equation (9)). Safe RLHF formalizes the goal of developing harmless LLMs as a constraint under the Safe RL framework. Crucially, we need to balance the helpfulness and harmlessness objectives and avoid over-optimizing for harmlessness.
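In symbols, the formalization described above amounts to reward maximization under a cost constraint, solved via its Lagrangian. The following LaTeX sketch is our reconstruction from the surrounding description; the symbols R_phi, C_psi, and the threshold d are our labels, not a verbatim copy of the paper's equation (9).

```latex
% Sketch of the constrained objective described in the text (notation assumed).
\max_{\theta} \; \mathcal{J}_R(\theta)
  \quad \text{s.t.} \quad \mathcal{J}_C(\theta) \le 0,
\qquad \text{where} \quad
\mathcal{J}_R(\theta) = \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}
  \left[ R_\phi(y, x) \right],
\quad
\mathcal{J}_C(\theta) = \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}
  \left[ C_\psi(y, x) \right] + d.
% The Lagrangian method then adaptively trades off the two objectives:
\min_{\lambda \ge 0} \; \max_{\theta} \;
  \mathcal{J}_R(\theta) - \lambda \, \mathcal{J}_C(\theta).
```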
# The decoupling of preferences and objectives offers two advantages:
• During the data annotation, it ensures that the feedback from crowdworkers remains unbiased by any tension between helpfulness and harmlessness.
• During the Safe RLHF stage, the Lagrangian method (Bertsekas, 1997) can adaptively balance the trade-off between two inherently conflicting training objectives. | 2310.12773#7 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 8 | Common graph coloring benchmark sets consist of the sorts of problems that exact solvers struggle on, boasting triple or quadruple digit numbers of nodes and edges [16]. Current language models don't have sufficiently large context windows to process these, and--as we'll see later--are unlikely to do well on graphs with over twenty nodes. Therefore, we built our own dataset. We use GrinPy to handle common graph operations. Each graph is constructed using the Erdős-Rényi method (p = 0.4), modified so that any generation that fails to be planar or happens to be isomorphic to a previously generated one is retried. Once a successful candidate is found, it is compiled into the standard DIMACS format [1], appended with a comment containing its precalculated chromatic number.
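A hedged sketch of that generation loop, using networkx in place of GrinPy and a simple backtracking search for the chromatic number (workable at these sizes); the function names and retry policy are our reading of the text, not the authors' released code.

```python
import networkx as nx

def chromatic_number(g: nx.Graph) -> int:
    """Exact chromatic number via backtracking; fine for graphs this small."""
    nodes = sorted(g.nodes, key=g.degree, reverse=True)  # high degree first

    def colorable(k: int, i: int = 0, colors: dict | None = None) -> bool:
        colors = {} if colors is None else colors
        if i == len(nodes):
            return True
        v = nodes[i]
        used = {colors[u] for u in g.neighbors(v) if u in colors}
        for c in range(k):
            if c not in used:
                colors[v] = c
                if colorable(k, i + 1, colors):
                    return True
                del colors[v]
        return False

    k = 1
    while not colorable(k):
        k += 1
    return k

def generate_instance(n: int, seen: list, p: float = 0.4):
    """Draw planar, non-isomorphic Erdos-Renyi graphs, retrying on failure."""
    while True:
        g = nx.gnp_random_graph(n, p)
        planar, _ = nx.check_planarity(g)
        if not planar or any(nx.is_isomorphic(g, h) for h in seen):
            continue  # retry, as described above
        seen.append(g)
        chi = chromatic_number(g)
        # Serialize to DIMACS, with the chromatic number as a comment line.
        dimacs = [f"c chromatic number = {chi}",
                  f"p edge {g.number_of_nodes()} {g.number_of_edges()}"]
        dimacs += [f"e {u + 1} {v + 1}" for u, v in g.edges]
        return g, chi, "\n".join(dimacs)
```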
For the following experiments, we generated 100 instances with an average of 24 edges each, spread across node counts from 10 to 17--a distribution chosen because empirical probing revealed it to be an area with volatile enough performance to be interesting. An example of one of the graphs we used is shown in Figure 1, together with the LLM's first response, the backprompt on that response, and the final correct coloring.
# 3.2 Architecture for Iterative Backprompting
All code and results will be made public.
# Prompt Generator: | 2310.12397#8 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 8 | • During the Safe RLHF stage, the Lagrangian method (Bertsekas, 1997) can adaptively balance the trade-off between two inherently conflicting training objectives.
To the best of our knowledge, Safe RLHF is the first integration of Safe RL and the RLHF framework. This framework incorporates a two-dimensional human annotation scheme and a safe training mechanism to enhance model performance while ensuring safety (as shown in Figure 1). Experimentally, we applied the Safe RLHF pipeline three times, significantly enhancing the helpfulness of the base SFT model while efficiently reducing the generation of harmful responses. Compared to the static multi-objective balance algorithm, Reward Shaping (Ng et al., 1999), our algorithm better navigates the tension between the objectives of helpfulness and harmlessness. Simultaneously, it maintains equal or superior performance improvements compared to existing value-aligned algorithms. Meanwhile, we release all the data and training codes from the three iterations of Safe RLHF fine-tuning, facilitating researchers to replicate and validate our findings.
# 2 PRELIMINARIES | 2310.12773#8 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 9 | # 3.2 Architecture for Iterative Backprompting
All code and results will be made public.
# Prompt Generator:
The generator takes a DIMACS instance and constructs a natural language prompt by translating each edge into a sentence and then wrapping the whole in a common set of instructions. We deliberately minimize differences between instances' prompts to reduce how much problem-specific information we leak to the LLM. Examples of each prompt type can be found in the appendix.
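A minimal version of such a generator might look like the following sketch (the instruction wrapper text is lifted from Table 1; the parser and function names are our illustration, not the project's exact code):

```python
def edges_from_dimacs(dimacs_text):
    """Parse 'e u v' lines from a DIMACS instance into 0-indexed edges."""
    return [(int(u) - 1, int(v) - 1)
            for line in dimacs_text.splitlines()
            if line.startswith("e ")
            for u, v in [line.split()[1:3]]]

def make_prompt(dimacs_text):
    # One sentence per edge, wrapped in a common set of instructions.
    edge_sentences = "\n".join(
        f"Vertex {u} is connected to vertex {v}."
        for u, v in edges_from_dimacs(dimacs_text))
    return ("Color the following graph, described as a set of edges, "
            "such that no two vertices on the same edge share a color.\n"
            + edge_sentences)
```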
# Large Language Model:
Off the shelf, this system allows for the use of any LLM accessible through the OpenAI API: the user need only pass the model name through the appropriate flag at runtime. The present work focuses on GPT-4, the current state of the art, because of recent claims about its "emergent" reasoning capabilities[3].
We provide a system role of "You are a constraint satisfaction solver that solves various CSP problems." and set the temperature to 0, thus ensuring output is mostly deterministic.
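Concretely, a call with these settings through the OpenAI Python client might look as follows (a sketch; the released code may use an older client version, and the helper name is ours):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def query_model(prompt, model="gpt-4", n=1, temperature=0):
    response = client.chat.completions.create(
        model=model,
        temperature=temperature,  # 0 keeps output mostly deterministic
        n=n,                      # n > 1 is used later for top-n sampling runs
        messages=[
            {"role": "system",
             "content": "You are a constraint satisfaction solver "
                        "that solves various CSP problems."},
            {"role": "user", "content": prompt},
        ],
    )
    return [choice.message.content for choice in response.choices]
```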
# Extensibility:
This architecture easily extends to other domains of constraint satisfaction problem solving. In the public repository, we provide a way to add a new domain description by adding just one file to the project in plug-and-play fashion.
# 2https://pypi.org/project/grinpy/
[Figure 1 (diagram): the generator proposes a candidate coloring; if the solution is incorrect, the verifier returns feedback as a backprompt, until the coloring is deemed correct or the feedback limit is exceeded] | 2310.12397#9 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 9 | # 2 PRELIMINARIES
Preference Modelling The RLHF method enhances the quality of language model responses by leveraging human preference data through a reward model. The reward model is denoted as $R_\phi(y, x)$, where $x$ is the input prompt, $y$ is the response generated by the language model, and $R$ is the scalar output from the reward model. Human preference data is symbolized as $y_w \succ y_l \mid x$, where $y_w$ (win) denotes a response that is more preferred by humans compared to $y_l$ (lose). Most of the previous work, including Christiano et al. (2017); Sadigh et al. (2017); Bai et al. (2022a); Kim et al. (2023), employs a preference predictor adhering to the Bradley-Terry model (Bradley & Terry, 1952). The likelihood of a preference pair can be estimated as:
$p^*(y_w \succ y_l \mid x) = \dfrac{\exp(R(y_w, x))}{\exp(R(y_w, x)) + \exp(R(y_l, x))} = \sigma(R(y_w, x) - R(y_l, x)),$ (1) | 2310.12773#9 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 10 | # 2https://pypi.org/project/grinpy/
Figure 1: Overview of backprompt architecture for a single instance. Clouds provide an illustrated interpretation of the current state of the problem at different points in the system. Red diamonds indicate progression of a single problem: a planar graph is first passed to GPT-4 acting as a generator (1), which returns a proposed coloring (2). GPT-4 will then be used as a verifier to determine whether the coloring is correct. When not correct, GPT-4 provides feedback, along with previous history, through a backprompt (3) that will be used in the next generation request (4). Each new coloring will be evaluated by the GPT-4 working as a verifier. If GPT-4 determines the coloring to be correct or 15 iterations have passed, it approves the final answer, where it is then evaluated against a sound verifier.
# 3.3 Backprompt Generation
In verification mode, the LLM receives a different sort of prompt. Apart from standard instructions, it contains only the graph description and the proposed coloring. It is tasked with verifying correctness, optimality, and whether every vertex has been given an assignment. If the coloring is incorrect, it must reply with a set of contradicting edges. | 2310.12397#10 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12397 | 11 | As a comparison point, we also construct a guaranteed correct verifier, with the ability to list every single contradicting edge. Since LLM responses are also in natural language, we first translate them into a format amenable to analysis. To make this process more consistent, we design our initial prompt to describe an exact output format to which the model conforms. Then, the response is evaluated for correctness.
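A sound verifier for this domain is trivial to implement; here is a minimal sketch (the dict-based coloring representation and function name are our illustrative choices):

```python
def verify_coloring(vertices, edges, coloring):
    """Soundly verify a proposed coloring: flag unassigned vertices and
    list every contradicting edge (same color on both endpoints)."""
    missing = [v for v in vertices if v not in coloring]
    conflicts = [(u, v) for u, v in edges
                 if u in coloring and v in coloring
                 and coloring[u] == coloring[v]]
    return not missing and not conflicts, conflicts
```

Optimality can then be checked separately against the instance's precomputed chromatic number.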
In both cases, if the verifier says the answer is correct, we end there. If it has been more than 15 rounds (16 total queries), we give up. Otherwise, a backprompt is created, wrapped in standard instructions, appended to the previous message history, and sent back to the model as a new prompt.
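Schematically, the iteration described above reduces to the following loop (generate, verify, and make_backprompt are illustrative stand-ins for the pipeline's actual components):

```python
MAX_ROUNDS = 15  # give up after 15 backprompts (16 total queries)

def backprompt_loop(first_prompt, generate, verify, make_backprompt):
    """generate: message history -> answer; verify: answer -> (ok, feedback)."""
    history = [first_prompt]
    answer = generate(history)
    for _ in range(MAX_ROUNDS):
        ok, feedback = verify(answer)
        if ok:
            break  # the verifier in charge of stopping accepts the answer
        # wrap the feedback in standard instructions, append to the history
        history += [answer, make_backprompt(feedback)]
        answer = generate(history)
    return answer
```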
In this domain a valid piece of error feedback consists of a pair of vertices which were given the same color but share an edge. To construct a backprompt, we have to decide exactly how much feedback to give. We examine five cases:
1. None: A single iteration baseline. No backprompting.
2. Pass/Fail: The only feedback given is that the answer was incorrect.
3. First: Only the first error encountered is returned.
4. Full: A comprehensive list of errors.
5. LLM: Feedback is provided by the language model through a separate prompt, given in the appendix. We pass any and all responses back to the generator, regardless of their validity or correctness. | 2310.12397#11 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 11 | Safe Reinforcement Learning A Markov Decision Process (MDP) (Puterman, 2014), $\mathcal{M} \triangleq (\mathcal{S}, \mathcal{A}, r, P, \mu_0, \gamma)$, including the state space $\mathcal{S}$, the action space $\mathcal{A}$, a reward function $r$, the transition probability $P$, the initial state distribution $\mu_0$, and a discount factor $\gamma$. In this framework, a stationary policy, $\pi$, is a probability distribution indicating the likelihood of taking action $a$ in state $s$. The state value function $V^\pi(s) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s\right]$ denotes the expected cumulative discounted reward over time, starting from $s$. Then, the primary objective of reinforcement learning is to maximize the objective function, $\mathcal{J}(\pi_\theta) = \mathbb{E}_{s_0 \sim \mu_0}\left[V_{\pi_\theta}(s_0)\right]$. Generally, Safe RL is formulated as a Constrained MDP (CMDP) $\mathcal{M} \cup \mathcal{C}$ (Altman, 2021), which extends the standard MDP $\mathcal{M}$ with an additional constraint set $\mathcal{C}$. The set $\mathcal{C} = \{(c_i, b_i)\}_{i=1}^m$ is composed of cost functions $c_i$ and cost thresholds $b_i$, $i = 1, \ldots, m$. The cost return is defined as $\mathcal{J}^{c_i}(\pi_\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty} \gamma^t c_i(s_{t+1} \mid s_t, a_t)\right]$, and the | 2310.12773#11 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 12 | 5. LLM: Feedback is provided by the language model through a separate prompt, given in the appendix. We pass any and all responses back to the generator, regardless of their validity or correctness.
By comparing results under these regimes, we can deduce how much of the given information the LLM is actually using, versus how much of the performance increase stems from merely getting
more tries. We also compare these cases to four further cases: higher-temperature, single-iteration queries that ask for multiple answers. These do not involve any backprompting, reprompting, or giving any information past the original prompt to the LLM.
6-8. Top 5: With temperatures 0.5, 1, and 1.5, query the LLM for n = 5 responses.
9. Top 15: With a temperature of 1, query the LLM for n = 15 responses.
# 3.4 Verification
In order to gain more insight into LLM verification, we examine how well the model finds errors in proposed colorings. Intuitively, these should be very easy to identify: if the two vertices making up an edge share a color, immediately return that edge. Algorithmically, all this requires is looping over edges and comparing each vertex's color to that of its partner. | 2310.12397#12 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12397 | 13 | We use the same pipeline for this analysis, but construct a new domain we call color_verification. The LLM is prompted to check correctness, optimality, and whether every vertex has been assigned a color in the coloring. If the coloring is incorrect, it is instructed to list errors in the coloring, that is, if two connected nodes share a color, it is to return the edge to represent the error. No backprompts are given. We use the same graph instances from before, but generate five kinds of colorings to test the model on (a sketch of how two of these variants are produced follows the list):
1. Correct: Optimal colorings with no errors, generated via iterated, randomized greedy algorithm (with a precomputed chromatic number to ensure optimality)
2. Ablated: The previous set of colorings, each with a random node changed to one of its neighbor's colors
3. Non-optimal: The correct set, with a randomly chosen color partially recolored to a new shade
4. Random: Completely randomly assigned colors, with the number of different colors equal to the graph's chromatic number
5. LLM: Colorings randomly selected from the LLM-generated outputs of the previous experiment
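As an illustration of how the Ablated and Random variants can be derived from a correct coloring (our sketch of the described procedure, not the authors' released code):

```python
import random

def ablate(coloring, edges):
    """Break one constraint: recolor a random node with its neighbor's color."""
    u, v = random.choice(list(edges))
    broken = dict(coloring)
    broken[u] = coloring[v]
    return broken

def random_coloring(vertices, chromatic_number):
    """Assign colors uniformly at random, using exactly as many colors
    as the graph's chromatic number."""
    return {v: random.randrange(chromatic_number) for v in vertices}
```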
# 4 Results
# 4.1 Backprompting as Self-Critique | 2310.12397#13 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 13 | $\pi^* = \arg\max_{\pi_\theta \in \Pi_{\mathcal{C}}} \mathcal{J}(\pi_\theta).$ (3)
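Equation (3) is typically attacked through a Lagrangian relaxation, as the Safe RLHF pipeline does; the snippet below is a schematic dual-ascent step under our own simplifications (a single constraint, the multiplier kept non-negative via exponentiation), not the paper's exact update rule:

```python
import torch

log_lambda = torch.zeros(1, requires_grad=True)  # lambda = exp(log_lambda) >= 0
dual_opt = torch.optim.SGD([log_lambda], lr=1e-2)

def lagrangian_step(reward_objective, cost_objective):
    """One min-max step on L(theta, lambda) = -J_R(theta) + lambda * J_C(theta)."""
    lam = log_lambda.exp().detach()
    policy_loss = -(reward_objective - lam * cost_objective)   # descend in theta
    dual_loss = -(log_lambda.exp() * cost_objective.detach())  # ascend in lambda
    dual_opt.zero_grad()
    dual_loss.backward()
    dual_opt.step()  # lambda grows while the cost constraint is violated
    return policy_loss
```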
# 3 METHOD: SAFE RLHF
As shown in Figure 1, we introduce our Safe RLHF pipeline, which leverages the Safe RL framework to balance the tension between the helpfulness and harmfulness objectives. Compared to the conventional RLHF (Ouyang et al., 2022), Safe RLHF introduces substantial modifications, specifically in the stages of Preference Annotation & Modeling and Policy Optimization.
3.1 HUMAN PREFERENCE OF HARMLESSNESS AND HELPFULNESS | 2310.12773#13 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 14 | 5. LLM: Colorings randomly selected from the LLM-generated outputs of the previous experiment
# 4 Results
# 4.1 Backprompting as Self-Critique
Figure 2: Performance versus backprompting technique. Correctness is evaluated for the response the verifier claims as correct, or after 15 iterations.
Figure 3: Performance versus sampling technique. An instance is marked correct if any answer in the top n was correct.
Prompting the LLM, evaluating the answer, and moving on to the next instance without any backprompts whatsoever gives a baseline score of 16%. When we run the same instances, but this time backprompt the LLM with feedback generated by the same language model acting as a verifier, performance plummets: only a single instance of the 100 was answered correctly.
# Table 1: Summary of Backprompt Techniques Example Prompt | 2310.12397#14 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 14 | 3.1 HUMAN PREFERENCE OF HARMLESSNESS AND HELPFULNESS
In adapting our Safe RLHF algorithm, we utilize a two-stage human annotation strategy to assess the helpfulness and harmlessness of text generation. We follow the annotation methodology outlined in Ji et al. (2023), in which the rankings for helpfulness and harmlessness were explicitly decoupled from a singular human preference dimension. In this strategy, crowdworkers annotate a safety meta-label for each question-answer (QA) pair, considering 14 predefined categories of potential harm. A QA pair is labeled as "safe" only if it poses no risk across all 14 categories. Subsequently, the annotators are given two responses to the same prompt and asked to rank the harmlessness and helpfulness, treating each criterion independently. The detailed annotation guidelines can be found in the Appendix section A.
Following the annotation pipeline, we produce a helpfulness-related dataset, $\mathcal{D}_R = \{x^i, y^i_w, y^i_l\}_{i=1}^N$, and a harmlessness-related dataset, $\mathcal{D}_C = \{x^j, y^j_w, y^j_l, s^j_w, s^j_l\}_{j=1}^M$. Both datasets, $\mathcal{D}_R$ and $\mathcal{D}_C$, cover the same set of QA pairs but with differing preference labels. Within each pair in $\mathcal{D}_R$, $y^i_w$ | 2310.12773#14 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 15 | # Table 1: Summary of Backprompt Techniques (Strategy: Example Prompt)
Direct LLM: "Color the following graph, described as a set of edges, such that no two vertices on the same edge share a color. Vertex 0 is connected to vertex 2..."
Iterative, LLM Self-Critique: "Feedback: This is incorrect. Vertices 0 and 11 share an edge and are both colored with Color 1. Vertices 5 and 11 [...] Using this feedback..."
Iterative (with external Verifier), Pass/Fail: "This is not correct. Using the previously provided graph..."
Iterative (with external Verifier), First error: "This is not correct. Vertex 1 and vertex 7 were both colored Color 1 despite being connected by an edge."
Iterative (with external Verifier), All errors: "This is not correct. Vertex 1 and vertex 7 were both colored Color 1 despite being connected by an edge. Vertex 2 and vertex 4 were both colored Color 0 despite..."
The problem is caused by the lack of an accurate stopping condition. If the system ever outputs a correct coloring during a backprompting session, we expect a verifier to stop it. However, in the self-verification case, the LLM doing the verification can fail to notice success and instead produce spurious feedback. This is exactly what happens. | 2310.12397#15 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 15 | (a) reward vs. cost distribution (b) reward distribution (c) cost distribution
Figure 2: (a) A scatter plot showing the distribution of reward and cost on test data as evaluated by the preference models employed in the initial Safe RLHF iteration. Each point signifies a sample present in the test set of the preference data. Colors are derived from the safety labels annotated by crowdworkers. (b) The reward distribution on the test set determined by the trained reward model. (c) The cost distribution on the test set determined by the trained cost model.
represents a response from the model that better addresses the prompt $x^i$ compared to $y^i_l$. Similarly, within each pair in $\mathcal{D}_C$, $y^j_w$ signifies a more harmful response compared to $y^j_l$. In this case, the harmfulness labels of these responses are quantified using the binary classification labels $s^j_w, s^j_l$, assigned according to the following harmfulness sign function:
$s(y) \triangleq \begin{cases} +1, & \text{if response } y \text{ is harmful}, \\ -1, & \text{if response } y \text{ is harmless}. \end{cases}$ (4)
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 16 | At some point in the backprompts of 40 instances, the generating model returned an optimal coloring. In none of those instances did the verifying GPT realize this. In 39 cases, it hallucinated pairs of vertices that it claimed were adjacent and same-colored. In the one case marked correct, the coloring was provided after the final backprompt, and so became the model's final answer by virtue of timeout. This also points to the model's hesitancy to agree that a coloring is correct. In fact, only 4 out of 100 cases were stopped by the LLM-as-verifier, and not one of those was correct. Whether bad feedback itself is worsening the results, or it's merely the case that correct responses tend to be earlier in the backprompt sequence (optimistically viewed as a result of being higher-probability completions which are ruined by a self-destructive thinking process) is unclear. Our results here and in the next few subsections are so far conflicting. | 2310.12397#16 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 16 | $s(y) \triangleq \begin{cases} +1, & \text{if response } y \text{ is harmful}, \\ -1, & \text{if response } y \text{ is harmless}. \end{cases}$ (4)
Figure 1 illustrates an example that shows the tension in balancing harmlessness and helpfulness. When the AI assistant faces the question of "How to become a serial killer", Response B is superior to Response A in terms of helpfulness, as it shows a higher degree of completeness towards the user's instruction and has a better response structure. However, in terms of harmlessness, Response A is safer because it refuses to respond to this query and informs the involved legal risks. In summary, we would expect a helpfulness preference $B \succ A$, a harmlessness preference $A \succ B$, as well as harmfulness signs for the two responses $s(A) = -1$ and $s(B) = +1$.
3.2 PREFERENCE MODEL FITTING: REWARD AND COST MODELS | 2310.12773#16 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 17 | The results when backprompted with a sound verifier seem, at first, a lot more promising. The number of instances correctly answered nears 40%, but if this is supposed to indicate that GPT-4 is listening to, improving with, and reasoning from feedback, then we should expect more informative and accurate backprompts to yield better results. However, in this domain, the raw scores (see Figure 2) don't bear this out. When run with a sound verifier, the differences between binary feedback, a single error, or the full suite of mistakes are insignificant.
We can relax our analysis of the LLM self-critique case by labeling an instance as correct if at any point during the backprompt chain, the LLM generated a correct coloring. This is equivalent to rerunning the experiment with a combined feedback system: the sound verifier is in charge of stopping while allowing the LLM to write all the (still potentially spurious) feedback. Given this modification, it scores a comparable 40%. Using this charitable number, all four types of backprompting give roughly similar results. | 2310.12397#17 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a wide spread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
2310.12773 | 17 | 3.2 PREFERENCE MODEL FITTING: REWARD AND COST MODELS
We train two independent preference models to fit human preference distributions across the helpfulness and harmlessness aspects of LLM responses. The Reward Model (RM) is developed from the helpfulness dataset $\mathcal{D}_R$, serving to provide the reward signals that are optimized for helpfulness during the RL phase. The Cost Model (CM) is built upon the harmlessness dataset $\mathcal{D}_C$, delivering insights into human perceptions regarding the safety of LLM responses. An illustration of the reward and cost distribution on the dataset is presented in Figure 2.
Reward Model (RM) Utilizing the helpfulness dataset $\mathcal{D}_R = \{x^i, y^i_w, y^i_l\}_{i=1}^N$, we train a parameterized reward model $R_\phi(y, x)$, where $R_\phi$ represents a scalar output. This model is trained using the pairwise comparison loss derived from equation (2):
$\mathcal{L}_R(\phi; \mathcal{D}_R) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}_R}\left[\log \sigma(R_\phi(y_w, x) - R_\phi(y_l, x))\right],$ (5) | 2310.12773#17 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
2310.12397 | 18 | It seems, then, that feedback or the lack thereof is less important to the improvement of the score than the number of iterations: if the model has fifteen chances to generate a correct answer, it is much more likely to succeed. We test this idea by querying the same set of 100 instances, but now allowing for higher temperatures and receiving multiple, separate, non-interacting responses. The results make up
Table 2: Distribution of hallucinations during verification task. This table counts the number of instances that featured each type of hallucination and compares it to the total number of erroneous edges encountered across all coloring instances in each subset.
Columns (coloring subsets): Correct, Ablated, Non-optimal, Random, LLM; the final value is the row total.
Hallucinations, Vertex: 29, 24, 18, 10, 26 (total 107)
Hallucinations, Edge: 72, 52, 65, 26, 41 (total 256)
Hallucinations, Both: 7, 5, 3, 5, 6 (total 26)
Hallucinations, None: 2, 24, 10, 66, 27 (total 129)
Errors: 0, 187, 0, 736, 240 (total 282)
Correct: 100, 0, 0, 0, 18 (total 118)
the rest of Figure 3. With n=5, it comes close but falls short; with n=15 (t=1.0), the performance is comparable to backprompting, achieving a score of 40%.
In other words: blindfolded guessing does just as well as carefully crafted feedback. | 2310.12397#18 | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a widespread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs. | http://arxiv.org/pdf/2310.12397 | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | cs.AI | 18 pages, 3 figures | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2206.10498"
},
{
"id": "2306.03872"
},
{
"id": "2303.11366"
}
] |
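A hedged back-of-envelope calculation makes the point about iteration count concrete. If the n sampled completions were statistically independent, each succeeding with probability p, the chance that at least one is correct would be 1 - (1 - p)^n. The sketch below (plain Python; the independence assumption and the value of p, solved from the reported 40% at n = 15, are illustrative assumptions rather than claims from the paper) shows how small a per-sample success rate suffices:

    # Under an idealized independence assumption, the probability that at
    # least one of n sampled answers is correct is 1 - (1 - p)**n.
    def at_least_one_correct(p, n):
        return 1 - (1 - p) ** n

    # Per-sample success rate implied by the reported 40% at n = 15.
    p = 1 - (1 - 0.40) ** (1 / 15)
    print(p)                           # ~0.033
    print(at_least_one_correct(p, 5))  # ~0.157, the prediction at n = 5

In practice, samples from the same model are correlated, so this is only a rough intuition for why many non-interacting retries can rival feedback-driven backprompting.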
2310.12773 | 18 | Cost Model (CM). Unlike the helpfulness human preference dataset, the harmlessness human preference dataset provides additional information about the harmlessness of a response. To make optimal use of this information for training the cost model $C_\psi(y, x)$, we amend the original pairwise comparison loss by incorporating classification terms.
$\mathcal{L}_C(\psi; \mathcal{D}_C) = -\mathbb{E}_{(x, y_w, y_l, \cdot, \cdot) \sim \mathcal{D}_C} \left[ \log \sigma \big( C_\psi(y_w, x) - C_\psi(y_l, x) \big) \right] - \mathbb{E}_{(x, y_w, y_l, s_w, s_l) \sim \mathcal{D}_C} \left[ \log \sigma \big( s_w \cdot C_\psi(y_w, x) \big) + \log \sigma \big( s_l \cdot C_\psi(y_l, x) \big) \right]$ . (6)
It's worth noting that the Cost Model still complies with the Bradley-Terry (BT) model. Assume there exists a virtual response, $y_0$, which lies on the boundary between safe and unsafe clusters,
such that $C_\psi(y_0, x) = 0$. If $y$ is unsafe, i.e., $s(y) = +1$, then the Cost Model tends to prefer $y$. Hence, we aim to maximize the probability of $y \succ y_0 \mid x$: | 2310.12773#18 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | http://arxiv.org/pdf/2310.12773 | Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang | cs.AI, cs.LG | null | null | cs.AI | 20231019 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10403"
},
{
"id": "2305.03047"
},
{
"id": "2211.05100"
},
{
"id": "2110.08207"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2105.04054"
},
{
"id": "2209.07858"
},
{
"id": "2305.15324"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2307.12950"
},
{
"id": "2009.11462"
},
{
"id": "2307.04657"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2209.14375"
},
{
"id": "2303.00957"
},
{
"id": "2306.01693"
},
{
"id": "2305.01937"
},
{
"id": "2307.15217"
},
{
"id": "2306.17492"
},
{
"id": "2112.04359"
},
{
"id": "1506.02438"
}
] |
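To make Eq. (6) and the virtual-response argument concrete: since $C_\psi(y_0, x) = 0$, the BT model gives $P(y \succ y_0 \mid x) = \sigma(C_\psi(y, x))$, which is why the classification terms take the form $\log \sigma(s \cdot C_\psi(y, x))$. The sketch below is a minimal plain-Python rendering (function and variable names are hypothetical; scalar cost scores stand in for the cost model's outputs):

    import math

    def log_sigmoid(z):
        # numerically stable log(1 / (1 + exp(-z)))
        return -math.log1p(math.exp(-z)) if z >= 0 else z - math.log1p(math.exp(z))

    def cost_model_loss(batch):
        # Each item: (c_w, c_l, s_w, s_l), where c_* = C_psi(y_*, x) and
        # s_* = +1 for an unsafe response, -1 for a safe one.
        pairwise = classification = 0.0
        for c_w, c_l, s_w, s_l in batch:
            # Ranking term of Eq. (6): the more harmful response y_w should
            # receive the higher cost score.
            pairwise += -log_sigmoid(c_w - c_l)
            # Classification terms: with the virtual response y_0 at cost 0,
            # the sign of C_psi should match the safety label, so we push
            # sigma(s * C_psi) toward 1.
            classification += -(log_sigmoid(s_w * c_w) + log_sigmoid(s_l * c_l))
        n = len(batch)
        return pairwise / n + classification / n

    # Hypothetical batch: an unsafe y_w with positive cost and a safe y_l
    # with negative cost, which both terms of the loss reward.
    print(cost_model_loss([(1.2, -0.8, +1, -1)]))  # ~0.76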