id (stringlengths 12–15) | title (stringlengths 8–162) | content (stringlengths 1–17.6k) | prechunk_id (stringlengths 0–15) | postchunk_id (stringlengths 0–15) | arxiv_id (stringlengths 10) | references (listlengths 1) |
---|---|---|---|---|---|---|
2309.10691#67 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | # A LIMITATIONS AND FUTURE WORK We simulate the natural language feedback of human users with GPT-4. Although a human experiment shows that this simulated feedback is similar to human-written feedback, it might not cover all possible responses from real human users and may not suitably simulate every aspect of human feedback, particularly in tasks (e.g., policy-making) that involve nuanced judgments of human values. While the focus of our work lies on LLMs' in-context multi-turn interaction, we have yet to explore the potential of directly leveraging language feedback for model training and improvement, similar to Wang et al. (2023a), which we leave for future work. Furthermore, our metrics may not fully assess the quality of the interaction process beyond outcomes. For example, models that repetitively guess to get higher scores should be penalized. Despite our best efforts to ensure our benchmark contains challenging and comprehensive tasks, there is still a wide range of tools (Qin et al., 2023c) and real-world use cases (e.g., web browsing (Deng et al., 2023b), operating systems (Liu et al., 2023d)) that MINT does not cover. Instead of making this benchmark a one-time effort, we hope to continuously improve it by integrating more challenging tasks and tools as LLMs get better. # B DETAILS OF HUMAN EVALUATION We perform two stages of human annotation using the Potato annotation interface (Pei et al., 2022). In the first stage, we ask two human annotators (A and B) to provide language feedback for a trajectory. We randomly sample 2 instances of interaction trajectories per task from a subset of 8 evaluated LLMs to maximize diversity (in Tab. 3). We filter out task instances that succeed in the first turn (i.e., no feedback is needed), resulting in 113 interaction trajectories for annotation. We randomly select a turn for each trajectory and remove all interactions and GPT-4-generated feedback after that turn. We randomly divide the 113 instances into two subsets and assign each subset to one human annotator. Given the previous interaction history, human annotators A and B are asked to provide a turn of natural language feedback as if interacting with ChatGPT. Annotating each piece of feedback takes 96 seconds on average. | 2309.10691#66 | 2309.10691#68 | 2309.10691 | [
"2308.12950"
] |
2309.10691#68 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | According to the US Bureau of Labor Statistics (2023), U.S. private non-farm workers earn an average hourly wage of about $33.82 (Aug 2023), which translates to an annotation cost of roughly $90 per 100 turns of feedback (a quick arithmetic check is sketched after this record). In the second stage, we ask two different human annotators (C and D) to compare human-annotated feedback (from the first stage) and GPT-4-generated feedback (from the original trajectory) on two dimensions: helpfulness and human-likeness. Specifically, helpfulness means whether the feedback helps the LLM succeed in the task, while human-likeness focuses on the literal similarity between the feedback and typical human usage. For each dimension, we ask them to determine which feedback is better (i.e., more helpful or more human-like) or whether both are equally good. | 2309.10691#67 | 2309.10691#69 | 2309.10691 | [
"2308.12950"
] |
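The $90-per-100-turns figure above follows directly from the stated 96-second average annotation time and the $33.82 hourly wage. A minimal back-of-envelope check (the variable names are ours, not from the paper):

```python
# Back-of-envelope check of the annotation cost quoted above.
SECONDS_PER_FEEDBACK = 96   # average annotation time per feedback turn (from the paper)
HOURLY_WAGE_USD = 33.82     # U.S. private non-farm average hourly wage, Aug 2023

hours_per_100_turns = 100 * SECONDS_PER_FEEDBACK / 3600
cost_per_100_turns = hours_per_100_turns * HOURLY_WAGE_USD
print(f"{hours_per_100_turns:.2f} hours -> ${cost_per_100_turns:.2f} per 100 turns")
# Roughly 2.67 hours -> $90.19 per 100 turns, matching the ~$90 figure.
```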
2309.10691#69 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | C ABLATION STUDY C.1 HOW DO FEEDBACK VARIATIONS IMPACT FEEDBACK QUALITY (Δfeedback)? To gain deeper insights into the effects of various feedback settings on the performance of language models, we perform an ablation study on feedback by controlling feedback informativeness and frequency. See §F.4.2 for detailed implementation. We present the results in Tab. A.6. C.1.1 INFORMATIVENESS We define informativeness along two dimensions: (1) whether the generated feedback is conditioned on the ground-truth solution (w/ GT) or not (w/o GT, the default setting); (2) whether the feedback provided to the LLM is textual (the default setting) or binary (i.e., good vs. bad). Conditioned on Ground-truth Information In Tab. A.6, we find that adding ground-truth information into the feedback generator improves the quality of feedback for reasoning and code generation. | 2309.10691#68 | 2309.10691#70 | 2309.10691 | [
"2308.12950"
] |
2309.10691#70 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | However, this trend does not hold for decision-making, where using ground-truth information for feedback leads to a performance drop (−8.95%) compared to no feedback. We hypothesize that this discrepancy can be attributed to the unique nature of decision-making tasks.

Table A.6: Ablation of different factors (informativeness, frequency) that impact feedback quality, using gpt-3.5-turbo-0613 as the evaluated LLM and gpt-4-0613 to simulate language feedback.

Setup | Reasoning | Decision-Making | Code Generation | Micro Average
---|---|---|---|---
w/o feedback | 36.25 | 41.79 | 29.41 | 35.93
+feedback, textual, w/o GT, dense | +13.44 | +24.63 | +9.56 | +15.09
*Informativeness of Feedback* | | | |
+feedback, w/ GT | +16.87 | −8.95 | +18.38 | +11.36
Δ (+GT feedback) | +3.43 | −33.58 | +8.82 | −3.73
+feedback, binary | +2.19 | +5.97 | +0.74 | +2.71
Δ (−textual feedback) | −11.25 | −18.66 | −8.82 | −12.38
*Frequency of Feedback* | | | |
+feedback, sparse | +5.31 | +4.48 | +0.74 | +4.07
Δ (−feedback frequency) | −8.13 | −20.15 | −8.82 | −11.02

(A consistency check of the Δ rows against the dense textual baseline is sketched after this record.) Unlike other tasks | 2309.10691#69 | 2309.10691#71 | 2309.10691 | [
"2308.12950"
] |
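The Δ rows in Table A.6, as reconstructed above, are simply the difference between each ablated setting and the default "+feedback, textual, w/o GT, dense" setting. The short script below makes that relationship explicit; the numbers are copied from the table and the check itself is ours, not part of the MINT codebase:

```python
# Columns: Reasoning, Decision-Making, Code Generation, Micro Average.
baseline = [13.44, 24.63, 9.56, 15.09]  # +feedback, textual, w/o GT, dense
ablations = {
    "+feedback, w/ GT":  [16.87, -8.95, 18.38, 11.36],
    "+feedback, binary": [2.19, 5.97, 0.74, 2.71],
    "+feedback, sparse": [5.31, 4.48, 0.74, 4.07],
}
for name, scores in ablations.items():
    delta = [round(s - b, 2) for s, b in zip(scores, baseline)]
    print(name, "Δ vs. dense textual feedback:", delta)
# "+feedback, w/ GT" yields [3.43, -33.58, 8.82, -3.73], matching the table's Δ row.
```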
2309.10691#71 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | with definitive solutions, decision-making tasks involve generating action trajectories as solutions (e.g., §F.6). When the initial actions of the model deviate from the ground-truth trajectory, comparing its actions with the ground-truth actions could confuse the feedback-provider LLM, resulting in suboptimal feedback quality. Provide Binary Feedback We find that providing the LLM with binary feedback (i.e., a binary label of good or bad) instead of more informative textual feedback (i.e., a superset of binary feedback) inevitably hurts performance on all tasks. However, we observe that binary feedback alone provides performance benefits compared to no feedback, especially for decision-making (+5.97), where early actions can profoundly impact final task success. In these cases, providing step-wise binary feedback can help LLM agents terminate bad initial actions and backtrack, leading to a higher task success rate. | 2309.10691#70 | 2309.10691#72 | 2309.10691 | [
"2308.12950"
] |
2309.10691#72 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | C.1.2 FREQUENCY We investigate the role of feedback frequency: whether we provide feedback to the LLM at every step (dense) or only when the LLM agent proposes a solution (sparse, i.e., when the LLM thinks it has finished the task). In Tab. A.6, as expected, we find that changing from dense to sparse feedback hurts performance (−11.02 on average). However, we observe a positive performance gain on all tasks, similar to binary feedback (§C.1.1), suggesting that sparse feedback alone is valuable. Note that when evaluating in the sparse feedback setting, MINT is equivalent to the setting of Reflexion feedback (Shinn et al., 2023). # D DATASET FILTERING AND DOWN-SAMPLING The dataset curation can be summarized in three steps: # Collect data from the test sets of the 8 different datasets shown in Table 1. | 2309.10691#71 | 2309.10691#73 | 2309.10691 | [
"2308.12950"
] |
2309.10691#73 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | For HotpotQA we reserve the first 500 instances. Then, we format dataset prompts as ("Task:", task description, solution range). For the solution range variable: in GSM8K it is set to integer, and in TheoremQA it is set according to the instance requirement (float, integer, list of integers, option). For the other datasets, since they do not have a specific solution range requirement, we set solution range to an empty string (a minimal sketch of this formatting is given after this record). An example from TheoremQA is as follows: Task: Let M be the inverse of the group element ((3, 5), (4, 6)) in Z_7. What is M[0][1]? | 2309.10691#72 | 2309.10691#74 | 2309.10691 | [
"2308.12950"
] |
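A minimal sketch of the prompt formatting described above. The helper function and the lookup table are ours, introduced purely for illustration; the actual MINT implementation may differ:

```python
# Hypothetical helper: format a task prompt as ("Task:", task_description, solution_range).
SOLUTION_RANGE = {
    "gsm8k": "Output format required: integer.",
    # TheoremQA ranges are set per instance (float, integer, list of integers, option);
    # other datasets use an empty string (no explicit solution range).
}

def format_prompt(task_description: str, dataset: str, instance_range: str = "") -> str:
    solution_range = instance_range or SOLUTION_RANGE.get(dataset, "")
    return f"Task: {task_description}\n{solution_range}".rstrip()

print(format_prompt(
    "Let M be the inverse of the group element ((3, 5), (4, 6)) in Z_7. What is M[0][1]?",
    "theoremqa",
    "Output format required: integer.",
))
```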
2309.10691#74 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | Output format required: integer. In this example, the task description is "Let M be the inverse of the group element ((3, 5), (4, 6)) in Z_7. What is M[0][1]?" and the solution range is "Output format required: integer." Table A.7: The average number of interaction turns in which an LLM failed because it did not follow the instructed format (i.e., did not produce an <execute> or <solution> tag as instructed, under the k = 5, no-feedback setting, §2.1) vs. the average number of total turns. All LLMs for which such invalid actions make up more than 20% of total turns are bolded. (A sketch of this tag check is given after this record.) | 2309.10691#73 | 2309.10691#75 | 2309.10691 | [
"2308.12950"
] |
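Table A.7 counts turns in which a model produced neither an <execute> nor a <solution> tag. A minimal sketch of such a format check (ours, not the benchmark's code):

```python
import re

# A turn counts as a valid action if it contains at least one properly closed
# <execute>...</execute> or <solution>...</solution> block.
TAG_PATTERN = re.compile(r"<(execute|solution)>.*?</\1>", re.DOTALL)

def is_valid_action(turn_text: str) -> bool:
    return bool(TAG_PATTERN.search(turn_text))

print(is_valid_action("The answer is <solution> 42 </solution>."))  # True
print(is_valid_action("[PYTHON]\nprint('hi')\n[/PYTHON]"))          # False (wrong tag)
```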
2309.10691#75 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | Evaluated LLM Size Type Decision Micro Average 7B Base SIFT Open-source LLM 3.96 / 4.99 0.46 / 4.32 0.11 / 4.17 0.10 / 4.33 2.38 / 4.38 0.10 / 4.65 2.71 / 4.66 0.30 / 4.40 CodeLlama 13B Base SIFT 0.50 / 4.55 0.16 / 4.66 0.00 / 3.36 0.01 / 3.77 0.00 / 4.93 0.04 / 4.77 0.27 / 4.36 0.10 / 4.48 34B Base SIFT 0.19 / 4.21 0.23 / 3.68 0.00 / 3.37 0.04 / 3.83 0.05 / 4.77 1.09 / 3.27 0.11 / 4.15 0.39 / 3.62 7B Base RLHF 0.59 / 4.62 0.75 / 4.03 0.00 / 3.53 1.13 / 4.40 0.25 / 4.96 0.72 / 3.79 0.38 / 4.45 0.83 / 4.06 LLaMA-2 13B Base RLHF 0.49 / 4.75 0.29 / 3.71 0.01 / 3.40 0.00 / 4.54 0.13 / 4.96 0.10 / 3.02 0.30 / 4.49 0.18 / 3.74 70B Base 0.19 / 4.19 0.00 / 3.31 0.16 / 4.49 0.14 / 4.06 Lemur-v1 70B Base SIFT 0.29 / 4.25 0.35 / 3.88 0.00 / 3.28 0.01 / 3.34 0.26 / 4.33 0.03 / 4.07 0.22 / 4.05 0.20 / 3.80 Vicuna-v1.5 7B SIFT 13B SIFT 0.26 / 4.64 0.08 / 4.80 0.06 / 3.54 0.49 / 4.66 0.02 / 4.78 0.07 / 4.90 0.16 / 4.42 0.17 / 4.79 chat-bison-001 claude-2 claude-instant-1 gpt-3.5-turbo-0613 gpt-4-0613 - - - - - - - - - - Closed-source LLM 2.27 / 3.84 0.02 / 1.86 0.06 / 2.81 0.50 / 4.18 0.04 / 3.11 0.10 / 4.18 0.01 / 3.51 0.00 / 3.91 0.00 / 3.87 0.00 / 2.87 4.62 / 4.87 0.00 / 2.24 0.02 / 3.76 0.07 / 4.26 0.00 / 3.42 2.32 / 4.16 0.02 / 2.32 0.04 / 3.28 0.29 / 4.13 0.02 / 3.13 | 2309.10691#74 | 2309.10691#76 | 2309.10691 | [
"2308.12950"
] |
2309.10691#76 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | # Keeping instances that require multi-turn interaction. • We first remove multiple-choice tasks with fewer than 4 options. These tasks are primarily from the MMLU and TheoremQA datasets. • For MMLU and MATH, since their test sets are large and contain various classes of tasks (e.g., MATH has algebra, geometry, pre-calculus), we first roughly filter out classes that do not need interaction (e.g., MMLU has a "philosophy" domain that does not need much interaction but only requires some basic knowledge about philosophy). We do this by picking N instances from each class, running them with gpt-3.5-turbo-0613, and excluding classes whose average number of interaction turns across instances is less than k. For MATH we set N = 100 and k = 3.5; for MMLU we set N = 20 and k = 2.5. Remaining classes of MATH: Intermediate Algebra, Precalculus, Algebra, Geometry, Number Theory. Remaining classes of MMLU: world religions test, virology test, college mathematics test, astronomy test, college physics test, high school chemistry test, global facts test, high school mathematics test, formal logic test. | 2309.10691#75 | 2309.10691#77 | 2309.10691 | [
"2308.12950"
] |
2309.10691#77 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | • We run all remaining data with gpt-3.5-turbo-0613 with a turn budget of k = 5 and no feedback, and exclude those instances with k ≤ 2. # Stratified sub-sampling for efficient evaluation. After cleaning the data, we want to maintain data difficulty and balance different types of tasks while sub-sampling further. We stratify the instances based on the dataset and on whether gpt-3.5-turbo-0613 has completed it (i.e., 8 × 2 = 16 strata). For each stratum we set a different proportion of instances to be preserved: p_alfworld = 1, p_mbpp = p_humaneval = 0.5, p_gsm8k = p_hotpotqa = p_theoremqa = 0.2, p_MMLU = 0.1, p_MATH = 0.05 (a sampling sketch is given after this record). | 2309.10691#76 | 2309.10691#78 | 2309.10691 | [
"2308.12950"
] |
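A sketch of the stratified down-sampling step with the stated per-dataset proportions. The instance schema (the `dataset` and `solved_by_gpt35` keys) is an assumption made for illustration; only the proportions come from the paper:

```python
import random

random.seed(0)

# Keep-proportions per dataset, as stated above.
KEEP = {"alfworld": 1.0, "mbpp": 0.5, "humaneval": 0.5,
        "gsm8k": 0.2, "hotpotqa": 0.2, "theoremqa": 0.2,
        "mmlu": 0.1, "math": 0.05}

def subsample(instances):
    """instances: list of dicts with 'dataset' and 'solved_by_gpt35' keys (assumed schema)."""
    # 8 datasets x 2 (solved / unsolved by gpt-3.5-turbo-0613) = 16 strata.
    strata = {}
    for inst in instances:
        strata.setdefault((inst["dataset"], inst["solved_by_gpt35"]), []).append(inst)
    kept = []
    for (dataset, _), items in strata.items():
        n_keep = max(1, round(KEEP[dataset] * len(items)))
        kept.extend(random.sample(items, n_keep))
    return kept
```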
2309.10691#78 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | Table A.8: Summary of Tools by Task Type.

Task Type | Tool Signature
---|---
Decision-Making | [1] put(object: str, receptacle: str) -> str; [2] goto(receptacle: str) -> str; [3] take_from(object: str, receptacle: str) -> str; [4] open_receptacle(receptacle: str) -> str; [5] toggle(object_or_receptacle: str) -> str; [6] close_receptacle(receptacle: str) -> str; [7] clean(object: str, receptacle: str) -> str; [8] heat(object: str, receptacle: str) -> str; [9] cool(object: str, receptacle: str) -> str; [10] use(receptacle: str) -> str; [11] look() -> str
Reasoning | [1] wikipedia_search(query: str) -> str
Code Generation | No tool is provided

# E ISSUES # E.1 VICUNA-V1.5 ESCAPE UNDERSCORE ISSUE | 2309.10691#77 | 2309.10691#79 | 2309.10691 | [
"2308.12950"
] |
2309.10691#79 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | The following is a random trajectory (in-context example omitted) from Vicuna-13b-v1.5 model7. For some unknown reason, it tends to escape any underscore (â \ â ) that appears in the code, causing it to fail some task instances. Quantitatively, we calculate the percentage of turns that contain an escaped underscore over different LLMs in Tab. A.9, and find that this is a particular issue with Vicuna and SIFT version of Lemur-v1. We checked 94, 145 instances of ShareGPT data8 and found that about 15% (14, 110) of the conversations have the pattern of backslash underscore (â \ â ). We believe these artifacts in the instruction tuning dataset could be the reason that causes Vicuna and Lemur-v1 to generate code with these backslash underscore patterns (â \ â ). === user === You are a helpful assistant assigned with the task of problem-solving. To â achieve this, you will be using an interactive coding environment â equipped with a variety of tool functions to assist you throughout â the process. At each turn, you should first provide your step-by-step thinking for â solving the task. Your thought process should be enclosed using solving the task. Your thought process should be enclosed using "<thought>" tag, for example: <thought> I need to print "Hello World!" </thought>. = <â After that, you have two options: 1) Interact with a Python programming environment and receive the â corresponding output. Your code should be enclosed using â corresponding output. Your code should be enclosed using "<execute>" â tag, for example: <execute> print ("Hello World!") </execute>. 2) Directly provide a solution that adheres to the required format for â the given task. Your solution should be enclosed using "<solution>" the given task. Your solution should be enclosed using "<solution>" tag, for example: The answer is <solution> A </solution>. â You have 5 chances to interact with the environment or propose a â solution. You can only propose a solution 2 times. solution. You can only propose a solution 2 times. --- 7https://huggingface.co/lmsys/vicuna-13b-v1.5 8https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_ unfiltered | 2309.10691#78 | 2309.10691#80 | 2309.10691 | [
"2308.12950"
] |
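The ~15% figure above comes from scanning ShareGPT conversations for the backslash-underscore pattern. A minimal sketch of that kind of scan; the field names reflect an assumed ShareGPT-style layout, not a verified schema:

```python
# Count conversations containing the "\_" artifact (escaped underscore).
def count_escaped_underscores(conversations):
    """conversations: iterable of dicts with a 'conversations' list of {'value': str} turns
    (assumed ShareGPT-style schema)."""
    hits = 0
    for conv in conversations:
        text = " ".join(turn.get("value", "") for turn in conv.get("conversations", []))
        if "\\_" in text:  # literal backslash followed by underscore
            hits += 1
    return hits

# On the full dump of 94,145 conversations, the paper reports about 14,110 hits (~15%).
```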
2309.10691#80 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 19 Preprint. Table A.9: Percentage of task instances that assistant outputs escaped underscore (â \ â ) under the setting of k = 5 without language feedback, which causes Syntax Error during task-solving. This seems to be a particular issue with Vicuna LLMs. Percentages greater than 2% are bolded. Percentage of assistant interactions that contains escaped underscore (%) Evaluated LLM Size Type Reasoning Decision-Making Code Open-source LLM 0.00 Base 0.00 SIFT 7B 0.00 0.00 0.00 0.00 CodeLLaMA 13B Base SIFT 0.00 0.00 0.00 0.00 0.00 0.00 34B Base SIFT 0.00 0.00 0.00 0.00 0.00 0.00 7B Base RLHF 0.00 0.00 0.00 0.00 0.00 0.00 LLaMA-2 13B Base RLHF 0.00 0.00 0.00 0.00 0.00 0.00 70B Base 0.00 0.00 0.00 Lemur-v1 70B Base SIFT 0.00 2.22 0.00 0.00 0.00 0.00 Vicuna-v1.5 7B SIFT 13B SIFT 30.38 38.61 0.75 86.57 11.03 90.44 Task: Write a function to check if a nested list is a subset of another nested o list. assert check_subset_list([{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, <= 14],{[[12, 18, 23, 25, 45], [7, ll, 19, 24, 28], [l, 5, 8, 18, 15, os 16]])==False === assistant === Assistant: <thought> To check if a nested list is a subset of another nested list, I â should iterate through the elements of the first list and check if â | 2309.10691#79 | 2309.10691#81 | 2309.10691 | [
"2308.12950"
] |
2309.10691#81 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | they are present in the second list. If all elements are present, â then the first list is a subset of the second list. </thought> <execute> def check\_subset\_list(listl, list2): for element in listl: if element not in list2: return False return True check\_subset\_list ( 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [12, 18, 23, 25, 45], 7, 11, 19, 24, 28], res [ [ [ | 2309.10691#80 | 2309.10691#82 | 2309.10691 | [
"2308.12950"
] |
2309.10691#82 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 20 # Preprint. [1, 5, 8, 18, 15, 16]] ) assert res == False, "Expected False but got {}".format(res) </execute> # === user === Observation: Cell In[1], line 1 # def check\_subset\_list(list1, list2): # Ë SyntaxError: unexpected character after line continuation character You have 4 steps left and 2 chances to propose solution left. Expert feedback: This is BAD. | 2309.10691#81 | 2309.10691#83 | 2309.10691 | [
"2308.12950"
] |
2309.10691#83 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | You have a syntax error in your function definition. The ~ backslashes are not needed in the function name. Also, your logic ~ backslashes are not needed in the function name. Also, your logic is «= incorrect. You are checking if the elements of the first list are in â the second list, but the second list is a list of lists. You need to «= check if the elements of the first list are in any of the sublists of <= the second list. === assistant === Assistant: <thought> I apologize for the syntax error. I should remove the o backslashes from the function name and check if the elements of the o first list are in any of the sublists of the second list. </thought> <execute> def check\_subset\_list(listl, list2): for sublist in list2: for element in listl: if element not in sublist: return False return True check\_subset\_list ( 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [12, 18, 23, 25, 45], 7, 11, 19, 24, 28], 1, 5, 8, 18, 15, 16]] res = [ [ [ [ ) assert res == False, "Expected False but got {}".format (res </execute> === user Observation: Cell In[2], line 1 def check\_subset\_list(list1l, list2): | 2309.10691#82 | 2309.10691#84 | 2309.10691 | [
"2308.12950"
] |
2309.10691#84 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | SyntaxError: unexpected character after line continuation character You have 3 steps left and 2 chances to propose solution left. Expert feedback: This is BAD. You have repeated the same syntax error in your function o definition. The backslashes are not needed in the function name. = Also, your logic is still incorrect. You are returning False as soon â as you find an element from listl that is not in a sublist of list2. o This would mean that all elements of listl need to be in all sublists o of list2 for the function to return True. However, you should be o checking if each element of listl is in any sublist of list2. === assistant === Assistant: <thought> I apologize for the repeated syntax error. I should remove the <= backslashes from the function name and check if each element of the o first list is in any sublist of the second list. </thought> # o first <execute> | 2309.10691#83 | 2309.10691#85 | 2309.10691 | [
"2308.12950"
] |
2309.10691#85 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 21 # Preprint. def check\_subset\_list(list1l, list2): for element in listl: if element not in any(sublist for sublist in list2): return False return True res = check\_subset\_list ( {[l, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [{12, 18, 23, 25, 45], [7, 11, 19, 24, 28], {1, 5, 8, 18, 15, 16]] ) assert res == False, "Expected False but got {}".format (res </execute> === user === Observation: Cell In[3], line 1 def check\_subset\_list(list1l, list2): | 2309.10691#84 | 2309.10691#86 | 2309.10691 | [
"2308.12950"
] |
2309.10691#86 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | SyntaxError: unexpected character after line continuation character You have 2 steps left and 2 chances to propose solution left. Expert feedback: This is BAD. You have repeated the same syntax error in your function definition. The backslashes are not needed in the function name. Also, your logic is still incorrect. You are using the 'any' function incorrectly. The 'any' function returns True if at least one element of an iterable is true. You should be using it to check if the element is in any of the sublists. You need to iterate over each sublist and check if the element is in the sublist. If the element is not in any of the sublists, then you should return False. If you have checked all elements and have not returned False, then you should return True. | 2309.10691#85 | 2309.10691#87 | 2309.10691 | [
"2308.12950"
] |
2309.10691#87 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | Clilerrrses not in any of the sublists, then you should return False. If you have checked all elements and have not returned False, then you should return True. == assistant === Assistant: <thought> I apologize for the repeated syntax error. I should remove the «o backslashes from the function name and use the 'any' function â correctly to check if each element of the first list is in any â sublist of the second list. </thought> <execute> def check\_subset\_list(listl, list2): for element in listl: if not any(element in sublist for sublist in list2): return False return True res = check\_subset\_list ( {[l, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [{12, 18, 23, 25, 45], [7, 11, 19, 24, 28], {1, 5, 8, 18, 15, 16]] ) assert res == False, "Expected False but got {}".format (res </execute> === user Observation: Cell In[4], line 1 def check\_subset\_list(list1l, list2): | 2309.10691#86 | 2309.10691#88 | 2309.10691 | [
"2308.12950"
] |
2309.10691#88 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | SyntaxError: unexpected character after line continuation character You have 1 steps left and 2 chances to propose solution left. You should take the last step to propose a solution. Expert feedback: 22 # Preprint. This is BAD. You have repeated the same syntax error in your function definition. The backslashes are not needed in the function name. 6 definition. The backslashes are not needed in the function name. Also, your logic is still incorrect. You are using the 'any' function incorrectly. The 'any' function returns True if at least one element of an iterable is true. You should be using it to check if the element is in any of the sublists. You need to iterate over each sublist and check if the element is in the sublist. If the element is not in any of the sublists, then you should return False. If you have checked all elements and have not returned False, then you should return True. | 2309.10691#87 | 2309.10691#89 | 2309.10691 | [
"2308.12950"
] |
2309.10691#89 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | ~ # o ~~ â # o â â # o === assistant === Assistant: <solution> The correct solution is: ~**python def check_subset_list(listl, list2): for element in listl: if not any(element in sublist for sublist in list2): return False return True Explanation: x We iterate over each element in â listl*. x For each element, we check if it is in any of the sublists of â list2~ â using the â any()~ function. x If the element is not in any of the sublists, we return â Falseâ . x If we have checked all elements in â list1~ and have not returned o â Falseâ , we return â Trueâ . This solution correctly checks if each element of â list1~ is in any o sublist of â list2>. E.2 CO D ELLAMA-IN S T R U C T PRODUCES [PYTHON] TAG The following is a random trajectory on coding tasks (in-context example omitted) from CodeLLaMA-34b-Instruct model9. Despite being explicitly instructed to output code within pairs of <execute> or <solution> tag through instruction and in-context example, CodeLLaMA-34b-Instruct always wraps their output solution within a pair of [PYTHON] tag as shown in Tab. A.10, despite not explicitly instructed so. In Tab. A.10, we find that [PYTHON] tag is always associated with code generation problems but not other types of tasks. The appearance of [PYTHON] is most frequent in the largest version (34B), but not appearing at all in the smallest versions (7B). We suspect the association of [PYTHON] tag and code generation task is due to artifacts intro- duced during the instruction-tuning phase of CodeLLaMA-Instruct model. Quoting Rozi`ere et al. (2023), â Code Llama - Instruct variants are further fine-tuned on a mix of proprietary instruction data for improved safety and helpfulness, and a new machine-generated self-instruct dataset created by prompting Llama 2 for coding problems and Code Llama to generate associ- ated unit tests and solutions.â Furthermore, Rozi`ere et al. (2023) mentioned that in the self-instruct process, they â | 2309.10691#88 | 2309.10691#90 | 2309.10691 | [
"2308.12950"
] |
2309.10691#90 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | prompt the model to generate the code between [PYTHON] and [/PYTHON] tags to make it easy to parse the result.â We suspect CodeLLaMA-Instruct was explicitly fine- tuned on [PYTHON] tags during instruction-tuning, causing it to ignore instructions about other tags (e.g.,<execute>) on coding problems. Furthermore, we hypothesize that the larger the LLM, the more it will be able to memorize data points seen during its instruction-tuning phase, causing it to produce [PYTHON] more often as observed in Tab. A.10. 9https://huggingface.co/CodeLLaMA/CodeLLaMA-34b-Instruct-hf 23 Preprint. Table A.10: Percentage of task instances that assistant outputs [PYTHON] tag at least once, un- der the setting of k = 5 without language feedback. This seems to be a particular issue with CodeLLaMA LLMs on code-related tasks, especially the model with a larger scale. | 2309.10691#89 | 2309.10691#91 | 2309.10691 | [
"2308.12950"
] |
2309.10691#91 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | Percentage of assistant interactions that contains [PYTHON] tag (%) Evaluated LLM Size Type Reasoning Decision-Making Code Open-source LLM 0.00 0.00 Base SIFT 7B 0.00 0.00 0.00 0.00 CodeLLaMA 13B Base SIFT 0.00 0.00 0.00 0.00 0.00 2.21 34B Base SIFT 0.00 0.00 0.00 0.00 7B Base RLHF 0.00 0.00 0.00 0.00 0.00 0.00 LLaMA-2 13B Base RLHF 0.00 0.00 0.00 0.00 0.00 0.00 70B Base 0.00 0.00 0.00 Lemur-v1 70B Base SIFT 0.00 0.00 0.00 0.00 0.00 0.00 Vicuna-v1.5 7B SIFT 13B SIFT 0.00 0.00 0.00 0.00 0.00 0.00 chat-bison-001 claude-2 claude-instant-1 gpt-3.5-turbo-0613 gpt-3.5-turbo-16k-0613 gpt-4-0613 Closed-source LLM 0.00 - 0.00 - 0.00 - 0.00 - 0.00 - 0.00 - - - - - - - 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 == user === You are a helpful assistant assigned with the task of problem-solving. | 2309.10691#90 | 2309.10691#92 | 2309.10691 | [
"2308.12950"
] |
2309.10691#92 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To â achieve this, you will be using an interactive coding environment â equipped with a variety of tool functions to assist you throughout â the process. â At each turn, you should first provide your step-by-step thinking for â solving the task. Your thought process should be enclosed using = "â <thought>" tag, for example: <thought> I need to print "Hello <â World!" </thought>. After that, you have two options: 1) Interact with a Python programming environment and receive the â corresponding output. Your code should be enclosed using â corresponding output. Your code should be enclosed using "<execute>" â tag, for example: <execute> print ("Hello World!") </execute>. 2) Directly provide a solution that adheres to the required format for â the given task. Your solution should be enclosed using "<solution>" the given task. Your solution should be enclosed using "<solution>" tag, for example: The answer is <solution> A </solution>. â You have 5 chances to interact with the environment or propose a â solution. You can only propose a solution 2 times. --- Task: 24 # Preprint. | 2309.10691#91 | 2309.10691#93 | 2309.10691 | [
"2308.12950"
] |
2309.10691#93 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | Complete the following code: def is_nested(string): tid Create a function that takes a string as input which contains only â square brackets. The function should return True if and only if there is a valid â subsequence of brackets where at least one bracket in the subsequence is nested. is_nested('[[]]') -> True is_nested("(]]]]]IJJ(((((]') -> False is_nested('[][]') -> False is_nested('[]') -> False is_nested('[[][]]') -> True is_nested('[[]][[') -> True vie === assistant === Assistant: [PYTHON] def is_nested(string): stack = [] for char in string: if char == '[': stack.append (char) elif char == 'J': if len(stack) == 0: return False stack.pop () return len(stack) == 0 [/PYTHON] === user === Observation: I don't understand your input. If you want to execute code, please use <execute> YOUR_CODE_HERE a </execute>. If you want to give me an answer, please use <solution> <â YOUR_SOLUTION_HERE </solution>. For example: The answer to the question is <solution> 42 </solution>. You have 4 steps left and 2 chances to propose solution left. Expert feedback: The assistant's approach to solving the problem is correct, but the o assistant didn't follow the correct format for code execution. The â â # o assistant didn't follow the correct format for code execution. The assistant should use the <execute> tag to execute the code. Also, the assistant's solution doesn't seem to check for nested brackets, it only checks for balanced brackets. The assistant needs to modify the solution to check for nested brackets. <= === assistant === Assistant: [PYTHON] def is_nested(string): stack = [] for char in string: if char == '[': stack.append(char) elif char == ']': if len(stack) == 0: return False 25 | 2309.10691#92 | 2309.10691#94 | 2309.10691 | [
"2308.12950"
] |
2309.10691#94 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | # Preprint. stack.pop () return len(stack) = [/PYTHON] = 0 You have 4 steps left and 2 chances to propose solution left. === user === Observation: I don't understand your input. If you want to execute code, please use <execute> YOUR_CODE_HERE a </execute>. If you want to give me an answer, please use <solution> <â YOUR_SOLUTION_HERE </solution>. For example: The answer to the question is <solution> 42 </solution>. You have 3 steps left and 2 chances to propose solution left. Expert feedback: The assistant's approach to solving the problem is correct, but the assistant didn't follow the correct format for code execution. The assistant should use the <execute> tag to execute the code. Also, the assistant's solution doesn't seem to check for nested brackets, it only checks for balanced brackets. The assistant needs to modify the solution to check for nested brackets. o o o o o assistant === Assistant: I apologize for the confusion. I will make sure to follow the correct o format for code execution and check for nested brackets in the 3 solution. Here is the modified solution: <execute> def is_nested(string): stack = [] for char in string: if char == '[': stack.append (char) elif char == ']': if len(stack) == 0: return False stack.pop () return len(stack) = </execute> = 0 You have 3 steps left and 2 chances to propose solution left. F PROMPTS AND EXAMPLES We carefully designed prompts and in-context examples for each task type to elicit interaction be- havior from LLMs. In this section, we include tool descriptions, error messages shown to LLM for formatting issues, example task trajectories, prompt templates, and in-context examples. F.1 TOOL DESCRIPTIONS FOR EACH TASK TYPE For Code Generation No additional tool is provided for code generation apart from the Python interpreter (§2.1). # For Reasoning | 2309.10691#93 | 2309.10691#95 | 2309.10691 | [
"2308.12950"
] |
2309.10691#95 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 26 # Preprint. Tool function available (already imported in <execute> environment): [1] wikipedia_search(query: str) -> str The Wikipedia Search tool provides access to a vast collection of = articles covering a wide range of topics. Can query specific keywords or topics to retrieve accurate and â comprehensive information. articles covering a wide range of topics. # comprehensive information. # For Decision-Making (ALFWorld) Tool function available (already imported in <execute> environment): [1] put(object: str, receptacle: str) -> str Put an object in/on a receptacle. For example: put("mug 1", "desk 2") [2] goto(receptacle: str) -> str Go to a location of the receptacle. For example: goto("drawer 1") [3] take_from(object: str, receptacle: str) -> str Take an object from a receptacle. For example: take_from("mug 1", "shelf 2") [4] open_receptacle(receptacle: str) -> str Open a receptacle. For example: open_receptacle("fridge 1") [5] toggle(object_or_receptacle: str) -> str Toggle an object or receptacle. For example: toggle("light 2") [6] close_receptacle(receptacle: str) -> str Close a receptacle. For example: close_receptacle("microwave 1") [7] clean(object: str, receptacle: str) -> str Clean an object with a receptacle. For example: clean("cloth 1", "sinkbasin 1") [8] heat(object: str, receptacle: str) -> str Heat an object with a receptacle. For example: heat("egg 1", "microwave 1") [9] cool(object: str, receptacle: str) -> str Cool an object with a receptacle. For example: cool("bottle 1", "fridge 1") [10] use(receptacle: str) -> str Use a receptacle. For example: use("lamp 1") [11] look() -> str Look around. It will return what you see in the room. For example: look() | 2309.10691#94 | 2309.10691#96 | 2309.10691 | [
"2308.12950"
] |
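For local testing of agent trajectories, the decision-making tool interface above can be stubbed out. The class below is our own minimal stand-in (the real MINT environment wraps ALFWorld); it only mirrors a few of the documented signatures, with the rest omitted for brevity:

```python
class FakeALFWorldTools:
    """Minimal stand-in that echoes actions instead of running ALFWorld."""

    def put(self, obj: str, receptacle: str) -> str:
        return f"You put the {obj} in/on the {receptacle}."

    def goto(self, receptacle: str) -> str:
        return f"You arrive at the {receptacle}."

    def take_from(self, obj: str, receptacle: str) -> str:
        return f"You pick up the {obj} from the {receptacle}."

    def look(self) -> str:
        return "You look around the room."

tools = FakeALFWorldTools()
print(tools.goto("drawer 1"))
```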
2309.10691#96 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | F.2 FORMATTING ERROR MESSAGE The following error message is provided to the LLM Agent when we fail to detect the instructed format (e.g., fail to find <execute> and/or <solution> tag). I don't understand your input. If you want to execute code, please use <execute> YOUR_CODE_HERE a </execute>. 27 Preprint. If you want to give me an answer, please use <solution> <â YOUR_SOLUTION_HERE </solution>. For example: The answer to the question is <solution> 42 </solution>. F.3 EXAMPLE TASK TRAJECTORY The following is an example trajectory on a reasoning task from MATH (Hendrycks et al., 2021), generated by gpt-3.5-turbo-0613 as an agent and gpt-4-0613 as a simulated human ad- visor to provide feedback. | 2309.10691#95 | 2309.10691#97 | 2309.10691 | [
"2308.12950"
] |
2309.10691#97 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | We release outputs from all LLMs we evaluated in our code repo. Task: Let $a,$ $b,$ and $c$ be the roots of $x^3 - 3x - 5 = 0.$ Find $abc.$ Assistant: <thought> To solve this problem, we can use Vieta's formulas. Vieta's formulas state that for a polynomial $ax^n + bx^{n-1} + cx^{n-2} + \dots + k = 0$ with roots $r_1, r_2, \dots, r_n$, the following relationships hold: $r_1 + r_2 + \dots + r_n = -\frac{b}{a}$, $r_1r_2 + r_1r_3 + \dots + r_{n-1}r_n = \frac{c}{a}$, $r_1r_2r_3 + r_1r_2r_4 + \dots + r_{n-2}r_{n-1}r_n = -\frac{d}{a}$, $\dots$, $r_1r_2 \dots r_n = (-1)^n \frac{k}{a}$. In this case, we have a cubic polynomial $x^3 - 3x - 5 = 0$, so we can use the formula $abc = -\frac{d}{a}$ to find the product of the roots. </thought> <execute>
a = 1
d = 5
abc = -d/a
abc
</execute> Observation: Out[1]: -5.0 You have 4 steps left and 2 chances to propose solution left. Expert feedback: This is GOOD. | 2309.10691#96 | 2309.10691#98 | 2309.10691 | [
"2308.12950"
] |
2309.10691#98 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | You correctly applied Vieta's formulas to find the product o of the roots of the given cubic equation. Now you can submit the 3 solution. Assistant: The answer is <solution> -5.0 </solution>. Observation: Your answer is wrong. You have 3 steps left and 1 chances to propose solution left. Expert feedback: Expert feedback: The assistant made a mistake in applying Vieta's formulas. The formula ~ for the product of the roots of a cubic equation is $abc = (-1)â | 2309.10691#97 | 2309.10691#99 | 2309.10691 | [
"2308.12950"
] |
2309.10691#99 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | n for the product of the roots of a cubic equation is $abc = (-1)Ë n rac{k}{a}$, not $abc = - rac{d}{a}$. In this case, $n=3$, $k=-5$, and $a=1$, so the correct formula to use is $abc = (-1)Ë 3 rac{-5}{1}$. Please correct this and try again. # o â # o Assistant: <thought> I apologize for the mistake. Let's correct it. </thought> <execute> a = 1 k = -5 abc = (-1)**3 * k/a | 2309.10691#98 | 2309.10691#100 | 2309.10691 | [
"2308.12950"
] |
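The corrected value in this trajectory (abc = 5) can be double-checked numerically; this verification is ours, not part of the benchmark:

```python
import numpy as np

# Roots of x^3 - 3x - 5 = 0 (coefficients for x^3, x^2, x, constant).
roots = np.roots([1, 0, -3, -5])
print(np.prod(roots).real)  # ≈ 5.0, matching abc = (-1)^3 * (-5)/1 from Vieta's formulas
```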
2309.10691#100 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 28 # Preprint. abc </execute> Observation: Out[2]: 5.0 You have 2 steps left and 1 chances to propose solution left. Expert feedback: This is GOOD. You have correctly applied Vieta's formulas this time and «= found the correct product of the roots of the given cubic equation. â Now you can submit the solution. Assistant: Assistant: The answer is <solution> 5.0 </solution>. F.4 PROMPT TEMPLATES To instantiate the prompt for each task type, {in context example} will be substituted with examples in §F.5, and {tool desc} will be substituted with tool descriptions in §F.1. # F.4.1 EVALUATE LLM For each task, we replace {task prompt} with the corresponding task instruction. {max total steps} and {max propose steps} will be substitute with the setting for that experiments. In MINT, we default to 5 max total steps and 2 max propose steps as discussed in §2.1 and §3.1. You are a helpful assistant assigned with the task of problem-solving. To achieve this, you will be using an interactive coding environment equipped with a variety of tool functions to assist you throughout the process. lid At each turn, you should first provide your step-by-step thinking for o solving the task. Your thought process should be enclosed using solving the task. Your thought process should be enclosed using "<thought>" tag, for example: <thought> I need to print "Hello World!" </thought>. = <â After that, you have two options: 1) Interact with a Python programming environment and receive the â corresponding output. Your code should be enclosed using â corresponding output. Your code should be enclosed using "<execute>" â tag, for example: <execute> print ("Hello World!") </execute>. 2) Directly provide a solution that adheres to the required format for o the given task. Your solution should be enclosed using "<solution>" the given task. Your solution should be enclosed using "<solution>" tag, for example: The answer is <solution> A </solution>. You have {max_total_steps} chances to interact with the environment or â propose a solution. You can only propose a solution propose a solution. You can only propose a solution {max_propose_solution} times. â | 2309.10691#99 | 2309.10691#101 | 2309.10691 | [
"2308.12950"
] |
2309.10691#101 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | {tool_desc} --- {in_context_example} --- {task_prompt} 29 Preprint. F.4.2 SIMULATE LANGUAGE FEEDBACK To instantiate the template for feedback generation, we will replace {trajectory} with an LLM agentâ s trajectory (e.g., §F.3). When the ground-truth solution is not provided for feedback gen- eration, {gt solution} will be substituted with â NOT GIVENâ ; Otherwise, the ground-truth solution for that task will be provided. You are an expert tasked with evaluating and providing feedback on an â assistant's performance. Here is an example. Please follow the format as the following expert o acts. # {in_context_example} --- # {tool_desc} # {trajectory} Correct solution (please DO NOT disclose the correct solution to the â assistant): {gt_solution} Please provide concise and constructive feedback. Remember, your role is ~ similar to a teacher. Rather than giving away the solution or details â about the answer, guide the assistant toward understanding how to â arrive at the correct answer. Your feedback should focus on enhancing â the assistant's ability to think critically and respond accurately. ~ Now provide your feedback. Expert feedback: IN-CONTEXT EXAMPLE We perform 1-shot learning on all tasks in MINT. We have two types of in-context examples for each task: (1) without feedback, and (2) with feedback. We show the in-context examples with the provided feedback below. When no feedback is provided to the LLM (i.e., without feedback ...â in the following in-context examples), we will remove all occurrences of â Expert feedback: examples. We curate one in-context example for code generation and reasoning problems, and we follow Yao et al. (2022) to construct examples for each sub-type of decision-making (i.e., ALFWorld) tasks. For each task, we include one corresponding example for in-context learning. For simplicity, we only include a subset of prompts here. All prompts are released in our code repo. | 2309.10691#100 | 2309.10691#102 | 2309.10691 | [
"2308.12950"
] |
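Instantiating the evaluation prompt is plain placeholder substitution over the template above. A minimal sketch, assuming the placeholder names listed in §F.4.1 (the template text is abbreviated and the actual MINT code may differ):

```python
TEMPLATE = (
    "You are a helpful assistant assigned with the task of problem-solving. ...\n"
    "You have {max_total_steps} chances to interact with the environment or propose a solution. "
    "You can only propose a solution {max_propose_solution} times.\n"
    "{tool_desc}\n---\n{in_context_example}\n---\n{task_prompt}"
)

prompt = TEMPLATE.format(
    max_total_steps=5,            # MINT default
    max_propose_solution=2,       # MINT default
    tool_desc="[1] wikipedia_search(query: str) -> str",
    in_context_example="<one-shot example here>",
    task_prompt="Task: ...",
)
print(prompt)
```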
2309.10691#102 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | F.5.1 DECISION-MAKING We modify in-context prompts from Yao et al. (2022) to construct these in-context examples. This is an example of clean type of tasks in ALFWorld. Task: You are in the middle of a room. Looking quickly around you, you see a cabinet 13, a cabinet 12, a cabinet 11, a cabinet 10, a cabinet 9, f cabinet 13, a cabinet 12, a cabinet 11, a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 1, a diningtable 1, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a shelf 3, a shelf 2, a shelf 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1. # o # o # o # o # o a toaster 1. Your task is to: put a clean lettuce in diningtable. o Assistant: | 2309.10691#101 | 2309.10691#103 | 2309.10691 | [
"2308.12950"
] |
2309.10691#103 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 30 # Preprint. <thought> To solve the task, I need to find a lettuce, then clean it with o sinkbasin, then put it in diningtable. First I need to find a lettuce. A lettuce is more likely to appear in o fridge (1), diningtable (1), sinkbasin (1), stoveburner (1-3), â cabinet (1-13). I can check one by one. </thought> <execute> for loc_to_check in ["fridge 1", "diningtable 1", "sinkbasin 1", "stoveburner 1", "stoveburner 2", "stoveburner 3", x[f"cabinet {i}" for i in range(1, 14)]]: res = goto(loc_to_check) if "closed" in res: # The receptacle is closed. I need to open it to see what's os inside. res = open_receptacle (loc_to_check) if "lettuce" in res: print (res) print (f"Found a lettuce in {loc_to_check}.") break </execute> Observation: On the diningtable 1, you see a apple 1, a bread 1, a butterknife 2, a â cup 2, a fork 2, a knife 2, a knife 1, a ladle 1, a lettuce 1, a mug â 2, a mug 1, a pan 2, a peppershaker 1, a spatula 3, a tomato 2, anda â tomato 1. Found a lettuce in diningtable 1. Assistant: <thought> Now I find a lettuce 1. Next, I need to take it and go to sinkbasin 1 and â clean the lettuce 1. Once it is cleaned, I need to put it in diningtable 1. </thought> <execute> print (take_from("lettuce 1", "diningtable 1")) print (goto("sinkbasin 1")) print (clean("lettuce 1", "sinkbasin 1")) print (goto("diningtable 1")) print (put ("lettuce 1", "diningtable 1")) </execute> Observation: You pick up the lettuce 1 from the diningtable 1. | 2309.10691#102 | 2309.10691#104 | 2309.10691 | [
"2308.12950"
] |
2309.10691#104 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | On the sinkbasin 1, you see a apple 2, a ladle 2, a spoon 1, and a tomato o 3. You clean the lettuce 1 the sinkbasin 1. Observation: You pick up the lettuce 1 from the diningtable 1. On the sinkbasin 1, you see a apple 2, a ladle 2, a spoon 1, and a tomato o 3. You clean the lettuce 1 using the sinkbasin 1. On the diningtable 1, you see a apple 1, a bread 1, a butterknife 2, a â cup 2, a fork 2, a knife 2, a knife 1, a ladle 1, a mug 2, a mug 1, a cup 2, a fork 2, a knife 2, a knife 1, a ladle 1, a mug 2, a mug 1, a pan 2, a peppershaker 1, a spatula 3, a tomato 2, and a tomato 1. â pan 2, a peppershaker 1, a spatula 3, a You put the lettuce 1 in/on the diningtable 1. Assistant: <thought> I have completed the task. </thought> <solution> done </solution> | 2309.10691#103 | 2309.10691#105 | 2309.10691 | [
"2308.12950"
] |
2309.10691#105 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | F.5.2 CODE GENERATION We select one example from MBPPâ s in-context example subset (Austin et al., 2021) and curate it to fit MINT framework (i.e., adding observations and feedback). We use the same in-context example 31 # Preprint. after a slight format adjustment for HumanEval (Chen et al., 2021) since there were no dedicated in-context examples provided for HumanEval. Task: Write a function to find the similar elements from the given two tuple o lists. assert similar_elements((3, 4, 5, 6),(5, 7, 4, 10)) == (4, 5) Assistant: <thought> The similar elements here are just elements that are in both o lists. I should run a solution on the test case before proposing a o solution. </thought> <execute> def similar_elements(test_tupl, test_tup2): res = tuple(set(test_tupl) | set (test_tup2) ) return res res = similar_elements((3, 4, 5, 6), (5, 7, 4, 10)) assert res == (4, 5), "Expected (4, 5) but got {}".format (res </execute> Observation: AssertionError Traceback (most recent call o last) Cell In[4], line 5 3 return res 4 res = similar_elements((3, 4, 5, 6), (5, 7, 4, 10)) ----> 5 assert res == (4, 5), "Expected (4, 5) but got {}".format (res) AssertionError: Expected (4, 5) but got (3, 4, 5, 6, 7, 10) Expert feedback: | 2309.10691#104 | 2309.10691#106 | 2309.10691 | [
"2308.12950"
] |
2309.10691#106 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | This is BAD. You should not use the | operator which is the union â operator. Assistant: <thought> To find the similar elements, I should find the intersection of â the two sets. I should use the & operator instead of the | operator. oa </thought> <execute> def similar_elements(test_tupl, test_tup2): res = tuple(set(test_tupl) & set (test_tup2) ) return res res = similar_elements((3, 4, 5, 6), (5, 7, 4, 10)) assert res == (4, 5), "Expected (4, 5) but got {}".format (res </execute> Observation: [Executed Successfully with No Output] Expert feedback: Expert feedback: This is GOOD. You are doing great! You can try to submit the solution now. # now. Assistant: <thought> There is no more AssertionError. I can now submit the solution. oa </thought> <solution> def similar_elements(test_tupl, test_tup2): # </thought> res = tuple(set(test_tup1) & set(test_tup2)) return res # </solution> | 2309.10691#105 | 2309.10691#107 | 2309.10691 | [
"2308.12950"
] |
2309.10691#107 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 32 Preprint. # F.5.3 REASONING TASK We manually curate the in-context example for the reasoning subset to include the characteristic for both multi-hop QA-style reasoning (e.g., HotpotQA) and mathematical reasoning (e.g., MATH, TheoremQA). Task: Evaluate $\int\int_C (x+y)*2 dx,dy$, where C is the circle defined by ao $x°2 + y°2 = MS, and M is the birth year of Li Bai. Output format requirement: Round to three decimal places, using half up = rounding rule. | 2309.10691#106 | 2309.10691#108 | 2309.10691 | [
"2308.12950"
] |
2309.10691#108 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | F.5.3 REASONING TASK We manually curate the in-context example for the reasoning subset to include the characteristics of both multi-hop QA-style reasoning (e.g., HotpotQA) and mathematical reasoning (e.g., MATH, TheoremQA). Task: Evaluate $\int\int_C (x+y)^2 \,dx\,dy$, where C is the circle defined by $x^2 + y^2 = M$, and M is the birth year of Li Bai. Output format requirement: Round to three decimal places, using the half-up rounding rule. | 2309.10691#107 | 2309.10691#109 | 2309.10691 | [
"2308.12950"
] |
2309.10691#109 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | Assistant: <thought> To solve this problem, let's start by determining Li Bai's birth year. </thought> <execute> print(wikipedia_search('Li Bai')) </execute> Observation: Page: Li Bai Summary: Li Bai (pinyin: Li Bai, 701–762), also pronounced as Li Bo, courtesy name Taibai, was a Chinese poet, acclaimed from his own time to the present as a brilliant and romantic figure who took traditional poetic forms to new heights. Expert feedback: This is GOOD. You have successfully retrieved the birth year of Li Bai from the wiki search result. Next you should try to solve the calculus problem with the information you obtained. Assistant: <thought> From the wiki_search result, we can see that Li Bai was born in 701, thus M = 701. To evaluate the double integral of $(x+y)^2$ over the region $C$, where $C$ is the circle defined by $x^2 + y^2 = 701$, we should use polar coordinates. In polar coordinates, $x = r\cos(\theta)$ and $y = r\sin(\theta)$, and the differential area element $dx\,dy$ becomes $r\,dr\,d\theta$. $\int\int_C (x+y)^2 \,dx\,dy = \int_0^{2\pi}\int_0^{\sqrt{701}} (r\cos(\theta) + r\sin(\theta))^2 r\,dr\,d\theta$ Let's use the SymPy package to calculate this. </thought> <execute>
from sympy import symbols, integrate, cos, sin, pi
r, theta = symbols('r theta')
# the integrand
integrand = (r*cos(theta) + r*sin(theta))**2 * r
# perform the double integration
double_integral = integrate(integrate(integrand, (r, 0, 701 ** 0.5)), (theta, 0, 2*pi))
print(double_integral)
</execute> Observation: Out[2]: 245700.5*pi Expert feedback: | 2309.10691#108 | 2309.10691#110 | 2309.10691 | [
"2308.12950"
] |
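The value 245700.5·π reported in this trajectory has a simple closed form: in polar coordinates the integral reduces to π·M²/2 with M = 701. A quick cross-check (ours, not part of the benchmark):

```python
import math

M = 701  # Li Bai's birth year, i.e. the squared radius of the circle
# ∫∫_C (x+y)^2 dx dy = ∫_0^{2π} ∫_0^{√M} r^3 (1 + 2 sinθ cosθ) dr dθ = π * M**2 / 2
closed_form = math.pi * M**2 / 2
print(closed_form / math.pi)   # 245700.5
print(round(closed_form, 3))   # 771890.886, matching the final answer in the trajectory
```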
2309.10691#110 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 33 # Preprint. This is GOOD. You successfully converted the double integral into a form o that can be calculated. Now you can convert the answer to meet the = output format requirement. Assistant: <thought> Let's round this result to three decimal places using half up <â rounding rule. </thought> <execute> import math print (round(245700.5*math.pi, 3)) </execute> Observation: 771890.886 Expert feedback: This is GOOD. You have got the solution! You can try to submit the = solution now. Assistant: The answer is <solution> 771890.886 </solution>. F.6 EXAMPLE GROUND-TRUTH FOR DECISION-MAKING TASK The following is an example ground-truth solution for a task from Alfworld Shridhar et al. (2020). Task: Interact with a household to solve a task. u are in the middle of a room. Looking quickly around you, you see a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 3, a countertop 2, a countertop 1, a drawer 3, a drawer 2, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a shelf 3, a shelf 2, a shelf 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1. countertop 1, a drawer 3, a drawer 2, a drawer 1, garbagecan 1, a microwave 1, a shelf 3, a shelf 2, sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 1, and a toaster l. Your task is to: put two peppershaker in drawer. Llielsg Assistant: <execute> goto("shelf 1") goto("shelf 2") goto("shelf 3") goto("drawer 1") </execute> Observation: | 2309.10691#109 | 2309.10691#111 | 2309.10691 | [
"2308.12950"
] |
2309.10691#111 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | On the shelf 1, you see nothing. On the shelf 2, you see a bowl 1, and a saltshaker 2. On the shelf 3, you see a vase 2. The drawer 1 is closed. Assistant: <execute> open ("drawer 1") close("drawer 1") goto("drawer 2") open ("drawer 2") close("drawer 2") goto("drawer 3") open ("drawer 3") close("drawer 3") goto("countertop 1") goto("countertop 2") take ("peppershaker 3", "countertop 2") | 2309.10691#110 | 2309.10691#112 | 2309.10691 | [
"2308.12950"
] |
2309.10691#112 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 34 # Preprint. goto("drawer 1") open("drawer 1") put("peppershaker 3", "drawer 1") close("drawer 1") goto("shelf 1") goto("shelf 2") goto("shelf 3") goto("drawer 2") open("drawer 2") close("drawer 2") goto("drawer 3") open("drawer 3") close("drawer 3") goto("countertop 1") goto("countertop 2") goto("countertop 3") goto("cabinet 1") take("peppershaker 2", "cabinet 1") goto("drawer 1") open("drawer 1") put("peppershaker 2", "drawer 1") </execute> got got got tak got ope put </e Obs You o You The You You The You You On 6 On 6 You The You o You You On On On The You You The You You On 6 On 6 On o("countertop 2") o("countertop 3") o("cabinet 1") e("peppershaker 2", "cabinet 1") o("drawer 1") n("drawer 1") ("peppershaker 2", "drawer 1") xecute> ervation: open the drawer 1. The drawer 1 is open. In it, you 1, and a fork 1. close the drawer 1. drawer 2 is closed. open the drawer 2. The drawer 2 is open. In it, you close the drawer 2. drawer 3 is closed. open the drawer 3. The drawer 3 is open. In it, you close the drawer 3. the countertop 1, you see a cup 1, a fork 2, a mug 3, a soapbottle 2, a spatula 1, and a spoon 1. the countertop 2, you see a butterknife 1, a knife 1, peppershaker 3, and a tomato 1. pick up the peppershaker 3 from the countertop 2. drawer 1 is closed. open the drawer 1. The drawer 1 is open. | 2309.10691#111 | 2309.10691#113 | 2309.10691 | [
"2308.12950"
] |
2309.10691#113 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | In it, you 1, and a fork 1. put the peppershaker 3 in/on the drawer 1. close the drawer 1. the shelf 1, you see nothing. the shelf 2, you see a bowl 1, and a saltshaker 2. the shelf 3, you see a vase 2. drawer 2 is closed. open the drawer 2. The drawer 2 is open. In it, you close the drawer 2. drawer 3 is closed. open the drawer 3. The drawer 3 is open. In it, you close the drawer 3. the countertop 1, you see a cup 1, a fork 2, a mug 3, a soapbottle 2, a spatula 1, and a spoon 1. the countertop 2, you see a butterknife 1, a knife 1, tomato 1. see a cellphone see nothing. see a spatula 2. a soapbottle 3, amug 1, a see a cellphone see nothing. see a spatula 2. a soapbottle 3, a mug 1, anda the countertop 3, you see a apple 2, a bread 1, a cellphone 3, a cellphone 2, a creditcard 1, a glassbottle 2, a houseplant 1, a plate 2, a pot 2, a spatula 3, a spoon 3, a spoon 2, and a statue l. the cabinet 1, you see a mug 2, and a peppershaker 2. pick up the peppershaker 2 from the cabinet 1. drawer 1 is closed. open the drawer 1. The drawer 1 is open. In it, you 1, a fork 1, and a peppershaker 3. put the peppershaker 2 in/on the drawer 1. see a cellphone | 2309.10691#112 | 2309.10691#114 | 2309.10691 | [
"2308.12950"
] |
2309.10691#114 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | 35 | 2309.10691#113 | 2309.10691 | [
"2308.12950"
] |
|
2309.10621#0 | Large language models can accurately predict searcher preferences | # Large language models can accurately predict searcher preferences PAUL THOMAS, Microsoft, Australia SETH SPIELMAN, Microsoft, USA NICK CRASWELL, Microsoft, USA BHASKAR MITRA, Microsoft Research, Canada Relevance labels, which indicate whether a search result is valuable to a searcher, are key to evaluating and optimising search systems. The best way to capture the true preferences of users is to ask them for their careful feedback on which results would be useful, but this approach does not scale to produce a large number of labels. Getting relevance labels at scale is usually done with third-party labellers, who judge on behalf of the user, but there is a risk of low-quality data if the labeller doesn't understand user needs. To improve quality, one standard approach is to study real users through interviews, user studies and direct feedback, find areas where labels are systematically disagreeing with users, then educate labellers about user needs through judging guidelines, training and monitoring. This paper introduces an alternate approach for improving label quality. It takes careful feedback from real users, which by definition is the highest-quality first-party gold data that can be derived, and develops a large language model prompt that agrees with that data. We present ideas and observations from deploying language models for large-scale relevance labelling at Bing, and illustrate with data from TREC. We have found large language models can be effective, with accuracy as good as human labellers and similar capability to pick the hardest queries, best runs, and best groups. Systematic changes to the prompts make a difference in accuracy, but so too do simple paraphrases. To measure agreement with real searchers needs high-quality ' | 2309.10621#1 | 2309.10621 | [
"2305.03495"
] |
|
2309.10621#1 | Large language models can accurately predict searcher preferences | goldâ labels, but with these we find that models produce better labels than third-party workers, for a fraction of the cost, and these labels let us train notably better rankers. CCS Concepts: â ¢ Information systems â Test collections; Relevance assessment; â ¢ Computing methodologies â Natural language generation. Additional Key Words and Phrases: large language models, offline evaluation, labelling # 1 LABELLING RELEVANCE Relevance labelsâ annotations that say whether a result is relevant to a searcherâ s needâ are essential for evaluating and improving information retrieval systems. Labels can come from (in decreasing order of both reliability and difficulty to obtain): (i) actual users, (ii) subject-matter experts, (iii) professional assessors (without subject-matter expertise), or (iv) crowd workers (without extensive training in the relevance assessment tasks). Label quality can be evaluated by comparing them to some gold standard labels [Saracevic 2008]. This paper defines gold standard labels as those from the query topic originator [Bailey et al. 2008]. The originator could be a relevance assessor who develops their own query topic, then labels the results. Even better, the originator could be a real user who did the query in situ, knows exactly what they were trying to find, and gives careful feedback on whatâ s relevant. If each search only has one originator, then their gold labels are the ones that all other labels should be evaluated against. Given a set of first-party labels, other parties (human or machine) can at best perfectly agree, but can never â outperformâ the given gold labels. Third-party assessors may disagree with gold because they misunderstand the userâ s preference. If workers are systematically misunderstanding user needsâ if the labels are biasedâ this cannot be fixed by getting more data. | 2309.10621#0 | 2309.10621#2 | 2309.10621 | [
"2305.03495"
] |
2309.10621#2 | Large language models can accurately predict searcher preferences | For Authorsâ addresses: Paul Thomas, Microsoft, Adelaide, Australia, [email protected]; Seth Spielman, Microsoft, Boulder, USA, sethspielman@ microsoft.com; Nick Craswell, Microsoft, Seattle, USA, [email protected]; Bhaskar Mitra, Microsoft Research, Montreal, Canada, bmitra@microsoft. com. Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra example, consider a pool of workers who do not understand which queries are navigational [Broder 2002]. When a first-party user wants to navigate to a site, the third-party labels do not reward retrieval of that site. The resulting labels do not help us build a search system that performs well on navigational queries, and this canâ t be solved by getting more labels from the biased worker pool. Working with human labellers, especially crowd workers, can also lead to other well-documented problems including mistakes, other biases, collusion, and adversarial or â spammyâ workers [Clough et al. 2013; Inel et al. 2023; Thomas et al. 2022]. The resulting labels can be low-quality, and using them for training or making decisions will develop a retrieval system that makes similar errors. The standard path to obtaining higher-quality labels involves multiple steps. The first is to learn about real users through interviews, user studies, direct feedback on their preferences and implicit feedback on their preferences such as clicks [Dumais et al. 2014]. Studying associated relevance labels, and looking for systematic mistakes, can indicate patterns where labellers are misunderstanding what users want. The final step is to educate labellers, by reference to guidelines or examples, to minimise future errors: for example, Google uses over 170 pages of guidelines to educate their search quality raters on what makes a good Google result [Google LLC 2022]. Asking labellers to follow guidelines should lead to improvements in their output, and that improvement can be measured against ground truth that either comes from real users (did labellers agree with real users?) or is based on our best understanding of user preferences (did labellers agree with examples carefully chosen by experts to agree with our best understanding of users?). This paper introduces a new way of reaching very high-quality labels, that match real user preferences, by leveraging large language models (LLMs). | 2309.10621#1 | 2309.10621#3 | 2309.10621 | [
"2305.03495"
] |
2309.10621#3 | Large language models can accurately predict searcher preferences | In practice, LLM performance on any task can vary depending on the wording of the prompt [Zhang et al. 2022; Zhou et al. 2022]. Our approach is to get a small sample of feedback that perfectly reflects real user preferences, because they come from real users who did a careful job of giving feedback. We then choose a prompt for the LLM that generates labels, such that the labels have the best match with first-party ground truth. Using machine learning for labelling raises the question of circularity, since labels are used for training and optimising retrieval systems, which may use machine learning. Machine-learned models have long been employed for relevance estimation. These predicted or automatic relevance models are often trained on human relevance labels, and have historically been inferior in quality to the labels they were trained on. Because they are cheap to run, the machine learned models are employed as rankers, estimating relevance at a scale that would be impractical to achieve with human assessors, and focusing on optimising the relative ordering of items, particularly in top ranks. With GPT-4 [OpenAI 2023] and similar large language models, we are now observing a new opportunityâ the ability to augment relevance estimators with assessment guidelines as part of the promptâ as well as a different kind of trade-off whereby LLM labels may match first-party gold labels more closely than some third-party human labels do. GPT-4 is still too inefficient to be deployed as a real-time ranking model serving web-scale query loads, where even a tenth of a second increase in query processing latency has been shown to negatively impact searchers [Brutlag 2009; Schurman and Brutlag 2009]. This creates a new opportunity to employ these automatic relevance assessments from GPT-4 for training and evaluating more efficient ranking models, which may be seen as a form of knowledge distillation [Hinton et al. 2015]. For other annotation tasks there is evidence that LLMs can be comparable to crowd workers, using standard metrics such as agreement or correlation [Alizadeh et al. 2023; Gilardi et al. 2023; Törnberg 2023]. However, we argue it is more interesting to compare labels to a relatively small set of first-party ground truth, from real searchers. We can then ask how well different labellers doâ human or LLMâ | 2309.10621#2 | 2309.10621#4 | 2309.10621 | [
"2305.03495"
] |
2309.10621#4 | Large language models can accurately predict searcher preferences | in generating labels that match real user preferences. Our study shows that LLM labellers can do better on this task than several populations of human labellers. The worst are the crowd labellers, who are least diligent and least knowledgeable about user preferences. Better are human raters who are more knowledgeable and diligent, as demonstrated by better agreement with first-party ground truth (gold). LLMs perform Large language models can accurately predict searcher preferences better on this metric than any population of human labellers that we study. Our results demonstrate the potential for LLMs as a tool for obtaining high-quality relevance labels that match what users think. # 2 EXPERIMENTS: TREC-ROBUST To illustrate these ideas, we have experimented with queries, documents, and labels from TREC-Robust 2004 [Voorhees 2004]. Our main question was whether LLMs could replicate the original TREC labels, assigned by expert human assessors. # 2.1 Machinery and data TREC-Robust includes 250 topics (each with one canonical query, so â queryâ and â topicâ are synonymous in what follows)1. We took queries from the TREC title field; description and narrative were also included in some prompts, as discussed below. Official labels were taken from the TREC-Robust qrel file. These labels were assigned by trained assessors, who had also provided the queries and topic descriptions, so although these are not â realâ in situ search scenarios with a real product, they fit our definition of gold Bailey et al. [2008]: the person who labelled each document is the single best judge of what the query and topic mean, and what sort of document was responsive. If and when a third-party labeller (human or LLM) deviates from gold, it is considered an error with respect the the first-party data. | 2309.10621#3 | 2309.10621#5 | 2309.10621 | [
"2305.03495"
] |
2309.10621#5 | Large language models can accurately predict searcher preferences | The original qrels files had 1031 â highly relevantâ labels, 16 381 â relevantâ , and 293 998 â not relevantâ . In the first experiments below we used a stratified random sample of 1000 qrels for each label, 3000 labelled topic : document pairs in total. In later experiments we used all documents returned in Robust 2004 runs at ranks 1â 100, where those documents were judged in TREC. The experiments here used an in-house version of GPT-4 [OpenAI 2023], running on the Azure service. Temperature was set at zero, so the model would select the single most likely output; other parameters were top ð = 1, frequency penalty 0.5, presence penalty 0, with no stopwords. # 2.2 Prompting Having carefully selected our gold data, we consider a number of prompt template variants (determining LLM inputs) which is generally a cheap and fast way to improve quality [Karpathy 2023]. Figure 1 gives an overall schema for the prompts. Italicised words are placeholders, which were filled differently for each topic and document, or otherwise varied to match the rest of the prompt. Shaded text is optional and was included in some prompt variants. The prompt has four parts. The first part gives task instructions. These are closely based on instructions given to TREC assessors with two changes: First, the TREC instructions included material on consistency in labels, which is not relevant to an LLM case so was dropped here. Second, the phrase â you are a search engine quality rater. . . â replaces some of the TREC text which discusses the assessorsâ past experience developing TREC tracks. The phrase â search engine quality raterâ is used by Google in its labelling efforts, and the phrase is widely used on the web, making it a useful shorthand. 1One query had no relevant documents. It is included in our analysis but will always score zero, on any metric, using the official labels. | 2309.10621#4 | 2309.10621#6 | 2309.10621 | [
"2305.03495"
] |
2309.10621#6 | Large language models can accurately predict searcher preferences | Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra The second part of the prompt gives the query/document pair to be labelled: we include the query that the â searcherâ issued; in some configurations we include a more detailed version of their intent from the TREC narrative field; and we give the text of the document itself. The third part of the prompt restates the task, including the instruction to â split this problem into stepsâ by explicitly considering the searcherâ s intent as well as the document. This follows observations by Wei et al. [2022] and Kojima et al. [2022] that â chain of thoughtâ or â step by stepâ | 2309.10621#5 | 2309.10621#7 | 2309.10621 | [
"2305.03495"
] |
2309.10621#7 | Large language models can accurately predict searcher preferences | prompts can produce more reliable results (something we have also observed, informally, in other work). In some variants, we expanded this to explicitly ask for scores for two aspectsâ topicality and trustâ as well as an overall score. In some variants, we also ask the model to simulate several human judges (here five) and give scores from each. The final part of the prompt specifies an output format and includes a snippet of JSON to encourage correct syntax. This is a â zero-shotâ prompt, in that it does not include any examples of the task. Liang et al. [2022] report remarkably mixed results across tasks and models, so it is certainly possible that we could improve with one or more examples; it is also possible we could see some regression. The length of TREC documents means it is hard to include even one entire example, let alone more, and we leave experimentatino with one- or few-shot prompts as future work. Note that we do not claim that this is the best prompt, or the best prompt format; indeed, in Section 4.4 we will see that even minor paraphrases can make a material difference. Our interest here is in the range of results we see with a reasonable prompt (as opposed to the minimal prompts of Faggioli et al. [2023] or Liang et al. [2022]), in the practical impact of disagreements, and in which features of a prompt seem to help or hinder LLM accuracy. # 2.3 Variations We varied the prompt in four ways: Describing the role The simplest version of our instructions asks for a score for a query and a web page. Web page quality is a complex notion, but search providers frequently publish hints of what they are looking for. In particular, Googleâ s labelling guidelines use the phrase â search quality raterâ [Google LLC 2022]. Some prompts therefore include the phrase â you are a search quality rater evaluating the relevance of web pagesâ , as a shorthand way to reference both the guidelines (which are generally useful) and surrounding discussion. Varying topical description Queries alone are an impoverished representation of an information need, but TREC topics have additional text describing what the query means (description) and which documents should be considered responsive (narrative). | 2309.10621#6 | 2309.10621#8 | 2309.10621 | [
"2305.03495"
] |
2309.10621#8 | Large language models can accurately predict searcher preferences | For example, for the query hubble telescope achievements, the description restates that the query is about achievements of the space telescope since its launch in 1991, and the narrative clarifies that this is about scientific achievement so results that only talk about shortcomings and repairs would not be considered relevant. In some prompts, we include this text as the â descriptionâ and â narrativeâ fields. Varying aspects A straightforward approach, following the TREC guidelines, would be to ask for an overall label for each query : document pair. In past work with human labelling, we have found it more useful to spell out several aspects, and ask for ratings against these, before asking for an overall label. These extra questions have been useful to help anchor judge assessments, without constraining the final label (i.e. the overall label need not be a simple average of the per-aspect labels). Similarly, with large language models there has been demonstrated success with splitting problems into steps with prompts such as â think step by stepâ | 2309.10621#7 | 2309.10621#9 | 2309.10621 | [
"2305.03495"
] |
2309.10621#9 | Large language models can accurately predict searcher preferences | [Kojima et al. 2022]. Large language models can accurately predict searcher preferences # role You are a search quality rater evaluating the relevance of web pages. Given a query and a web page, you must provide a score on an integer scale of 0 to 2 with the following meanings: 2 = highly relevant, very helpful for this query 1 = relevant, may be partly helpful but might contain other irrelevant content 0 = not relevant, should never be shown for this query Assume that you are writing a report on the subject of the topic. If you would use any of the information contained in the web page in such a report, mark it 1. If the web page is primarily about the topic, or contains vital information about the topic, mark it 2. Otherwise, mark it 0. | 2309.10621#8 | 2309.10621#10 | 2309.10621 | [
"2305.03495"
] |
2309.10621#10 | Large language models can accurately predict searcher preferences | description, narrative Query A person has typed [query] into a search engine. They were looking for: description narrative Result Consider the following web page. â BEGIN WEB PAGE CONTENTâ page text â END WEB PAGE CONTENTâ Instructions Split this problem into steps: Consider the underlying intent of the search. aspects Measure how well the content matches a likely intent of the query (M). aspects Measure how trustworthy the web page is (T). Consider the aspects above and the relative importance of each, and decide on a final score (O). We asked five search engine raters to evaluate the relevance of the web page for the query. Each rater used their own independent judgement. Produce a JSON array of scores without providing any reasoning. Example: [{"M": 2, "T": 1, "O": 1}, {"M": 1 . . . # Results [{ Fig. 1. General form of the prompts used in our TREC Robust experiments. Italicised words are placeholders, filled with appropriate values. Shaded text is optional, included in some prompt variants. | 2309.10621#9 | 2309.10621#11 | 2309.10621 | [
"2305.03495"
] |
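A minimal sketch of how a Figure 1-style prompt can be assembled from its optional parts (role R, description/narrative D/N, aspects A, multiple simulated judges M). The wording is abridged from Figure 1, and the function and argument names are illustrative assumptions, not the exact template or code used in the experiments.

```python
def build_prompt(query, page_text, description=None, narrative=None,
                 role=False, aspects=False, judges=1):
    """Assemble a Figure 1-style prompt; the optional parts mirror R/D/N/A/M."""
    parts = []
    if role:  # R: describe the role
        parts.append("You are a search quality rater evaluating the relevance of web pages.")
    parts.append("Given a query and a web page, you must provide a score on an integer "
                 "scale of 0 to 2 (2 = highly relevant, 1 = relevant, 0 = not relevant).")
    query_part = f"Query\nA person has typed [{query}] into a search engine."
    if description or narrative:  # D/N: add the topical description
        detail = " ".join(x for x in (description, narrative) if x)
        query_part += f"\nThey were looking for: {detail}"
    parts.append(query_part)
    parts.append("Result\nConsider the following web page.\n"
                 f"BEGIN WEB PAGE CONTENT\n{page_text}\nEND WEB PAGE CONTENT")
    steps = ["Consider the underlying intent of the search."]
    if aspects:  # A: score topicality (M) and trustworthiness (T) before the overall label
        steps.append("Measure how well the content matches a likely intent of the query (M).")
        steps.append("Measure how trustworthy the web page is (T).")
    steps.append("Consider the aspects above and the relative importance of each, "
                 "and decide on a final score (O).")
    parts.append("Instructions\nSplit this problem into steps:\n" + "\n".join(steps))
    if judges > 1:  # M: ask for several simulated raters in a single call
        parts.append(f"We asked {judges} search engine raters to evaluate the relevance of "
                     "the web page for the query. Each rater used their own independent judgement.")
    parts.append('Produce a JSON array of scores without providing any reasoning. '
                 'Example: [{"M": 2, "T": 1, "O": 1}, ...]')
    return "\n\n".join(parts)

# Example: a '-DNA-' style variant (description, narrative and aspects; no role, one judge).
print(build_prompt("hubble telescope achievements", "<page text>",
                   description="Achievements of the Hubble space telescope since 1991.",
                   narrative="Only scientific achievements are relevant.",
                   aspects=True))
```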
2309.10621#11 | Large language models can accurately predict searcher preferences | Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra Inspired by these ideas, in some variants we explicitly ask for labels over aspects of â relevanceâ as well as for an overall label. For TREC Robust, we ask for labels for topicality (â how well the content matches a likely intentâ â note that this captures likely intents that arenâ t captured elsewhere) and for trustworthiness (â how trustworthy the page isâ ). There are no further definitions of either aspect. Varying number of â judgesâ People naturally vary in their labels, and aggregating several labels for each result can reduce noise and increase sensitivity due to law of large numbers. In some prompts we ask the model to simulate several judges, generating the output of five simulated judges from one LLM call. Since the outputs are generated in sequence they are not really independent labellers, but we previously found it useful to generate and aggregate multiple labels in this way, so we include it as a prompt variant here. # 3 EVALUATING THE LABELS, EVALUATING THE LABELLERS How are we to choose between labels, or rather between labelling processes? The main criterion is validity, in particular that labels from any new source should agree with gold labels [Faggioli et al. 2023]. We can measure this in two ways: by looking at the labels themselves or by looking at preferences between documents. Additionally, labels are typically aggregated to derive query-level or system-level scores, and we may care whether machine labels would lead to similar conclusions at these aggregated levels. Further criteria include cost, in both dollars and time; throughput; and how easily we can measure new types of result, such as results in different languages or different media types. # 3.1 Document labels The simplest way to evaluate a machine labelling process is to ask: does it produce the same labels as would human labellers? Evidently, if the labels are the same for any document, then the machine process can be directly substituted without any quality concerns. We can summarise differences between the machine and human labels with a confusion matrix. The labels are on an ordinal scale (not an interval scale), but if we assign scores 0 and 1 to the two levels then we can further compute the mean difference between the human and machine labels. | 2309.10621#10 | 2309.10621#12 | 2309.10621 | [
"2305.03495"
] |
2309.10621#12 | Large language models can accurately predict searcher preferences | In what follows we report accuracy with the mean absolute error (MAE), where 0 means the two sources always agree on labels and 1 means they are maximally different. In an earlier study, Faggioli et al. [2023] report Cohenâ s ð between TREC assessors and GPT-3.5 and YouChat LLMs, and we report ð here as well. ð ranges from 1 (complete agreement) through 0 (agreement only by chance) to â 1 (complete disagreement). For direct comparison with Faggioli et al. we report ð over binarised labels, where partly- and highly-relevant are conflated. # 3.2 Document preference Minimising document-level MAE gives us scores which are calibrated across queries, and interpretable for debugging and development. Ranking, however, can use preferences between documents rather than calibrated scores; this is also sufficient for many learning-to-rank algorithms [Liu 2009]. On this view it is the relative ordering of any two documents that is important, and we can measure this with pairwise accuracy or AUC: the chance that, given any two documents with a human preference, the modelâ s preference is the same. A score of 1 means the modelâ s preferences are always the same as the humanâ s, a score of 0 means they always disagree, and a score of 0.5 is chance alone. Large language models can accurately predict searcher preferences (a) Preferences only within each topic (b) Preferences across topics | 2309.10621#11 | 2309.10621#13 | 2309.10621 | [
"2305.03495"
] |
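A small sketch of the document-level comparison described above, using the standard library only: labels are binarised (relevant and highly relevant conflated, one reading of the two-level description in the text), then compared with mean absolute error and Cohen's kappa. The toy labels at the end are made up for illustration.

```python
def binarise(label):
    # Conflate 'relevant' (1) and 'highly relevant' (2), as described in the text.
    return 1 if label >= 1 else 0

def mae(gold, pred):
    """Mean absolute error between binarised gold and predicted labels."""
    g, p = [binarise(x) for x in gold], [binarise(x) for x in pred]
    return sum(abs(a - b) for a, b in zip(g, p)) / len(g)

def cohen_kappa(gold, pred):
    """Cohen's kappa over binarised labels: observed vs. chance agreement."""
    g, p = [binarise(x) for x in gold], [binarise(x) for x in pred]
    n = len(g)
    observed = sum(a == b for a, b in zip(g, p)) / n
    expected = sum((g.count(c) / n) * (p.count(c) / n) for c in set(g) | set(p))
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Toy example: TREC assessor labels vs. model labels on the 0-2 scale.
trec = [0, 1, 2, 0, 1, 0, 2, 1]
model = [0, 1, 1, 0, 0, 0, 2, 2]
print(f"MAE={mae(trec, model):.2f}, kappa={cohen_kappa(trec, model):.2f}")
```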
2309.10621#13 | Large language models can accurately predict searcher preferences | Fig. 2. Options for document preference. If we form preferences only within each topic, there is no constraint on how, for example, â better 1aâ is scored relative to â worse 2aâ : labels can vary per topic. If we form preferences across topics, we add the constraint that â better 1aâ should score higher than â worse 2aâ , so labels are consistent. We also generate many more pairs. (Another consideration is that two scoring schemes may differ in scale and location: for example, one source may give scores 1â 5 while another gives 1â 10 or 0-99. | 2309.10621#12 | 2309.10621#14 | 2309.10621 | [
"2305.03495"
] |
2309.10621#14 | Large language models can accurately predict searcher preferences | MAE in this case is misleading, even if there is a completely consistent mapping from one source to another. Pairwise preferences are robust to this sort of difference.) There are two ways to form pairs of documents (Figure 2). If we choose pairs of documents only from the same topic, we can use a topic-dependent labelling scale: the worse document for one topic might still be better than the better document from another,for example if one topic is especially hard. The set of pairs will also be smaller. Choosing pairs of documents from all topics, that is, from all documents ever labelled, enforces a query-independent scale as the â | 2309.10621#13 | 2309.10621#15 | 2309.10621 | [
"2305.03495"
] |
2309.10621#15 | Large language models can accurately predict searcher preferences | better' document from one query should score higher than the 'worse' document from any other. The set of pairs formed this way will also be bigger. In our evaluation, we choose the second approach; in other circumstances, the flexibility of per-topic ordering might be preferable. # 3.3 Query ordering Our primary interest is in generating (and evaluating) labels for documents. However, past work has shown that errors in document labels can be washed out when labels are aggregated to query-level or system-level scores [Bailey et al. 2008]. It is certainly possible that differences in labels are not relevant to query- or system-level evaluations. In consideration of this we can also order result lists (SERPs) by some metric (e.g. RBP or MAP), according to the labels produced by humans and with regard to some fixed search engine; order the same result lists, on the same metric, according to the labels produced by a model; and ask how similar the two orderings are. With this query-level analysis we are likely to be looking for queries which do badly (i.e. where a system scores close to zero), so here we measure correlation with rank-biased overlap (RBO) [Webber et al. 2010] after sorting the queries from lowest to highest scores. This means that (dis)agreements about which queries score worst (which queries we want to investigate) count for more than (dis)agreements about those queries that score well. In our case, since the two rankings are permutations, there is a well-defined lower bound² for RBO, (1 − φ) Σ_{d=⌊N/2⌋+1}^{N} φ^(d−1) (2d − N)/d, with N queries and a discount parameter φ. | 2309.10621#14 | 2309.10621#16 | 2309.10621 | [
"2305.03495"
] |
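A sketch of the two ways of forming preference pairs (Figure 2) and of pairwise agreement over them. The data structure (labels keyed by topic and document) and the half-credit treatment of model ties are assumptions for illustration, not the authors' implementation.

```python
from itertools import combinations

def preference_pairs(labels, across_topics=True):
    """Yield (better, worse) document keys from a dict {(topic, doc): label}.

    With across_topics=False only documents from the same topic are paired
    (Figure 2a); with True any two labelled documents are paired (Figure 2b).
    """
    for (k1, l1), (k2, l2) in combinations(labels.items(), 2):
        if not across_topics and k1[0] != k2[0]:
            continue
        if l1 == l2:
            continue                      # no human preference, no pair
        yield (k1, k2) if l1 > l2 else (k2, k1)

def pairwise_agreement(gold, pred):
    """Share of gold preferences the model reproduces (ties count half)."""
    agree = total = 0.0
    for better, worse in preference_pairs(gold, across_topics=True):
        total += 1
        if pred[better] > pred[worse]:
            agree += 1
        elif pred[better] == pred[worse]:
            agree += 0.5                  # one reasonable convention for ties
    return agree / total if total else float("nan")

gold = {("q1", "d1"): 2, ("q1", "d2"): 0, ("q2", "d3"): 1, ("q2", "d4"): 0}
model = {("q1", "d1"): 1, ("q1", "d2"): 0, ("q2", "d3"): 0, ("q2", "d4"): 0}
print(pairwise_agreement(gold, model))    # 0.8 on this toy data
```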
2309.10621#16 | Large language models can accurately predict searcher preferences | For ease of interpretation we use this minimum to normalise RBO scores into the range 0 to 1, so 0 is an exactly opposite ranking and 1 is an identical ranking. We use set ð = 0.9, corresponding to an experimenter looking (on average) at the first ten queries. # 2Alistair Moffat, personal communication, July 2023. Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra # 3.4 System ordering The primary use of query:document scores is of course to score a whole system, first by accumulating document scores to query scores then by accumulating query scores to system scores. To see the effect of disagreements between our human and LLM judges, we report RBO over those systems which ran the same queries. Again, since there are a fixed set of systems, we can calculate the minimum RBO score and normalise. An experimenter might look seriously at the top three or four systems, so we set ð = 0.7. # 3.5 Ground-truth preferences between results An alternative view is that, since human-assigned labels may themselves be biased or noisy, labels should instead accurately predict real searcher preferences. Evaluating machine labels by their agreement with human labels is useful, because in many situations we can use a large corpus of existing labels. However, it does not speak to the validity of the labels: that is, whether the labels (or a metric derived from the labels) reflects some true searcher experience. If machine labels agree with human labels to (e.g.) 80%, then the 20% disagreement might be a fault with the machine, or poor-quality labels from the humans, or some combination. | 2309.10621#15 | 2309.10621#17 | 2309.10621 | [
"2305.03495"
] |
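A sketch of RBO for two permutations and of the normalisation described above. Rather than a closed-form lower bound, this version evaluates RBO against the exactly reversed ranking, which attains the permutation minimum at every depth; the finite-depth RBO form here is one common variant and may differ in detail from the one used in the paper.

```python
def rbo(ranking_a, ranking_b, phi=0.9):
    """Finite-depth rank-biased overlap for two rankings of the same items."""
    assert set(ranking_a) == set(ranking_b), "expects two permutations of the same items"
    n = len(ranking_a)
    seen_a, seen_b = set(), set()
    score = 0.0
    for d in range(1, n + 1):
        seen_a.add(ranking_a[d - 1])
        seen_b.add(ranking_b[d - 1])
        agreement = len(seen_a & seen_b) / d
        score += (1 - phi) * phi ** (d - 1) * agreement
    return score

def normalised_rbo(ranking_a, ranking_b, phi=0.9):
    """Rescale so 0 is an exactly opposite ranking and 1 an identical ranking."""
    lo = rbo(ranking_a, list(reversed(ranking_a)), phi)   # permutation lower bound
    hi = rbo(ranking_a, ranking_a, phi)                   # < 1 at finite depth
    return (rbo(ranking_a, ranking_b, phi) - lo) / (hi - lo)

# e.g. queries sorted worst-first under human labels vs. under LLM labels
human_order = ["q3", "q1", "q4", "q2", "q5"]
model_order = ["q3", "q4", "q1", "q2", "q5"]
print(normalised_rbo(human_order, model_order, phi=0.9))
```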
2309.10621#17 | Large language models can accurately predict searcher preferences | We expand on this idea in Section 5. # 3.6 Other criteria Besides the above, we can imagine other criteria for choosing a labelling process. These might include cost per label; time, per label or end-to-end; reliability; scalability; difficulty of adapting to new languages, markets, or evaluations; and ease of debugging the labelling process. We do not address these criteria here, but in our experience labelling with LLMs is superior to labelling by crowd workers on all these grounds and is superior to labelling by experts (employees or specially-qualified workers) on all grounds except debuggability. # 4 RESULTS After running the prompt, the generated label was converted to a score in [0, 2]. Where we generated multiple labels, the final score is simply the mean. In keeping with the TREC guidelines, if we prompted for aspects we still considered only the overall label. If the model generated unparseable output, we dropped the result entirely: this happened in 90 out of 96 000 cases. TREC-Robust included two sets of topics. Topics up to 650 came from earlier editions of TREC, and had only binary relevance judgements ('relevant' or 'non-relevant'; 1 or 0). Topics 651-700 were developed for the track, and have three-level judgements (adding 'highly relevant', 2). Our prompts generated scores from 0 to 2 for all documents, in line with instructions to TREC-Robust assessors for the new topics. Since comparisons are difficult between a three- and a two-level scale, we follow TREC and Faggioli et al. [2023] by considering 'relevant' and 'highly relevant' together, i.e. by binarising the scores in all cases. We evaluate the quality of these labels (not the documents) in three ways: by comparing the model's labels for each document to the labels from TREC assessors, by comparing the aggregated scores for each query, and by comparing the overall system rankings that result. Table 1 (confusion matrix, TREC assessor rows against model columns): TREC assessor 0: model 0 = 866, model 1 or 2 = 95; TREC assessor 1 or 2: model 0 = 405, model 1 or 2 = 1585. Table 1. Results from the best-performing prompt of Figure 1, i.e. with descriptions, narrative, and aspects, prompt ' | 2309.10621#16 | 2309.10621#18 | 2309.10621 | [
"2305.03495"
] |
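A sketch of the post-processing described in Section 4: parse the JSON the model was asked to produce, average the overall "O" scores across simulated judges, drop unparseable output, and binarise for comparison. The parsing details are assumptions, not the authors' code.

```python
import json
import re

def parse_label(raw_output):
    """Turn one model response into a single overall score in [0, 2].

    Returns None for unparseable output (such responses were dropped; the text
    reports 90 of 96 000 cases). Multiple simulated judges are averaged, and
    only the overall "O" field is used, in keeping with the TREC guidelines.
    """
    match = re.search(r"\[.*\]", raw_output, re.DOTALL)   # tolerate surrounding text
    if not match:
        return None
    try:
        judges = json.loads(match.group(0))
        scores = [float(j["O"]) for j in judges]
    except (ValueError, KeyError, TypeError):
        return None
    return sum(scores) / len(scores) if scores else None

def binarise(score, threshold=1.0):
    """Conflate relevant (1) and highly relevant (2) for two-level comparison."""
    return 1 if score is not None and score >= threshold else 0

print(parse_label('[{"M": 2, "T": 1, "O": 1}, {"M": 1, "T": 2, "O": 2}]'))  # 1.5
```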
2309.10621#18 | Large language models can accurately predict searcher preferences | -DNA-â â over a stratified sample of the TREC Robust data. Overall, the LLM is more likely to say â not relevantâ than were TREC assessors; an LLM assessment of â relevantâ or â highly relevantâ is reliable. Some qrels are missing due to unparsable LLM output, a rate of 1.6%. # 4.1 Comparing scores Similar to Faggioli et al. [2023], we compare these model-generated scores to scores from the TREC assessors. As an example, Table 1 gives a confusion matrix for one prompt and all 3000 query:document pairs. (There are 32 such matrices, one for each set of prompt features or equivalently one for each row of Table 2.) We can see that in this case, the LLM is more likely to say â | 2309.10621#17 | 2309.10621#19 | 2309.10621 | [
"2305.03495"
] |
2309.10621#19 | Large language models can accurately predict searcher preferences | not relevantâ than were TREC assessors (44% vs 33%), and is correspondingly inaccurate (68% agreement with TREC, when the LLM says â not relevantâ ). An LLM assessment of â relevantâ or â highly relevantâ however, is reliable (94% agreement). Table 2 summarises the modelsâ agreement with human judges, over the 3000 query:document pairs, as we manipulate the prompt as above: there is one row for each prompt, identified by which optional features are included. For example, the row labelled â --N-Mâ corresponds to the prompt with narrative and multiple judges, but not role statement, description, or aspects. For each prompt, we report the three document-level, one query-level, and one system-level metrics described above, plus a 95% confidence interval based on 20 bootstraps over documents. The best-performing prompt for each metric is labelled with a â , and these are significantly better than any other (ð ¡ test, ð < 0.05). Performance is highly variable as we change the featuresâ that is, the quality of the labelling depends a great deal on the prompt structure or template. For example, Cohenâ s ð varies from as low as 0.20 (prompt â R---Mâ ) to 0.64 (prompt â -DNA-â ). | 2309.10621#18 | 2309.10621#20 | 2309.10621 | [
"2305.03495"
] |
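A sketch of a bootstrap confidence interval over documents, of the kind behind the intervals reported in Table 2. The percentile interval shown is one common choice, and with only 20 resamples it is necessarily coarse.

```python
import random

def bootstrap_ci(gold, pred, statistic, n_boot=20, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a paired-label statistic, resampling documents."""
    rng = random.Random(seed)
    n = len(gold)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]      # resample with replacement
        stats.append(statistic([gold[i] for i in idx], [pred[i] for i in idx]))
    stats.sort()
    lo = stats[int((alpha / 2) * (n_boot - 1))]
    hi = stats[int((1 - alpha / 2) * (n_boot - 1))]
    return lo, hi

# e.g. bootstrap_ci(trec, model, cohen_kappa) with a kappa function like the one sketched earlier
```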
2309.10621#20 | Large language models can accurately predict searcher preferences | We need to be accordingly careful interpreting any claim based on a single prompt, especially where that prompt has not been tuned against some existing labels; we also observe this in the variable performance reported in Liang et al. [2022], for example. The performance here (ð 0.20 to 0.62) compares favourably to that seen by Damessie et al. [2017], who re-judged 120 documents from TREC-Robust and saw ð of 0.24 to 0.52 for crowd workers, and ð of 0.58 for workers in a controlled lab. In particular, 6/32 prompts here to better than 0.58 and only 3/32 do worse than 0.24. Our agreement also compares favourably to reports from Cormack et al. [1998], who labelled TREC ad-hoc documents a second time, using a second group of assessors. From their data we can compute Cohenâ s ð = 0.52 between two groups of trained human assessors. On other data sets, Castillo et al. [2006] report ð = 0.56 labelling web pages for spam; Hersh et al. [1994] report ð = 0.41 on relevance in the OHSUMED collection; Agarwal et al. [2019] saw ð = 0.44 for news sentiment; and Scholer et al. [2013] reported that assessors seeing a document for a second time only agreed with their first label 52% of the time. Faggioli et al. [2023] reported ð from 0.26 to 0.40 on binarised labels from TREC-8 and TREC Deep Learning. Faggioli et al. used another LLM but with relatively simple prompt, reinforcing LLMsâ sensitivity to their prompt. On this metric, at least, we can conclude that with minimal iterations LLMs are already at human quality for this collection and for some prompts. In Section 5 we will see that, in a common setting, LLMs can perform substantially better than third-party judges. | 2309.10621#19 | 2309.10621#21 | 2309.10621 | [
"2305.03495"
] |
2309.10621#21 | Large language models can accurately predict searcher preferences | Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra scores MAE Prompt features â â â â â 0.34± 0.01 0.38± 0.02 R â â â â 0.38± 0.02 â D â â â 0.36± 0.02 â â N â â 0.35± 0.02 â â â A â 0.19± 0.02 â â â â | 2309.10621#20 | 2309.10621#22 | 2309.10621 | [
"2305.03495"
] |
2309.10621#22 | Large language models can accurately predict searcher preferences | M 0.46± 0.02 0.32± 0.02 0.35± 0.03 0.37± 0.03 0.60± 0.03 0.22± 0.02 0.71± 0.01 0.72± 0.01 0.73± 0.01 0.82± 0.02 0.65± 0.01 R D â â â 0.40± 0.02 R â N â â 0.38± 0.02 R â â A â 0.21± 0.02 R â â â M 0.49± 0.02 â D N â â 0.35± 0.02 â D â A â 0.19± 0.01 â D â â M 0.45± 0.01 â â N A â 0.18± 0.01 â â N â M 0.41± 0.02 â â â | 2309.10621#21 | 2309.10621#23 | 2309.10621 | [
"2305.03495"
] |
2309.10621#23 | Large language models can accurately predict searcher preferences | R D N A M 0.16± 0.02★ 0.51± 0.06 0.77± 0.03 Table 2. Performance of the variant prompts of Figure 1, compared to human labels on a stratified sample of the TREC Robust data. R = include role, D = include description, N = include narrative, A = include aspects, M = include multiple 'judges'. Accuracy of document scores is measured with mean absolute error and with Cohen's κ against TREC assessors on binary labels. Accuracy of document preference is measured with AUC. Accuracy of query and system ordering is measured with RBO, normalised to the range 0-1. Uncertainty is reported as a 95% confidence interval based on 20 bootstraps. ★ marks the best prompt in each case (significantly better than the next-best performer, one-sided t test, p < 0.05). | 2309.10621#22 | 2309.10621#24 | 2309.10621 | [
"2305.03495"
] |
2309.10621#24 | Large language models can accurately predict searcher preferences | R −0.04; D +0.01; N +0.06; A +0.21; M −0.13. Table 3. Performance impact of the optional prompt features in Figure 1, measured using κ against TREC assessors. All changes are statistically significant and effects are ±0.005 at a 95% CI. # 4.2 Effect of prompt features Table 2 gives results for 32 prompt templates, made from turning five features on or off. To try to summarise the effect of each feature individually, Table 3 reports the effect of each feature on κ | 2309.10621#23 | 2309.10621#25 | 2309.10621 | [
"2305.03495"
] |
2309.10621#25 | Large language models can accurately predict searcher preferences | Large language models can accurately predict searcher preferences â 0.04 +0.01 +0.06 +0.21 â 0.13 Table 3. Performance impact of the optional prompt features in Figure 1, measured using ð against TREC assessors. All changes are statistically significant and effects are ±0.005 at a 95% CI. # 4.2 Effect of prompt features Table 2 gives results for 32 prompt templates, made from turning five features on or off. To try to summarise the effect of each feature individually, Table 3 reports the effect of each feature on ð | 2309.10621#24 | 2309.10621#26 | 2309.10621 | [
"2305.03495"
] |
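A sketch of a Table 3-style summary: for each feature, the mean kappa over prompts that include it minus the mean over prompts that do not. Prompt names follow the '-DNA-' / 'R---M' convention used in the paper; the input dictionary here is hypothetical.

```python
from itertools import product
from statistics import mean

FEATURES = "RDNAM"

def feature_effects(kappa_by_prompt):
    """Mean change in kappa from turning each feature on, averaged over
    all settings of the other features.

    kappa_by_prompt maps prompt names such as '-DNA-' or 'R---M' to kappa.
    """
    effects = {}
    for i, feat in enumerate(FEATURES):
        on = [k for name, k in kappa_by_prompt.items() if name[i] == feat]
        off = [k for name, k in kappa_by_prompt.items() if name[i] == "-"]
        effects[feat] = mean(on) - mean(off)
    return effects

# Hypothetical input: one kappa per prompt variant (32 variants as in Table 2).
example = {"".join(f if on else "-" for f, on in zip(FEATURES, bits)): 0.4
           for bits in product([False, True], repeat=5)}
print(feature_effects(example))   # all zeros for this flat example
```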
2309.10621#26 | Large language models can accurately predict searcher preferences | â that is, the effect of including a prompt feature independent of any other features being on or off. Contrary to our expectations, there is a statistically significant negative effect due to role (R) and multiple â judgesâ (M): ð decreases by an average 0.04 and 0.13 respectively. Adding description (D) gives an insubstantial boost (only 0.01 points of ð ). Adding a narrative (N) leads to a boost of 0.04; this is modest, but perhaps the background knowledge of LLMs (especially on well-used public data like this) is enough that the narrative adds little information beyond the Aspects (A) give a substantial improvement in ð against TREC assessors, +0.21. Topicality and trustworthiness are the two aspects we used here, but of course that are not the only aspects that might matter, and we do not claim they are the best selection; in Bing we use several aspects, and measure the LLMâ s performance on all of these with good results. In this case it seems likely, in fact, that it is the step-by-step nature of labelling with aspects that gives rise to these improvements rather than the particulars of the aspects themselves. Note that this presents features in isolation, when in fact any prompt could have zero, one, two, three, four, or all five of these features at once and the effects are not necessarily additive. The best-performing prompt in Table 2 is, however, of the form â -DNA-â which is expected from this analysis. # 4.3 Effect of prompt length Using an LLM to compare texts, Wang et al. [2023] saw an effect of prompt lengthâ the longer the text, the more positive the LLMâ | 2309.10621#25 | 2309.10621#27 | 2309.10621 | [
"2305.03495"
] |
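A minimal sketch of the prompt-length check: an ordinary least-squares slope of signed error on prompt length. Unlike the analysis described above it does not control for prompt features, so it is a simplification.

```python
from statistics import mean

def signed_error_slope(prompt_lengths, gold, pred):
    """OLS slope of signed error (pred - gold) on prompt length, in score
    points per character. A positive slope would mean the model becomes
    more generous as prompts grow."""
    errors = [p - g for p, g in zip(pred, gold)]
    x_bar, y_bar = mean(prompt_lengths), mean(errors)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(prompt_lengths, errors))
    den = sum((x - x_bar) ** 2 for x in prompt_lengths)
    return num / den

# e.g. slope = signed_error_slope(prompt_lengths, trec_scores, llm_scores)
```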
2309.10621#27 | Large language models can accurately predict searcher preferences | 9 × 10⁻⁶ and 1 × 10⁻⁵ per character of prompt. This corresponds to only a shift in score of −0.05 to 0.06 at the median prompt length, which (although statistically significant) is of no practical significance given the MAEs of Table 2. # 4.4 Effect of paraphrasing prompts We have seen that LLM performance varies considerably as the prompt is varied, even when the task and the input data are fixed. This raises a question: how sensitive is the LLM not just to coarse prompt features, such as asking for aspects, but to quirks of phrasing? In other words, if we rephrased 'assume that you are writing a report' to 'pretend you are collecting information for a report', or to 'you are collecting reading material before writing a report' | 2309.10621#26 | 2309.10621#28 | 2309.10621 | [
"2305.03495"
] |
2309.10621#28 | Large language models can accurately predict searcher preferences | 9 Ã 10â 6 and 1 Ã 10â 5 per character of prompt. This corresponds to only a shift in score of -0.05 to 0.06 at the median prompt length, which (although statistically significant) is of no practical significance given the MAEs of Table 2. Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra # 4.4 Effect of paraphrasing prompts We have seen that LLM peformance varies considerably as the prompt is varied, even when the task and the input data are fixed. This raises a question: how sensitive is the LLM not just to coarse prompt features, such as asking for aspects, but to quirks of phrasing? In other words, if we rephrased â assume that you are writing a reportâ to â pretend you are collecting information for a reportâ , or to â you are collecting reading material before writing a reportâ | 2309.10621#27 | 2309.10621#29 | 2309.10621 | [
"2305.03495"
] |
2309.10621#29 | Large language models can accurately predict searcher preferences | , would the labels change? If so, then our LLM is highly sensitive to such apparently trivial considerations. That would mean that, first, the results above are only representative of a wide range of possible performance; and second, any serious attempt to use LLMs at scale needs to explore a large and unstructured prompt space. To test this, we took the â -DNA-â promptâ the best aboveâ and generated 42 paraphrases by rewriting the text â Given a query and a web page . . . Otherwise, mark it 0â and by rewriting the text â Split this problem into steps: . . . Produce a JSON array of scores without providing any reasoningâ . Figure 3 gives some examples. Figure 4 shows the resulting spread of label quality, measured again as Cohenâ s ð against the labels from TREC assessors and across our stratified sample of 3000 documents. Each paraphrase is represented by one dark line, showing the mean ð and a 95% confidence interval derived from 20 bootstraps over documents. There is a large range, from mean ð = 0.50 (moderate agreement) to mean ð = 0.72 (substantial agreement, and better than the reference values cited above [Agarwal et al. 2019; Castillo et al. 2006; Cormack et al. 1998; Faggioli et al. 2023; Hersh et al. 1994]). The empirical 95% confidence interval, over all bootstraps and all paraphrases, is 0.50â 0.71 (plotted at the left-hand edge of Figure 4). This is a wide range from a single prompt design, and from Figure 3 it is not at all apparent which versions would score higher or why. The outsized effect of simple paraphrases has been observed in other domains as well [Zhang et al. 2022; Zhou et al. 2022]. This leads to two observations. First, the measured performance of any promptâ including those in Table 2â should be taken as a single sample from a wider range of potential performance. Small tweaks to the wording could result in noticeably different performance, even without any changes to the promptsâ overall design. Second, it is prudent to fix an overall design, and then explore rephrasing and other options. | 2309.10621#28 | 2309.10621#30 | 2309.10621 | [
"2305.03495"
] |
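A sketch of the repeated-split procedure in Section 4.5: on each random half/half split of the documents, pick the prompt with the best kappa on the first half and check whether it still beats the baseline on the second. The data layout (one list of scores per prompt, aligned with the gold labels) and the function names are assumptions for illustration.

```python
import random

def repeated_split_check(kappa_fn, gold, preds_by_prompt,
                         baseline="-----", iters=1000, seed=0):
    """Fraction of random half/half splits where the prompt that was best on
    the first half still beats the baseline prompt on the second half."""
    rng = random.Random(seed)
    n = len(gold)
    wins = 0
    for _ in range(iters):
        idx = list(range(n))
        rng.shuffle(idx)
        first, second = idx[: n // 2], idx[n // 2:]

        def kappa_on(split, prompt):
            return kappa_fn([gold[i] for i in split],
                            [preds_by_prompt[prompt][i] for i in split])

        best = max(preds_by_prompt, key=lambda p: kappa_on(first, p))
        wins += kappa_on(second, best) > kappa_on(second, baseline)
    return wins / iters

# e.g. repeated_split_check(cohen_kappa, trec_labels, {"-----": [...], "-DNA-": [...], ...})
```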
2309.10621#30 | Large language models can accurately predict searcher preferences | Because it is not clear what leads to better or worse performance, taking paraphrases is a reasonable approach, but we note work by Pryzant et al. [2023], Yang et al. [2023], Zhou et al. [2022], and others that suggests alternatives for fine-tuning prompts. # 4.5 Effect of document selection Given the different performance of the different prompts, and indeed the different paraphrases, it is tempting to choose the best-performing variant and commit to using it for future labelling. This of course carries a risk: performance on these topics and documents might not predict performance on other, unseen, topics and documents. The conventional guard against this is a train:test split. Here, we can interpret â trainingâ as the choice of prompt, and we used repeated splits to understand the risk of choosing the best variant. For each of 1000 iterations, we randomly split our 3000 TREC and LLM labels into two sets of 1500 documents. We measured ð for each prompt (or paraphrase) over the first 1500, noted the best performer (highest ð ), and measured again on the second 1500. The results were consistent. When scoring prompts (Table 2), in all 1000 iterations the best-performing prompt on the first split also beat the baseline â -----â on the second split. | 2309.10621#29 | 2309.10621#31 | 2309.10621 | [
"2305.03495"
] |
2309.10621#31 | Large language models can accurately predict searcher preferences | That means that, starting from the baseline prompt, if we chose an alternative because it was the best improvement on one set of documents, we can be almost certain that prompt would still be an improvement on another set. In 829/1000 first splits, the best-performing variant was -DNA-, Large language models can accurately predict searcher preferences Original Given a query and a web page, you must provide a score on an integer scale of 0 to 2 with the following meanings: 2 = highly relevant, very helpful for this query 1 = relevant, may be partly helpful but might contain other irrelevant content 0 = not relevant, should never be shown for this query Assume that you are writing a report on the subject of the topic. If you would use any of the information contained in the web page in such a report, mark it 1. If the web page is primarily about the topic, or contains vital information about the topic, mark it 2. Otherwise, mark it 0. . . . | 2309.10621#30 | 2309.10621#32 | 2309.10621 | [
"2305.03495"
] |
2309.10621#32 | Large language models can accurately predict searcher preferences | Split this problem into steps: Consider the underlying intent of the search. Measure how well the content matches a likely intent of the query (M). Measure how trustworthy the web page is (T). Consider the aspects above and the relative importance of each, and decide on a final score (O). Produce a JSON dictionary of scores without providing any reasoning. ð = 0.64 Paraphrase 1 Rate each web page for how well it matches the query, using these numbers: 0 = no match, 1 = some match, 2 = great match. Think of writing a report on the query topic. A web page gets 2 if it is mainly about the topic or has important information for the report. A web page gets 1 if it has some information for the report, but also other stuff. A web page gets 0 if it has nothing to do with the topic or the report. . . . To score this problem, follow these steps: - Think about what the search query is trying to achieve. - Assign a score from 0 to 2 for how well the content addresses the queryâ s goal (M). Higher scores mean better matches. - Assign a score from 0 to 2 for how reliable the web page is (T). Higher scores mean more trustworthiness. - Combine the scores for M and T, and give more weight to the more important aspect. Assign a final score from 0 to 2 (O). Higher scores mean better overall quality. - Write a JSON dictionary with the keys M, T, and O, and their corresponding scores. Do not explain your scores. | 2309.10621#31 | 2309.10621#33 | 2309.10621 | [
"2305.03495"
] |
2309.10621#33 | Large language models can accurately predict searcher preferences | κ = 0.50 Fig. 3. Examples of paraphrased prompts, based on prompt format '-DNA-' (description, narrative, and aspects). Each paraphrase was run with each of our 3000 sampled documents, to gauge the model's sensitivity to changes in the prompt text. [Figure 4 plots κ against TREC assessors (y-axis, 0.00 to 1.00) for each paraphrase, marking the original -DNA- prompt (best in Table 2) and prompt R---M (worst in Table 2) for reference.] Fig. 4. Variation in Cohen's κ between LLM labels and human labels, over a stratified sample of 3000 documents from TREC-Robust. Small changes in the wording of the prompt, while keeping the structure the same, lead to substantial changes in κ | 2309.10621#32 | 2309.10621#34 | 2309.10621 | [
"2305.03495"
] |
2309.10621#34 | Large language models can accurately predict searcher preferences | ð = 0.50 Fig. 3. Examples of paraphrased prompts, based on prompt format â -DNA-â (description, narrative, and aspects). Each paraphrase was run with each of our 3000 sampled documents, to gauge the modelâ s sensitivity to changes in the prompt text. Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra 1.00 075 Original prompt -DNA- (best from Table, P) 050 k against TREC assessors Prompt R---M (worst from Table 2) 0.00 Fig. 4. Variation in Cohenâ s ð between LLM labels and human labels, over a stratified sample of 3000 documents from TREC-Robust. Small changes in the wording of the prompt, while keeping the structure the same, lead to substantial changes in ð | 2309.10621#33 | 2309.10621#35 | 2309.10621 | [
"2305.03495"
] |
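A sketch of aggregating document labels to query and run scores with a simple metric (P@k), then ordering queries from hardest to easiest as in the per-query analysis; the dictionaries for the run and the qrels are hypothetical stand-ins for TREC data.

```python
def precision_at_k(ranked_docs, relevant, k=10):
    """Fraction of the top-k retrieved documents judged relevant."""
    top = ranked_docs[:k]
    return sum(doc in relevant for doc in top) / k

def query_and_run_scores(run, qrels, k=10):
    """run: {query: [doc ids in ranked order]}; qrels: {query: set of relevant docs}.

    Returns per-query P@k (for finding hard queries), the queries ordered
    hardest-first, and the mean over queries (a run-level score)."""
    per_query = {q: precision_at_k(docs, qrels.get(q, set()), k)
                 for q, docs in run.items()}
    hardest_first = sorted(per_query, key=per_query.get)
    return per_query, hardest_first, sum(per_query.values()) / len(per_query)

run = {"q1": ["d1", "d2", "d3"], "q2": ["d4", "d5", "d6"]}
qrels = {"q1": {"d1", "d3"}, "q2": {"d9"}}
print(query_and_run_scores(run, qrels, k=3))
```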
2309.10621#35 | Large language models can accurately predict searcher preferences | Table 4 columns: P@10; RBP@100, φ = 0.6; MAP@100. Hardest queries, RBO with φ | 2309.10621#34 | 2309.10621#36 | 2309.10621 | [
"2305.03495"
] |
2309.10621#36 | Large language models can accurately predict searcher preferences | = 0.9: 0.40, 0.42, 0.48, with 0.04 for a random permutation. Best runs, RBO with φ = 0.7: 0.79, 0.63, 0.50, with 0.03 for a random permutation. Best groups, RBO with φ = 0.7: 0.97, 0.91, 0.58, with 0.21 for a random permutation. Table 4. Consistency of rankings on LLM labels compared to human labels, replicating all qrels in TREC-Robust to a depth of 100. Queries, runs, and groups were scored with each of three metrics, based on each of two sets of labels. Higher numbers mean the rankings based on LLM labels are more like those based on human labels. We report normalised RBO, ranging from zero (LLMs and humans put queries/runs/groups in opposite order) to one (LLMs and humans give scores putting queries/runs/groups in the same order). In Table 4 we report the consistency of query and run rankings as we switch from human-assigned to LLM-assigned labels. In each case we score all the queries with one metric, | 2309.10621#35 | 2309.10621#37 | 2309.10621 | [
"2305.03495"
] |
2309.10621#37 | Large language models can accurately predict searcher preferences | e.g. P@10â based on TRECâ s human labels, and score them again based on our LLM labels. (We collected additional labels so that every document retrieved to depth 100, in every run, was labelled with prompt -DNA- except those which were never labelled at TREC. For consistency with TREC, we assume these unlabelled documents are not relevant.) This gives two rankings of queries. The consistency between these rankings is measured with RBO, normalised so that a score of 0 represents an inverted order and a score of 1 represents an identical ordering. We assume an experimenter would be willing to look at the worst ten queries, so set ð = 0.9. To help interpret the figures we also report the RBO scores for random permutations, i.e. the consistency between the TREC ordering and a random re-ordering of the same queries. The exercise is repeated for all 110 runs, assuming we want to find the best three or four runs (ð = 0.7). Since runs from the same group are likely very similar, we also repeat the exercise for the best run for each groupâ this simulates choosing the best approach (or perhaps vendor), rather than the best parameter settings. Again we assume we want to find the best three or four for further examination. The consistency of rankings, in all three cases, depends on the metric being used: ordering by MAP is more consistent for queries, and ordering by average P@10 is more consistent for runs and groups. Group-level rankings are more consistent than runs or queries, no matter the metric. It is harder to be consistent when ranking 250 queries than when ranking 110 runs or 14 groups, and small perturbations make a larger difference in ranking since many queries have similar scores. Nonetheless we see that for any problem and choice of metric, labels from LLMs lead to overall rankings which are at least similar to those from human labels, and our imagined experimenters would make similar choices. For example, under all metrics the top three runs are the same; the top five groups are consistent under P@10, the top three under RBP@100, and three of the top four under MAP@100. The worst-performing query is the same under TREC or LLM labels for P@10 and RBP@100, and two of the top three are the same under MAP@100. | 2309.10621#36 | 2309.10621#38 | 2309.10621 | [
"2305.03495"
] |
2309.10621#38 | Large language models can accurately predict searcher preferences | Of course perfect agreement is unlikely even with humans labelling. By way of comparison, Voorhees [1998] reports ð = 0.94 across runs, using labels from different assessors. This is on a different data set, with correspondingly different judgements (and only 33 runs), but give a rough upper bound for how consistent runs could ever be. Faggioli et al. [2023] demonstrate ð from 0.76 to 0.86 on TREC Deep Learning data, again under slightly different circumstances (notably, shorter documents and fewer runs). We see ð from 0.77 (MAP@100) to 0.86 (P@10) for our 110 runs with full documents. Given the ð and AUC figures in Table 2, this is at least promising and plausibly as good as most human labellers. | 2309.10621#37 | 2309.10621#39 | 2309.10621 | [
"2305.03495"
] |
2309.10621#39 | Large language models can accurately predict searcher preferences | Table 5: Employees: relative accuracy +24%, latency hours to days, relative throughput ×1/100, relative cost ×8. Best crowd: +19%, hours to days, ×1/15, ×5. Typical crowd: accuracy baseline, hours, ×1, ×1. LLM (GPT-4): +28%, minutes to hours, ×10, ×1/20. Table 5. Labelling schemes compared. 'Crowd' are crowd workers via our in-house platform, 'LLM' is the best-performing prompt from private experiments. 'Latency' is the time to the first usable labels, 'cost' is the dollar cost alone. These figures give an overall comparison, but please note that they depend on our particular computing resources, crowd contracts, assessor training, and other details. # 4.7 Observations We see somewhat better results than those reported by Faggioli et al. [2023], particularly in agreement on the raw labels (κ). There are at least two factors at work. First, we are using a more capable model (GPT-4 with local modifications, compared to stock GPT-3.5); and second, our prompts are based on our experiences in Bing, and relatively long, whereas those of Faggioli et al. are simpler. Even small wording changes can make a difference (Figure 4), and selecting prompt features makes a bigger difference still (Table 2). Again, this demonstrates that time spent on this configuration (which is comparable to time spent on instruments and instructions for crowd or in-house workers) | 2309.10621#38 | 2309.10621#40 | 2309.10621 | [
"2305.03495"
] |
2309.10621#40 | Large language models can accurately predict searcher preferences | can pay dividends. These results show that LLMs are competent at labelling: at the minimum, with GPT-4 and in the TREC-Robust setting. The labels are as close to those from humans as we could expect, given the disagreement between people to begin with, and we can reasonably consistently identify the hardest queries, best runs, and best groups. We now turn to LLM labelling at scale, in the context of a running search engine, where LLMs have proved not just more efficient but more accurate than the status quo. # 5 LLM LABELLING IN USE: WEB SEARCH AT BING The results above are on one corpus (TREC-Robust '04, based on documents from the TREC ad-hoc collections) and labels from trained assessors working over simulated information needs. At Bing we have also seen good results with our web corpus, queries from real Bing use, and labels from searchers with real needs. Accordingly we have been using LLMs, in conjunction with a reduced number of human labellers, for most of our offline metrics since late 2022. # 5.1 Experience with LLMs at Bing At Bing we have made heavy use of crowd workers, for many years, to scale to the number of labels, languages, and markets we need. Despite systems for detecting and removing low-quality labels and workers, this scale has come at a cost of natural biases, mistakes, and adversarial workers. In Table 5 we summarise our experiences with labelling to date, considering (top to bottom) full-time Bing employees (mainly scientists and engineers working on metrics); our best crowd workers, recruited and trained specifically for metrics problems and with close oversight; our general pool of crowd workers, subject to quality control but minimal training; and our LLM models, based on GPT-4. LLM models give us better accuracy at vastly reduced latency and cost. In current work with newer models and prompts, we expect to see a further increase in accuracy of 8–10% in some languages, with around five times the throughput. The prompts in use are confidential. In our case we include the URL, since this is always defined for web documents; we also include date, location, language and other information available from our logs. | 2309.10621#39 | 2309.10621#41 | 2309.10621 | [
"2305.03495"
] |
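A small harness can make the prompt-evaluation step concrete: score each candidate prompt against a held-out gold set and keep the best. Everything below is an illustrative sketch rather than Bing's tooling: call_llm is a placeholder for whatever API produces a label, the candidate prompts and gold triples are assumed inputs, and plain agreement stands in for whichever measure (Cohen's kappa, pairwise AUC) is actually reported.

```python
from typing import Callable, Dict, List, Tuple

Gold = List[Tuple[str, str, int]]            # (query, document, gold label)
Labeller = Callable[[str, str, str], int]    # (prompt, query, document) -> label

def call_llm(prompt: str, query: str, document: str) -> int:
    """Placeholder for the real labelling call; substitute an actual API client here."""
    raise NotImplementedError

def agreement(prompt: str, gold: Gold, labeller: Labeller) -> float:
    """Fraction of gold triples whose label the prompt reproduces exactly.
    A fuller harness would report Cohen's kappa or pairwise AUC instead."""
    hits = sum(labeller(prompt, q, d) == label for q, d, label in gold)
    return hits / len(gold)

def select_prompt(candidates: Dict[str, str], gold: Gold, labeller: Labeller) -> str:
    """Score every candidate prompt on the gold set and return the best one's name."""
    scores = {name: agreement(text, gold, labeller) for name, text in candidates.items()}
    return max(scores, key=scores.get)

# Illustrative use (names are invented):
#   candidates = {"role_plus_aspects": "...", "narrative_only": "..."}
#   best = select_prompt(candidates, gold_triples, call_llm)
```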
2309.10621#41 | Large language models can accurately predict searcher preferences | In our experience LLMs do remarkably well. They have proved more accurate than any third-party labeller, including staff; they are much faster end-to-end than any human judge, including crowd workers; they scale to much better throughput; and of course are many times cheaper. This has let us measure many more results than previously, with associated gains in sensitivity (we can see smaller effects if we label more things). The end-to-end speed, also much improved, is helping Bing engineers try more things and get more done. # 5.2 Evaluating labellers and prompts In Bing's case we have found breadth preferable to depth: that is, we prefer small data for many queries to the TREC-Robust approach of more data for fewer queries. All else being equal, we also prefer queries which resemble a real web search workload rather than the invented needs of TREC-Robust. Our gold labels are, therefore, largely gathered in situ: from employees and contractors in the context of their normal search activity, and also from feedback from the general public. This data is collected at or close to the time of need, by people who had the need, and in view of a full SERP (including e.g. images, maps, and advertisements). These properties mean the data is very reliable: if a label says some document is good (or bad), it is almost certainly so in the eyes of the person who issued the query. Our ground truth corpus comprises queries, descriptions of need, metadata like location and date, and at least two example results per query. Results are tagged (again, by the real searcher) as being good, neutral, or bad and these tags may be reviewed by Microsoft staff prior to inclusion in our corpus. Similar to the TREC experiments above, from this we can derive pairs of preferred and non-preferred results and then treat labelling and scoring as a binary classification problem: the preferred result should score higher than the non-preferred, for all queries and pairs of results. Again, we can use pairwise agreement to evaluate the labels. | 2309.10621#40 | 2309.10621#42 | 2309.10621 | [
"2305.03495"
] |
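A minimal sketch of the pair derivation and pairwise agreement just described, under assumed data structures: per-query lists of results tagged good, neutral, or bad, and a scoring function standing in for the labels being evaluated. Treating ties as failures is an assumption, not the paper's rule, and the optional per-language breakdown anticipates the stratified reporting discussed below.

```python
from collections import defaultdict
from itertools import combinations

TAG_ORDER = {"bad": 0, "neutral": 1, "good": 2}   # searcher tags, as described above

def preference_pairs(results_by_query):
    """Turn per-query (doc, tag) lists into (query, preferred_doc, other_doc) pairs,
    one pair for every two results with different tags."""
    pairs = []
    for query, tagged in results_by_query.items():
        for (doc_a, tag_a), (doc_b, tag_b) in combinations(tagged, 2):
            if TAG_ORDER[tag_a] > TAG_ORDER[tag_b]:
                pairs.append((query, doc_a, doc_b))
            elif TAG_ORDER[tag_b] > TAG_ORDER[tag_a]:
                pairs.append((query, doc_b, doc_a))
    return pairs

def pairwise_agreement(pairs, score, language_of=None):
    """Fraction of pairs where the labeller scores the preferred result higher.
    Ties count as failures here (an assumption). If language_of is given,
    also report agreement per language stratum."""
    overall, by_lang = [], defaultdict(list)
    for query, preferred, other in pairs:
        correct = score(query, preferred) > score(query, other)
        overall.append(correct)
        if language_of:
            by_lang[language_of(query)].append(correct)
    report = {"overall": sum(overall) / len(overall)}
    report.update({lang: sum(v) / len(v) for lang, v in by_lang.items()})
    return report

# Illustrative use with a toy scoring function standing in for LLM labels.
results = {"best pizza": [("doc1", "good"), ("doc2", "bad"), ("doc3", "neutral")]}
toy_scores = {("best pizza", "doc1"): 2, ("best pizza", "doc2"): 1, ("best pizza", "doc3"): 1}
pairs = preference_pairs(results)
print(pairwise_agreement(pairs, lambda q, d: toy_scores[(q, d)]))
```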
2309.10621#42 | Large language models can accurately predict searcher preferences | At the time of these experiments our ground corpus comprised about 2.5 million such pairs, in about ten languages and from about fifty countries. Using three labels does conflate small distinctions ("it's a little bit better", e.g. good vs neutral results) and large distinctions ("it's a lot better", good vs bad results), but our ground truth corpus has distinct advantages in that we can collect preferences from real searchers in their own context, and providing a preference is easier than providing absolute labels [Carterette et al. 2008]. Moreover, the focus on general labels maximises the reuse of the corpus as the definition of a good or bad result is unlikely to evolve over time, whereas subtle distinctions might be subject to change. Our user-generated ground truth corpus gives us an evaluation which is independent of the labels from third-party judges. In particular, by measuring against user-generated labels we can identify cases where the model is more accurate than third-party human judges; if we only had third-party labels, we could identify labelling disagreements but not resolve them one way or the other. For AUC scores to be useful, of course the data must represent some population of interest: at Bing we stratify the triples by language and by important result attributes (for example recency, authority, or topicality). This is not a uniform sample but instead lets us identify areas of particular concern. # 5.3 Monitoring the LLM system The results above give us a good deal of confidence that a large language model, appropriately prompted, can produce high-quality labels for at least some of the aspects important to our ongoing evaluation. As an additional safety check, we routinely compare the LLM's labels to those from (trained and qualified) assessors. Every week, we take a stratified sample of query:document pairs labelled by the model, chosen from amongst those that our experiments have used | 2309.10621#41 | 2309.10621#43 | 2309.10621 | [
"2305.03495"
] |
2309.10621#43 | Large language models can accurately predict searcher preferences | recently. Those are re-labelled by our reviewers, and we monitor for shifts either in disagreement rate or patterns of disagreement; any changes are investigated by a dedicated metrics team with expertise in both the crowd and LLM processes. In practice, large changes are rare, and resolved in favour of the LLM as often as in favour of the humans. Since we use a highly skilled set of judges this remains an expensive process, but it is relatively lightweight and to date has needed less than a day a week of employee time. In addition to the human oversight of our LLM-based labels we have a large set of queries that we consistently relabel. On a day-to-day basis we expect no change in the labels associated with this set; that is, the expected value of (day n labels − day n + 1 labels) is zero. This automated system is designed to monitor the health of labelling systems and provides a more rapid response than the human-based evaluation (both checks are sketched below). Our system therefore sits somewhere between Clarke et al.' | 2309.10621#42 | 2309.10621#44 | 2309.10621 | [
"2305.03495"
] |
2309.10621#44 | Large language models can accurately predict searcher preferences | s "manual verification" and "fully automated" options [2023], with the scale of a fully automated system but some degree of control and quality assurance from manual verification. Disagreements, and analyses of these, can inform future developments of the metrics and the gold set as well as the LLM labeller. We note, too, that although LLM labels are important to our evaluation they are only one part of a web-scale search system. Amongst other things, web search needs to account for spam, misinformation, piracy, and other undesirable material; needs to treat some topics carefully and with editorial input (health, finance, and others); and needs to account for diversity in the final ranking. Our LLM prompts are not intended to replace these or other safety systems. # 6 POTENTIAL LIMITATIONS AND PITFALLS Using LLMs for automated relevance labelling is a recent phenomenon, and initial evidence is promising to say the least. The field would, however, also benefit from acknowledging how little we understand potential limitations and negative externalities of these approaches. | 2309.10621#43 | 2309.10621#45 | 2309.10621 | [
"2305.03495"
] |
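Returning to the monitoring routine of §5.3 above, the two checks (weekly reviewer re-labelling of a stratified sample, and a fixed query set that is consistently relabelled) reduce to a pair of simple computations. The data structures and the alerting margin below are illustrative assumptions, not Bing's actual thresholds:

```python
def disagreement_rate(llm_labels, reviewer_labels):
    """Share of sampled items where the model and the reviewers disagree."""
    assert llm_labels.keys() == reviewer_labels.keys()
    diffs = sum(llm_labels[k] != reviewer_labels[k] for k in llm_labels)
    return diffs / len(llm_labels)

def label_flip_rate(yesterday, today):
    """Day-over-day change on the consistently relabelled query set;
    in expectation this should be zero (very few flips)."""
    common = yesterday.keys() & today.keys()
    flips = sum(yesterday[k] != today[k] for k in common)
    return flips / len(common)

def needs_review(rate, baseline, margin=0.03):
    """Flag a shift for the metrics team; the margin is an illustrative choice."""
    return abs(rate - baseline) > margin

# Illustrative weekly check on a (tiny) stratified sample.
llm      = {"q1:d1": 2, "q1:d2": 0, "q2:d9": 1, "q3:d4": 2}
reviewer = {"q1:d1": 2, "q1:d2": 1, "q2:d9": 1, "q3:d4": 2}
rate = disagreement_rate(llm, reviewer)                  # 0.25 on this sample
print(rate, needs_review(rate, baseline=0.20))           # flags a shift to investigate

# Illustrative daily check on the fixed, consistently relabelled set.
fixed_day1 = {"q10": 1, "q11": 2, "q12": 0}
fixed_day2 = {"q10": 1, "q11": 2, "q12": 0}
print(label_flip_rate(fixed_day1, fixed_day2))           # 0.0: no day-over-day drift
```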
2309.10621#45 | Large language models can accurately predict searcher preferences | Language models are known to reproduce and amplify harmful stereotypes and biases of social import [Bender et al. 2021; Blodgett et al. 2020; Bolukbasi et al. 2016; Caliskan et al. 2017; Gonen and Goldberg 2019] and therefore there is an immediate need to study if and how these biases may also manifest in relevance labelling. These biases may further intensify existing representational and allocative harms from search systems [Noble 2018; Sweeney 2013]. Other forms of bias unrelated to concerns of demographic fairness, such as under-estimating the relevance of longer documents [Hofstätter et al. 2020], may also manifest more systemically when relevance labels are solicited from LLMs rather than crowd-workers. It may be tempting to suggest employing a variety of different prompts and underlying LLMs to address this issue (similar to employing a diverse group of crowd-workers), but that may or may not have the desired effect if the outputs across these variations are correlated and exhibit similar biases. The quality of LLM-generated relevance labels may also vary disproportionately for content that is in different languages, from different geographical locations, and for different demographic groups due to disparate availability of data across these dimensions that have been employed for LLM training. Efforts to address these biases may further create undesirable incentives for more pervasive data collection and user surveillance. Developers of search systems who evaluate using and optimise towards these LLM-based labels also risk falling into the trap of over-fitting to the idiosyncrasies of the LLM rather than towards improving true relevance, in line with Goodhart's law [Chrystal and Mizen 2001; Goodhart 1975; Hoskin 1996; Thomas and Uminsky 2022]. Agreement with our in-situ or TREC gold labels suggests this is not yet a problem (we are closer to the ground truth with LLMs than with third-party assessors), but this may change as large models play a bigger role in ranking or as web authors start optimising for LLM labels. LLM-generated relevance labels may also show bias towards ranking models that themselves | 2309.10621#44 | 2309.10621#46 | 2309.10621 | [
"2305.03495"
] |
2309.10621#46 | Large language models can accurately predict searcher preferences | [Figure 5: labelling options plotted by relative cost (x-axis, 0% to 1000%) against relative accuracy (y-axis, 90% to 130%), with points for Real searcher, LLM, Employee, Best crowd, and Typical crowd.] Fig. 5. Labelling options discussed in this work, along with the cost and accuracy we see at Bing. All else being equal, as experimenters we would like to move up and left in this space. A traditional approach uses gold and silver labels to improve crowd workers; we use gold labels to select LLMs and prompts. incorporate LLMs, although if we are to truly embrace the lens of knowledge distillation in describing the evaluation and optimisation using these labels then those biases may at least be partially justified. Biases may arise not just from LLMs learning spurious correlations with respect to their inputs, but due to the absence of certain information that human annotators would have access to (e.g. images and other non-textual content), and more subtly due to differences in what these models and humans pay attention to [Bolotova et al. 2020; Kazai et al. 2022]. Whether website designers can take advantage of such biases in LLMs-for-labelling systems to unfairly gain more exposure for their content, or whether large chunks of the web optimising towards what LLMs deem important leads to undesirable shifts in trends and homogenisation of online content, are also important topics for future research. Examples of the latter can be witnessed in other domains such as the impact of online streaming services on the length of songs in the music industry.3 Lastly, the ecological costs of these LLMs are still heavily debated [Bender et al. 2021; Bommasani et al. 2021; Dodge et al. 2022; Patterson et al. 2022, 2021; Wu et al. 2022] but represent an important aspect in which these models should continue to be studied and scrutinised as appropriate in the near future and beyond. # 7 CONCLUDING REMARKS Evaluating information retrieval typically relies on relevance labels, and we have several options for collecting these. Figure 5 illustrates the options discussed in this paper, with the cost and accuracy we see at Bing. As experimenters, our goal is to move up and left, to greater accuracy and lower cost. Traditionally the goal has been to improve crowd labels, that is to move the bottom-left point higher up, and this has involved (i) collecting insight from real users (or | 2309.10621#45 | 2309.10621#47 | 2309.10621 | [
"2305.03495"
] |
2309.10621#47 | Large language models can accurately predict searcher preferences | 3 https://www.theverge.com/2019/5/28/18642978/music-streaming-spotify-song-length-distribution-production-switched-on-pop-vergecast-interview | 2309.10621#46 | 2309.10621#48 | 2309.10621 | [
"2305.03495"
] |
2309.10621#48 | Large language models can accurately predict searcher preferences | from experimenters themselves), (ii) turning these into guidelines, (iii) using trusted workers to read these guidelines and generate "silver" labels, and (iv) giving the same guidelines to crowd workers. The crowd workers are monitored against the silver labels, and improvements largely come from improving the guidelines. Our approach is different: we collect high-quality gold labels from searchers themselves (searchers in situ at Bing, topic developers in TREC) and use these labels to evaluate and select prompts for a large language model. The labels we get from our model are high quality, and in practice are more useful than those from even trained assessors. They are of course cheaper to acquire, and easier to collect for new languages or other new contexts; but they are also more accurate than third-party labels at predicting the preference of real searchers. This has had a tangible effect on our operations: retraining parts of our ranker using labels from this model, while keeping all else constant, resulted in about six months' relevance improvement in a single step. Of the options described by Faggioli et al. [2023], our labelling is closest to "human verification: LLMs are considered crowdworkers, ... controlled by a human", although we do not deliberately vary the LLM' | 2309.10621#47 | 2309.10621#49 | 2309.10621 | [
"2305.03495"
] |
2309.10621#49 | Large language models can accurately predict searcher preferences | s characteristics. We do retain human oversight and audit examples of LLM output, although we do not audit every label. Quality control, and indeed measuring LLM quality in general, is (as anticipated by Faggioli et al.) difficult as in most cases our LLM is "beyond human" quality and we can no longer rely on third-party assessors. Our gold collection, with queries and labels from real searches and real searchers, helps a great deal but of course searchers can still be swayed by distracting captions or unreliable results. (We review every query and URL in the corpus, but this only adds another human to the loop.) Contra Clarke et al., we do not see machine-made assessments degrading quality at all; nor do we consider them "very expensive", at least compared to trained annotators. In some ways, this is an easy case: the language model was trained on web text and we are labelling web text. The notion of judging web pages is likely already encoded, although we do not have clear evidence for this. Further, the topics can be addressed in the corpus: they do not need any personal, corporate, or otherwise restricted data, nor any particular domain-specific knowledge not already found in the text. Using LLMs for labelling suggests new and more difficult applications, for example labelling private corpora where we cannot give human assessors access. From the experiments above, we cannot verify this will be effective, and this remains for future work. We have also measured our labels in part with test sets (both TREC, and Bing's corpus) which have clear task descriptions. If we were to sample a query load from a running system, we would not have these descriptions and our labels would be less accurate. We also have a capable model: Liang et al. [2022] saw large differences from model to model over a range of tasks, although given our observations in Section 4 this could also be due to model:prompt interactions. As new models emerge, their performance will of course need to be tested. As our models improve, we are also faced with increasing difficulty measuring our labels as our measures start to saturate [Faggioli et al. 2023]. We have found it necessary to build "harder" gold sets over time, encoding finer distinctions to better distinguish labellers and prompts. There is no equivalent mechanism in TREC or other open data sets, and this may become pressing if and when LLM-based labelling becomes commonplace. It is certainly possible to use large language models to label documents for relevance and therefore to evaluate search systems; it is possible to get performance comparable to TREC judges and notably better than crowd judges. There are many choices that make a difference, meaning we need metrics-for-metrics to distinguish a good from a bad system, as well as ongoing audits and human verification. True "gold" | 2309.10621#48 | 2309.10621#50 | 2309.10621 | [
"2305.03495"
] |
2309.10621#50 | Large language models can accurately predict searcher preferences | judgements (e.g. from TREC assessors or our ground-truth set) make it possible to experiment with prompt and metric design. We have found the approach productive at Bing, and have used it for greater speed, reduced cost, and substantial improvements in our running system. # ACKNOWLEDGMENTS We thank David Soukal and Stifler Sun for their effort developing and testing many iterations of Bing's LLM labelling system. Ian Soboroff kindly provided TREC-Robust judging guidelines. Dave Hedengren, Andy Oakley, and colleagues at Bing provided useful comments on the manuscript. # REFERENCES Aashish Agarwal, Ankita Mandal, Matthias Schaffeld, Fangzheng Ji, Jhiao Zhan, Yiqi Sun, and Ahmet Aker. 2019. | 2309.10621#49 | 2309.10621#51 | 2309.10621 | [
"2305.03495"
] |
2309.10621#51 | Large language models can accurately predict searcher preferences | Good, neutral or bad news classification. In Proceedings of the Third International Workshop on Recent Trends in News Information Retrieval. 9–14. Meysam Alizadeh, Maël Kubli, Zeynab Samei, Shirin Dehghani, Juan Diego Bermeo, Maria Korobeynikovo, and Fabrizio Gilardi. 2023. Open-source large language models outperform crowd workers and approach ChatGPT in text-annotation tasks. arXiv:2307.02179 [cs.CL] Peter Bailey, Nick Craswell, Ian Soboroff, Paul Thomas, Arjen P. de Vries, and Emine Yilmaz. 2008. | 2309.10621#50 | 2309.10621#52 | 2309.10621 | [
"2305.03495"
] |