doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.13854 | 76 | set to 0. `hover [id]`: Hover over an element with id. `press [key_comb]`: Simulates the pressing of a key combination on the keyboard (e.g., Ctrl+v). `scroll [direction=down|up]`: Scroll the page up or down. Tab Management Actions: `new_tab`: Open a new, empty browser tab. `tab_focus [tab_index]`: Switch the browser's focus to a specific tab using its index. `close_tab`: Close the currently active tab. URL Navigation Actions: `goto [url]`: Navigate to a specific URL. `go_back`: Navigate to the previously viewed page. `go_forward`: Navigate to the next page (if a previous `go_back` action was performed). Completion Action: `stop [answer]`: Issue this action when you believe the task is complete. If the objective is to find a text-based answer, provide the answer in the bracket. If you believe the task is impossible to complete, provide the answer as "N/A" in the bracket. Homepage: If you want to visit other websites, check out the homepage at http://homepage.com. It has a list of websites you can | 2307.13854#76 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, show that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
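The WebArena row above documents the benchmark's plain-text action vocabulary (`click [id]`, `scroll [direction=down|up]`, `stop [answer]`, and so on), which an agent emits and a harness must parse back into structured commands. As a minimal sketch of such parsing (hypothetical helper names, not WebArena's actual implementation):

```python
import re

# Matches WebArena-style actions such as: click [1234], scroll [down],
# type [164] [restaurants near ABC] [1], stop [$279.49]
ACTION_RE = re.compile(r"^(?P<name>\w+)\s*(?P<args>(?:\[[^\]]*\]\s*)*)$")

def parse_action(action: str) -> tuple[str, list[str]]:
    """Split an action string into its name and bracketed arguments."""
    match = ACTION_RE.match(action.strip())
    if match is None:
        raise ValueError(f"Unparseable action: {action!r}")
    args = re.findall(r"\[([^\]]*)\]", match.group("args"))
    return match.group("name"), args

print(parse_action("type [164] [restaurants near ABC] [1]"))
# ('type', ['164', 'restaurants near ABC', '1'])
```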
2307.13528 | 77 | [KB-based QA] You are a query generator designed to help users verify a given claim using search engines. Your primary task is to generate a Python list of two effective and skeptical search engine queries. These queries should assist users in critically evaluating the factuality of a provided claim using search engines. You should only respond in format as described below (a Python list of queries). PLEASE STRICTLY FOLLOW THE FORMAT. DO NOT RETURN ANYTHING ELSE. START YOUR RESPONSE WITH '['. [response format]: ['query1', 'query2'] Here are 3 examples: [claim]: The CEO of twitter is Bill Gates. [response]: ["Who is the CEO of twitter?", "CEO Twitter"] [claim]: Michael Phelps is the most decorated Olympian of all time. [response]: [..., "...Phelps"] [claim]: ChatGPT is created by Google. [response]: ["...GPT?", "ChatGPT"] Now complete the following: [claim]: input [response]: | 2307.13528#77 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
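Because the FacTool query-generation prompt above insists the reply be nothing but a Python list literal beginning with '[', a caller can parse it defensively with `ast.literal_eval`, which accepts only plain literals. This is an illustrative sketch, not FacTool's published parsing code:

```python
import ast

def parse_query_list(response: str) -> list[str]:
    """Parse a reply that must be a Python list of search-query strings."""
    queries = ast.literal_eval(response.strip())
    if not (isinstance(queries, list) and all(isinstance(q, str) for q in queries)):
        raise ValueError(f"Expected a list of strings, got: {response!r}")
    return queries

print(parse_query_list('["Who is the CEO of twitter?", "CEO Twitter"]'))
# ['Who is the CEO of twitter?', 'CEO Twitter']
```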
2307.13692 | 77 | $\oint \vec{E} \cdot d\vec{A} = \frac{Q_{\text{enclosed}}}{\varepsilon_0}.$
We will choose a cylindrical Gaussian surface of radius r and length L, where a < r < b. The electric field is radial, so the electric flux through the curved surface of the cylinder is:
$\oint \vec{E} \cdot d\vec{A} = \oint E \, dA = E(2\pi r L).$
The charge enclosed by the Gaussian surface is the charge on the inner conductor wire, which is given by its linear charge density λ multiplied by the length of the Gaussian surface, L: $Q_{\text{enclosed}} = \lambda L$. Now we can use Gauss's law to find the electric field: $E = \frac{\lambda}{2\pi r \varepsilon_0}.$
However, we need to account for the dielectric material in between the conductors. The electric field in a dielectric material is related to the electric field in vacuum by the dielectric constant ε:
$E_{\text{dielectric}} = \frac{E}{\varepsilon}.$ So, the electric field in the dielectric material is:
$E_{\text{dielectric}} = \frac{\lambda}{2\pi r \varepsilon_0 \varepsilon}.$
Now we can find the potential difference between the inner conductor wire and the outer conducting shell by integrating the electric field along the radial direction:
$V = -\int_a^b E_{\text{dielectric}} \, dr = -\int_a^b \frac{\lambda}{2\pi \varepsilon_0 \varepsilon r} \, dr.$
Integrating, we get:
$V = -\frac{\lambda}{2\pi \varepsilon_0 \varepsilon} \ln\left(\frac{b}{a}\right).$ | 2307.13692#77 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
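The derivation in the ARB chunk above reaches V = (λ / 2πε₀ε) ln(b/a) in magnitude; combined with C = Q/V (completed in the next chunk of this row's paper) it gives the standard coaxial result C = 2πε₀εL / ln(b/a). A quick numerical sanity check of that closed form, with illustrative (assumed) cable dimensions and dielectric constant:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def coax_capacitance(a: float, b: float, eps_r: float, length: float) -> float:
    """C = 2*pi*eps0*eps_r*L / ln(b/a) for inner radius a and shield radius b."""
    return 2 * math.pi * EPS0 * eps_r * length / math.log(b / a)

# Assumed example values: a = 1 mm, b = 3 mm, polyethylene-like eps_r = 2.3, L = 1 m
print(coax_capacitance(1e-3, 3e-3, 2.3, 1.0))  # ~1.16e-10 F, about 116 pF per metre
```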
bracket. Homepage: If you want to visit other websites, check out the homepage at http://homepage.com. It has a list of websites you can visit. http://homepage.com/password.html lists all the account names and passwords for the websites. You can use them to log in to the websites. To be successful, it is very important to follow the following rules: 1. You should only issue an action that is valid given the current observation. 2. You should only issue one action at a time. 3. Generate the action in the correct format. Always put the action inside a pair of ```. For example, ```click [1234]``` 4. Issue the stop action when you think you have achieved the objective. Don't generate anything after stop.""" | 2307.13854#77 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, show that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
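Rule 3 of the system message above requires each action to sit inside a pair of ``` marks, so harness-side extraction reduces to a regular expression over the completion. A hedged sketch, not the benchmark's own parser:

```python
import re

def extract_action(completion: str) -> str | None:
    """Return the text inside the first triple-backtick pair of a completion."""
    match = re.search(r"```(.*?)```", completion, re.DOTALL)
    return match.group(1).strip() if match else None

print(extract_action("The price is visible, so I stop.\n```stop [$279.49]```"))
# stop [$279.49]
```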
2307.13528 | 78 | [Math Problems] You are given a math calculation and its corresponding calculated answer. [Task] Your task is to write an executable Python snippet that validates the accuracy of the math calculation against the calculated answer. The Python snippet should print 'True' if the calculated answer is correct, and 'False' otherwise. Your response MUST be a dictionary with key "python_snippet", which corresponds to the executable python snippet. [math calculation]: {math_calculation} [calculated answer]: {calculated_answer} You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH '{'. [Response format]: { "python_snippet": "An executable Python snippet that validates the accuracy of the math calculation against the calculated answer. The Python snippet should print 'True' if the calculated answer is correct, and 'False' otherwise." } | 2307.13528#78 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
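To use the math-validation prompt above, the caller parses the returned dictionary and executes its `python_snippet`, reading the printed True/False verdict. The sketch below assumes a sandbox; calling `exec` on model-generated code outside one is unsafe:

```python
import contextlib
import io
import json

def run_validation_snippet(response: str) -> bool:
    """Execute the returned snippet and check whether it printed 'True'."""
    snippet = json.loads(response)["python_snippet"]
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(snippet, {})  # sandbox this in any real deployment
    return buffer.getvalue().strip() == "True"

response = '{"python_snippet": "print(2 + 2 == 4)"}'
print(run_validation_snippet(response))  # True
```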
2307.13692 | 78 | Integrating, we get:
$V = -\frac{\lambda}{2\pi \varepsilon_0 \varepsilon} \ln\left(\frac{b}{a}\right).$
Finally, we can find the capacitance using the formula $C = \frac{Q}{V}$:
$C = \frac{\lambda L}{\frac{\lambda}{2\pi \varepsilon_0 \varepsilon} \ln\left(\frac{b}{a}\right)}.$ Simplifying, we get: ANSWER: $C = \frac{2\pi \varepsilon_0 \varepsilon L}{\ln\left(\frac{b}{a}\right)}.$
Table 13: Prompt used for multiple-choice MCAT and Law problems.
System: You are a top graduate student taking an open-ended qualifying exam. Your final answer should always be in the last line of your response, preceded by ANSWER:.
User: You are a top graduate student taking a qualifying exam. Below you will find a multiple choice question.
Question: {Problem_Statement}
Answer Choices: {Answer_Choices}
Now it is time to choose an answer. Think carefully and go step by step. Make sure to justify all your work. Your final answer should be one of A,B,C,D,... given at the end of your work and preceded by ANSWER:. For example, if you think the answer is B, the last line of your answer should be ANSWER: B
Solution:
Table 14: Prompt used for numerical problems. | 2307.13692#78 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
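Table 13 above fixes the convention that the chosen letter follows "ANSWER:" on the last line, so automatic scoring of multiple-choice completions reduces to pattern matching. A minimal sketch (hypothetical helper, not the ARB evaluation code):

```python
import re

def extract_choice(completion: str) -> str | None:
    """Return the last 'ANSWER: X' letter emitted by the model, if any."""
    matches = re.findall(r"ANSWER:\s*([A-Z])\b", completion)
    return matches[-1] if matches else None

print(extract_choice("The kinetics rule out A and C.\nANSWER: B"))  # B
```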
2307.13854 | 78 | Figure 9: The system message of the direct agent. This message has the general explanation of the task, the available actions and some notes on avoiding common failures.
example_user OBSERVATION: [1744] link 'HP CB782A#ABA 640 Inkjet Fax Machine (Renewed)' [1749] StaticText '$279.49' [1757] button 'Add to Cart' [1760] button 'Add to Wish List' [1761] button 'Add to Compare' URL: http://onestopmarket.com/office-products/office-electronics.html OBJECTIVE: What is the price of HP Inkjet Fax Machine PREVIOUS ACTION: None example_assistant ```stop [$279.49]``` example_user OBSERVATION: [164] textbox 'Search' focused: True required: False [171] button 'Go' [174] link 'Find directions between two points' [212] heading 'Search Results' [216] button 'Close' URL: http://openstreetmap.org OBJECTIVE: Show me the restaurants near ABC PREVIOUS ACTION: None example_assistant ```type [164] [restaurants near ABC] [1]``` | 2307.13854#78 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, show that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2307.13528 | 79 | [Code Potential Solution Generation] Please solve the given coding question. Make sure that the solution is optimized and correct. You MUST use Python to solve the coding question. Your response MUST be a dictionary with keys "reasoning" and "python_solution", which correspond to the reasoning and Python implementations of the function {entry_point}. The following is the given coding question - [coding question]: {input_question} You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH '{'. [response format]: { "reasoning": "Reasoning for solution.", "python_solution": "Python implementation of the function {entry_point}. Include only the implementation of the function itself. Ensure the output of the function aligns with its specified return type." } | 2307.13528#79 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 79 | Solution:
Table 14: Prompt used for numerical problems.
System: You are a top graduate student taking an open-ended qualifying exam. Your final answer should always be in the last line of your response, preceded by ANSWER:.
User: You are a top graduate student taking an open-ended qualifying exam. Below you will find a question requiring you to compute a numerical value.
Question: {Problem_Statement}
Now it is time to give your answer. Think carefully and go step by step. Make sure to justify all your work. Please simplify all expressions as much as possible and do not leave any variables in your final answer. Your final answer should NOT contain units and should be given at the end of your work and preceded by ANSWER: For example, if you think the answer is 2.4 meters, the last line of your answer should be ANSWER: 2.4.
Solution:
Table 15: Prompt used for symbolic problems.
System: You are a top graduate student taking an open-ended qualifying exam. Your final answer should always be in the last line of your response, preceded by ANSWER:.
User: You are a top graduate student taking an open-ended qualifying exam. Below you will find a question requiring you to give a symbolic answer.
Question: {Problem_Statement} | 2307.13692#79 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
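Table 14 above applies the same convention to numerical problems, with a unitless number after "ANSWER:". A sketch of extraction plus a relative-tolerance comparison; the 1% tolerance is an assumption, since the chunk does not state ARB's grading threshold:

```python
import re

def extract_number(completion: str) -> float | None:
    """Return the last 'ANSWER: <number>' value emitted by the model, if any."""
    matches = re.findall(r"ANSWER:\s*(-?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?)", completion)
    return float(matches[-1]) if matches else None

def close_enough(pred: float, ref: float, rel_tol: float = 0.01) -> bool:
    """Assumed 1% relative-tolerance check for numerical answers."""
    return abs(pred - ref) <= rel_tol * abs(ref)

print(close_enough(extract_number("Working...\nANSWER: 2.4"), 2.39))  # True
```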
2307.13854 | 79 | Figure 10: The two examples provided as example_user and example_assistant for the direct agent. The agent directly emits the next action given the observation.
Figure 11: Two examples where the GPT-4 agent failed, along with their screenshot and the accessibility tree of the relevant sections (grey). On the left, the agent fails to proceed to the "Users" section to accomplish the task of "Fork all Facebook repos"; on the right, the agent repeats entering the same search query even though the observation indicates the input box is filled.
| 2307.13854#79 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, show that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2307.13528 | 80 | [Code Unit test Generation] Please generate 3 distinct function calls for the given coding question to test the functionality of the function {entry_point} that attempts to solve the provided coding question. Your response must be a dictionary with 3 keys - "function_call_1", "function_call_2", "function_call_3", which correspond to the 3 distinct function calls for function {entry_point}. The following is the given coding question - [coding question]: {input_question} You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH '{'. [response format]: { "function_call_1": "First function call for function {entry_point}. Do not include anything else.", "function_call_2": "Second function call for function {entry_point}. Do not include anything else.", "function_call_3": "Third function call for function {entry_point}. Do not include anything else." }
Figure 7: Prompts for Query Generation | 2307.13528#80 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
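This unit-test prompt pairs with the solution-generation prompt in chunk #79 above: one call returns {"reasoning", "python_solution"} and a second returns three function calls that exercise the solution. Stitching the pair into one runnable script might look like the following sketch (illustrative only; FacTool's actual harness may differ):

```python
import json

def build_test_harness(solution_response: str, tests_response: str) -> str:
    """Concatenate the generated solution with prints of the three generated calls."""
    solution = json.loads(solution_response)["python_solution"]
    calls = [json.loads(tests_response)[f"function_call_{i}"] for i in (1, 2, 3)]
    return solution + "\n" + "\n".join(f"print({call})" for call in calls)

solution_response = json.dumps({
    "reasoning": "Add the two inputs.",
    "python_solution": "def add(a, b):\n    return a + b",
})
tests_response = json.dumps({
    "function_call_1": "add(1, 2)",
    "function_call_2": "add(-1, 1)",
    "function_call_3": "add(0, 0)",
})
print(build_test_harness(solution_response, tests_response))
```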
2307.13692 | 80 | You are a top graduate student taking an open-ended qualifying exam. Below you will find a question requiring you to give a symbolic answer.
Question: {Problem_Statement}
Now it is time to give your answer. Think carefully and go step by step. Make sure to justify all your work. Your final answer should NOT contain units and should be given at the end of your work and preceded by ANSWER: For example, if you think the answer is x − y, the last line of your answer should be ANSWER: x − y
Solution:
Table 16: Prompt used for proof-like problems.
System: You are a top graduate student taking an open-ended qualifying exam. Your final answer should always be in the last line of your response, preceded by ANSWER:.
User: You are a top graduate student taking an open-ended qualifying exam. Below you will find a question requiring you to prove the given statement.
Question: {Problem_Statement}
Now it is time to give your answer. Think carefully and go step by step. Make sure to justify all your work.
Solution:
Table 17: Prompt used for GPT-3.5-turbo symbolic evaluation.
System: You are a top professor grading an open-ended qualifying exam. | 2307.13692#80 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
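Table 17 in the row above asks a language model to grade symbolic answers, crediting matches up to variable renaming. A programmatic complement (not part of ARB, purely illustrative) is to test algebraic equivalence with a computer algebra system; note this sketch does not handle renamed variables:

```python
import sympy as sp

def symbolically_equal(reference: str, model_answer: str) -> bool:
    """Check whether two symbolic answers simplify to the same expression."""
    diff = sp.simplify(sp.sympify(reference) - sp.sympify(model_answer))
    return diff == 0

print(symbolically_equal("x - y", "-(y - x)"))  # True
print(symbolically_equal("x - y", "x + y"))     # False
```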
2307.13528 | 81 | [KB-based QA] You are given a piece of text. Your task is to identify whether there are any factual errors within the text. When you are judging the factuality of the given text, you could reference the provided evidences if needed. The provided evidences may be helpful. Some evidences may contradict each other. You must be careful when using the evidences to judge the factuality of the given text. The response should be a dictionary with four keys - "reasoning", "factuality", "error", and "correction", which correspond to the reasoning, whether the given text is factual or not (Boolean - True or False), the factual error present in the text, and the corrected text. The following is the given text [text]: claim The following is the provided evidences [evidences]: evidence You should only respond in format as described below. DO NOT RETURN ANYTHING ELSE. START YOUR RESPONSE WITH '{{'. [response format]: {{ "reasoning": "Why is the given text factual or non-factual? Be careful when you say something is non-factual. When you say something is non-factual, you must provide multiple evidences to support your decision.", "error": "None | 2307.13528#81 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
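The verification prompt above fixes a four-key response contract ("reasoning", "factuality", "error", "correction"), so a thin validation layer can reject malformed replies before they reach downstream code. A sketch with hypothetical helper names:

```python
import json

REQUIRED_KEYS = {"reasoning", "factuality", "error", "correction"}

def parse_verification(response: str) -> dict:
    """Parse and validate a claim-verification reply against the prompt's contract."""
    result = json.loads(response)
    missing = REQUIRED_KEYS - result.keys()
    if missing:
        raise ValueError(f"Missing keys: {missing}")
    if not isinstance(result["factuality"], bool):
        raise ValueError("'factuality' must be a Boolean")
    return result

reply = '{"reasoning": "Consistent with evidence.", "factuality": true, "error": "None", "correction": "None"}'
print(parse_verification(reply)["factuality"])  # True
```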
2307.13692 | 81 | Table 17: Prompt used for GPT-3.5-turbo symbolic evaluation.
System: You are a top professor grading an open-ended qualifying exam.
User: Problem Statement: Give a solution to the differential equation y'' = -y. Reference Answer: y(t) = cos(t) or y(t) = sin(t). Model Answer: y(x) = sin(x). Score: The correct answer is either y(t) = cos(t) or y(t) = sin(t). The model gave y(x) = sin(x). Since the function variable was not specified, the model matches one of the reference answers. GRADE: 1
Problem Statement: {Problem_Statement} Reference Answer: {Final_Answer} Model Answer: {model_final_answer}
Now it is time to grade the model answer. If the solution is incorrect give GRADE: 0. If the solution is nearly correct up to a constant give GRADE: 0.5. If the solution is correct give GRADE: 1. Before coming to a final grade think carefully and go step by step. DO NOT TRY TO SOLVE THE PROBLEM. If a variable name is not specified and the reference answer and the model answer are the same up to the name of a variable give a score of GRADE: 1. For example if the reference answer is | 2307.13692#81 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
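The grading prompt above uses the discrete scale GRADE: 0 / 0.5 / 1, so the grader's free-form reasoning reduces to a single extractable token. A sketch (not the paper's code):

```python
import re

def extract_grade(completion: str) -> float | None:
    """Return the last 'GRADE: x' value (0, 0.5, or 1) from a grading completion."""
    matches = re.findall(r"GRADE:\s*(0\.5|0|1)", completion)
    return float(matches[-1]) if matches else None

print(extract_grade("The answers match up to the variable name. GRADE: 1"))  # 1.0
```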
2307.13692 | 82 | f(x) = x^2
and the model answer is
f(y) = y^2
give a score of GRADE: 1.
Score: The reference answer
Table 18: Prompt used for GPT-4 rubric self-evaluation.
System: You are a top professor grading an open-ended qualifying exam.
User: Problem Statement: {Problem_Statement} Rubric: {rubric} Student Answer: {response}
Now it is time to grade the student answer. Make sure to check each point of the rubric step by step. And make sure to print the total number of earned points at the end of your grading. For example, if the student earned 8 points, print Rubric Score: 8 points
Rubric Evaluation:
Table 19: Prompt used for GPT-4 rubric design.
System: You are a top professor grading an open-ended qualifying exam.
User: Problem Statement: {Sample_Problem_Statement} Reference Solution: {Sample_Reference_Solution} Rubric: {Handcrafted_Rubric}
Problem Statement: { Problem_Statement } Reference Solution: { Solution }
Now it is time to write the rubric. Make sure to think carefully and go step by step, breaking down the problem into multiple parts. The total number of possible points should sum to 10.
Rubric:
| 2307.13692#82 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
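Tables 18 and 19 above standardize on 10-point rubrics and a closing "Rubric Score: N points" line, which again makes the score machine-readable. A sketch that also validates the score against the rubric's range:

```python
import re

def extract_rubric_score(evaluation: str, max_points: int = 10) -> int | None:
    """Return the 'Rubric Score: N points' value if it lies within the rubric's range."""
    match = re.search(r"Rubric Score:\s*(\d+)\s*points?", evaluation)
    if match is None:
        return None
    score = int(match.group(1))
    return score if 0 <= score <= max_points else None

print(extract_rubric_score("Correct setup, sign error at the end. Rubric Score: 8 points"))  # 8
```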
2307.13528 | 83 | [Scientific Literature Review] You are provided with two inputs, a string (string1) containing several names, and a list (list1) also containing names. Your task is to assess whether all the last names mentioned in string1 are included in list1. You should only respond in format as described below. DO NOT RETURN ANYTHING ELSE. START YOUR RESPONSE WITH '{{'. [response format]: {{ "reasoning": "Explanation on whether all the last names in string1 are found within list1", "factuality": This will be True if all last names from string1 are present in list1, and False otherwise. }} Example 1: [string1]: "J. Devlin and M. Chang" [list1]: ["Devlin", "M Chang", "Kristina Toutanova"] [response]: "reasoning": "string1 contains 2 last names 'Devlin' and 'Chang'. Both of these last names are present in list1.", "factuality": True Example 2: [string1]: "Tom Brown et. al" [list1]: ["Y. Lecun", "G. Hinton"] [response]: "reasoning": "string 1 | 2307.13528#83 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
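The scientific-literature-review check above is mechanical once author strings are tokenized, and the prompt's two worked examples double as test cases. A rough sketch with heuristic name splitting (the stopword handling for "et al." is an assumption):

```python
STOPWORDS = {"et", "al"}  # assumed guard for 'et. al' fragments

def extract_last_name(name: str) -> str | None:
    """Last token of a name, ignoring punctuation and 'et al.' fragments."""
    tokens = [t for t in name.replace(".", " ").split() if t.lower() not in STOPWORDS]
    return tokens[-1] if tokens else None

def last_names_in_list(string1: str, list1: list[str]) -> bool:
    """True iff every last name mentioned in string1 appears in list1."""
    names = [n for n in string1.replace(" and ", ",").split(",") if n.strip()]
    last_names = {extract_last_name(n) for n in names} - {None}
    listed = {extract_last_name(entry) for entry in list1} - {None}
    return last_names <= listed

print(last_names_in_list("J. Devlin and M. Chang", ["Devlin", "M Chang", "Kristina Toutanova"]))  # True
print(last_names_in_list("Tom Brown et. al", ["Y. Lecun", "G. Hinton"]))  # False
```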
2307.13528 | 86 | { "claim": "The modern-day version of fortune cookies was invented in the United States.", "label": true, "with_tool_classification": false, "with_tool_reasoning": "The given text is not entirely factual. The modern-day version of fortune cookies was not invented in the United States. Most people nowadays believe that fortune cookies were created by a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served tea and fortune cookies. This is supported by the provided evidences.", "queries": ["Where were fortune cookies invented?", "History of fortune cookies in the United States"], "evidences": ["Most people nowadays believe that fortune cookies were created by a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served tea and fortune cookies. Nov 19, 2019", "Most people nowadays believe that fortune cookies were created by a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served | 2307.13528#86 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 87 | a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served tea and fortune cookies. Nov 19, 2019"] } { "claim": "Sonia Sotomayor is a female US Supreme Court justice", "label": true, "with_tool_classification": false, "with_tool_reasoning": "The given text is partially factual. While Sonia Sotomayor is indeed a female US Supreme Court justice, the text fails to mention that she was also the first Latina to serve on the Supreme Court. This information is provided in both of the provided evidences.", "queries": ["Is Sonia Sotomayor a US Supreme Court justice?", "Sonia Sotomayor gender US Supreme Court"], "evidences": ["In 2009, Sonia Sotomayor broke boundaries by becoming the first Latina, and the third woman, to serve on the Supreme Court of the United States, the highest court in the country. Sotomayor was born in New York City to Puerto Rican parents.", "Supreme Court of the United States. Sonia Sotomayor – the fearless federal trial court | 2307.13528#87 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 90 | [Fact-checking recent events, discoveries, or news] { "claim": "Argentina has not won the World Cup since 1986", "label": false, "with_tool_classification": false, "with_tool_reasoning": "The given text states that Argentina has not won the World Cup since 1986. However, multiple pieces of evidence suggest that Argentina won the World Cup in 2022.", "queries": ["Argentina World Cup wins since 1986", "Last time Argentina won World Cup"], "evidences": ["Argentina is one of the most successful teams in the tournament's history, having won three World Cups: in 1978, 1986, 2022. Argentina has also been runner up three times: in 1930, 1990 and 2014. In 18 World Cup tournaments, Argentina has 47 victories in 88 matches.", "Argentina is one of the most successful teams in the tournament's history, having won three World Cups: in 1978, 1986, 2022. Argentina has also been runner up ...", "Previously, the last time Argentina won the World Cup was 1986, when it defeated Germany to win its second title in three tournaments.", "Argentina have won the FIFA World Cup three | 2307.13528#90 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 91 | Argentina won the World Cup was 1986, when it defeated Germany to win its second title in three tournaments.", "Argentina have won the FIFA World Cup three times – in 1978, 1986 and 2022. They have reached the final six times, including a second-place ...", "The Argentines have won two FIFA World Cups, in 1978 and 1986. They have placed second three times before, in 1930, 1990, and 2014.", "Argentina last won the FIFA World Cup in 1986, defeating West Germany in the championship round, 3-2. Argentina World Cup Wins. Argentina's ..."] } { "claim": "The most valuable NFT ever sold is a digital artwork called 'Everydays: The First 5000 Days' ", "label": false, "with_tool_classification": false, "with_tool_reasoning": "The given text states that the most valuable NFT ever sold is 'Everydays: The First 5000 Days'. According to the provided evidence, 'The First 5000 Days' is the second most expensive NFT ever sold, with 'Merge' being the most expensive one.", "queries": [..., "...Days"], "evidences": ["The Most Expensive | 2307.13528#91 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 92 | is the second most expensive NFT ever sold, with 'Merge' being the most expensive one.", "queries": [..., "...Days"], "evidences": ["The Most Expensive NFTs Ever Sold 1. Merge — US $91.8M. Pak — Merge. ... 2. The First 5000 Days — US$69.3M. Beeple — The First 5000 Days. ... 3. Clock — US$52.7M. Julian Assange & Pak — Clock. ... 4. HUMAN ONE — US$29M. Beeple — HUMAN ONE. ... 5. CryptoPunk #5822 — US$23.7M. CryptoPunk #5822.", "Everydays: the First 5000 Days: Work of art.", "Everydays: the First 5000 Days is a digital work of art created by Mike Winkelmann, known professionally as Beeple. The work is a collage of 5000 digital images created by Winkelmann for his Everydays series.", "Everydays: the First 5000 Days Artist: Mike Winkelmann.", "Everydays: the First 5000 Days Owner: Vignesh Sundaresan (MetaKovan).", "Everydays: the First 5000 Days Medium: Digital (JPEG)."] } | 2307.13528#92 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 93 | [Fact-checking high-precision statistics] { "claim": "Ireland has an obesity rate of 26.9%", "label": false, "with_tool_classification": false, "with_tool_reasoning": "The given text states that Ireland has an obesity rate of 26.9%, but the provided evidences show different numbers. The first evidence doesn't provide a clear percentage for obesity, but the second evidence states that the prevalence of obesity in Irish adults is currently 18%, with men at 20% and women at 16%. This contradicts the given text.", "queries": ["Ireland obesity rate statistics", "Current obesity rate in Ireland"], "evidences": ["Just under four in ten (37%) of people have a normal weight, six out of ten (37% overweight and a further 23% obese) overweight or obese. A smaller difference exists between men and women aged under 25 than those older than this.", "The prevalence of obesity in Irish adults is currently 18%, with men at 20% and women at 16%. A further 47% of men and 33% of women are overweight (BMI 25.0-29.9 kg/m2). Since 1990, obesity has more than doubled in men from 8% to | 2307.13528#93 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 94 | and 33% of women are overweight (BMI 25.0-29.9 kg/m2). Since 1990, obesity has more than doubled in men from 8% to 20%, and increased from 13% to 16% in women."] } { "claim": "“Everydays: The First 5000 Days” sold for $69 million", "label": false, "with_tool_classification": false, "with_tool_reasoning": "The given text is almost factual, but the exact amount for which the artwork was sold is slightly off. Multiple evidences mention the correct amount.", "queries": ["Everydays: The First 5000 Days sale price", "$69 million Everydays artwork"], "evidences": ["The work is a collage of 5000 digital images created by Winkelmann for his Everydays series. Its associated non-fungible token (NFT) was sold for $69.3 million at Christie's in 2021, making it the most expensive non-fungible token ever.", "“Now there is a way to collect digital art.” The piece that was sold, Everydays: The First 5000 Days, is a collage of Winkelmann's work starting ...", "That day, Winkelmann, a digital | 2307.13528#94 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 95 | piece that was sold, Everydays: The First 5000 Days, is a collage of Winkelmann's work starting ...", "That day, Winkelmann, a digital artist more commonly known by the handle Beeple, sold “Everydays: The First 5000 Days,” an NFT-linked ...", "“Everydays – The First 5000 Days,” by the artist known as Beeple, set a record for a digital artwork in a sale at Christie's.", "Artist Beeple, a North Fond du Lac native, sold a digital artwork for $69 million. Now you can see the same art in a book for a lot less.", "Beeple, aka artist Mike Winkelmann, appeared on “The ..."] } | 2307.13528#95 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
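The KB-QA rows above all follow the same claim-to-verdict loop: generate search queries from the claim, collect evidence snippets, then let the model reason over claim plus evidence. Below is a minimal sketch of that loop; llm() and web_search() are hypothetical placeholders (they are not the released FACTOOL code) standing in for the ChatGPT/GPT-4 and Google Search calls the paper describes:

import json

def llm(prompt: str) -> str:
    # Hypothetical LLM call; stands in for a ChatGPT/GPT-4 request.
    raise NotImplementedError

def web_search(query: str, k: int = 3) -> list[str]:
    # Hypothetical retrieval call; stands in for the Google Search API.
    raise NotImplementedError

def verify_kbqa_claim(claim: str) -> dict:
    # Step 1: turn the claim into search queries.
    queries = json.loads(llm(
        f'Generate two search queries (as a JSON list) to verify: "{claim}"'))
    # Step 2: collect evidence snippets for each query.
    evidences = [snippet for q in queries for snippet in web_search(q)]
    # Step 3: ask the model to judge the claim against the evidence.
    verdict = json.loads(llm(
        'Given the evidence, is the claim factual? Reply as JSON '
        '{"reasoning": ..., "factual": true/false}.\n'
        f'Claim: {claim}\nEvidence: {evidences}'))
    return {"claim": claim, "queries": queries, "evidences": evidences,
            "with_tool_reasoning": verdict["reasoning"],
            "with_tool_classification": verdict["factual"]}

The output dictionary mirrors the fields shown in the rows above, which is what makes the dataset entries directly comparable to the tool's raw responses.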
2307.13528 | 96 | Figure 10: Examples of KB-QA claims verified by FACTOOL powered by GPT-4. The queries generated, evidences collected, the reasoning process, and the final decision of FACTOOL powered by GPT-4 are reported.
# { "task_id": "HumanEval/2", "prompt":
d e f f l o a t ) â> f l o a t : t r u n c a t e _ n u m b e r ( number : " " " G i v e n a p o s i t i v e i t ( l a r g e s t and d e c i m a l s ( l e f t o v e r p a r t a l w a y s R e t u r n t h e d e c i m a l p a r t o f f l o a t i n g p o i n t number , can be decomposed i n t o and i n t e g e r p a r t i n t e g e r s m a l l e r t h a n g i v e n number ) s m a l l e r t h a n 1 ) . t h e number . " " "
"entry_point": "truncate_number", "completion": | 2307.13528#96 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 97 | def truncate_number(number: float) -> float:
    integer_part = int(number)
    decimal_part = number - integer_part
    return decimal_part
"result": "passed", "passed": true, "with_tool_classification": true, "testcases_input": ["truncate_number(4.56)", "truncate_number(0.123)", "truncate_number(19.999)"], "multi_solutions": [
def truncate_number(number: float) -> float:
    return number - int(number)
def truncate_number(number: float) -> float:
    return number - int(number) | 2307.13528#97 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 98 | def truncate_number(number: float) -> float:
    return number - int(number)
def truncate_number(number: float) -> float:
    integer_part = int(number)
    decimal_part = number - integer_part
    return decimal_part
]
"exec_result": [[0.5599999999999996, 0.5599999999999996, 0.5599999999999996, 0.5599999999999996], [0.123, 0.123, 0.123, 0.123], [0.9989999999999988, 0.9989999999999988, 0.9989999999999988, 0.9989999999999988]] } | 2307.13528#98 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
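The exec_result rows above are what make this verdict possible: every candidate solution returns the same value on every synthetic input, so the completion is judged consistent. A small sketch of that agreement check, reconstructed from the fields above (not the released implementation):

solutions = [
    "def truncate_number(number: float) -> float:\n    return number - int(number)",
    "def truncate_number(number: float) -> float:\n"
    "    integer_part = int(number)\n"
    "    decimal_part = number - integer_part\n"
    "    return decimal_part",
]
testcases = ["truncate_number(4.56)", "truncate_number(0.123)", "truncate_number(19.999)"]

for call in testcases:
    results = []
    for src in solutions:
        namespace: dict = {}
        exec(src, namespace)           # define this candidate's truncate_number
        results.append(eval(call, namespace))
    # All candidates agree on this input, mirroring the exec_result rows above.
    assert len(set(results)) == 1, (call, results)
    print(call, "->", results[0])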
2307.13528 | 99 | { "task_id": "HumanEval/57", "prompt":
def monotonic(l: list):
    """Return True if list elements are monotonically increasing or decreasing."""
"entry_point": "monotonic", "completion":
def monotonic(l: list):
    if len(l) <= 1:
        return True
    increasing = all(l[i] <= l[i + 1] for i in range(len(l) - 1))
    decreasing = all(l[i] >= l[i + 1] for i in range(len(l) - 1))
    return increasing or decreasing
"result": "passed", "passed": true, "with_tool_classification": true, "testcases_input": ["monotonic([1, 2, 3, 4, | 2307.13528#99 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 100 | "passed": true, "with_tool_classification": true, "testcases_input": ["monotonic([1, 2, 3, 4, 5])", "monotonic([5, 4, 3, 2, 1])", "monotonic([1, 3, 5, 4, 7])"], "multi_solutions": [
def monotonic(l: list):
    if len(l) < 2:
        return True
    increasing = decreasing = True
    for i in range(len(l) - 1):
        if l[i] < l[i + 1]:
            decreasing = False
        if l[i] > l[i + 1]:
            increasing = False
        if not increasing and not decreasing:
            return False
    return True
def monotonic(l: list): | 2307.13528#100 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 101 | turn False
    return True
def monotonic(l: list):
    if len(l) < 2:
        return True
    direction = l[1] - l[0]
    for i in range(2, len(l)):
        if direction == 0:
            direction = l[i] - l[i - 1]
            continue
        if (l[i] - l[i - 1]) * direction < 0:
            return False
    return True
def monotonic(l: list):
    if len(l) < 2:
        return True
    direction = l[1] - l[0]
    for i in range(2, len(l)):
        if direction == 0:
            direction = l[i] | 2307.13528#101 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 102 | r i in range(2, len(l)):
        if direction == 0:
            direction = l[i] - l[i - 1]
            continue
        if (l[i] - l[i - 1]) * direction < 0:
            return False
    return True
def monotonic(l: list):
    if len(l) <= 1:
        return True
    increasing = all(l[i] <= l[i + 1] for i in range(len(l) - 1))
    decreasing = all(l[i] >= l[i + 1] for i in range(len(l) - 1))
    return increasing or decreasing | 2307.13528#102 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 103 | ]
"exec_result": [[true, true, true, true], [true, true, true, true], [false, false, false, false]] }
Figure 11: Examples of code claims verified by FACTOOL powered by GPT-4. The queries generated, evidences collected, the reasoning process, and the final decision of FACTOOL powered by GPT-4 are reported. | 2307.13528#103 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
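Figure 11's verdicts come from running every candidate solution on every synthetic input and comparing the result matrix. A generic sketch of that majority-vote harness follows, under two assumptions: the candidate sources are trusted strings that are safe to exec, and the solution under test is the last candidate in the list (this is a reconstruction of the idea, not the released code):

from collections import Counter

def majority_vote_passes(candidates: list[str], entry_calls: list[str]) -> bool:
    """Return True if the solution under test (the last candidate) matches
    the majority answer of all candidates on every synthetic test input."""
    for call in entry_calls:
        outputs = []
        for src in candidates:
            env: dict = {}
            exec(src, env)                 # caution: only for trusted code
            outputs.append(repr(eval(call, env)))
        majority, _ = Counter(outputs).most_common(1)[0]
        if outputs[-1] != majority:
            return False                   # claimed solution disagrees
    return True

Comparing repr() strings sidesteps type-specific equality quirks; a production harness would also need timeouts and sandboxing around exec.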
2307.13528 | 104 | [ChatGPT] { "task_id": "HumanEval/36", "prompt":
def fizz_buzz(n: int):
    """Return the number of times the digit 7 appears in integers less than n
    which are divisible by 11 or 13."""
"entry_point": "fizz_buzz", "completion":
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if str(i).count('7') > 0:
                count += 1
    return count
"result": "failed: AssertionError: ", "passed": false, "testcases_input": ["fizz_buzz(50)", "fizz_buzz(100)", "fizz_buzz(200)"], | 2307.13528#104 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 105 | "testcases_input": ["fizz_buzz(50)", "fizz_buzz(100)", "fizz_buzz(200)"], "multi_solutions": [
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if '7' in str(i):
                count += 1
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if '7' in str(i):
                count += 1
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            count += str(i).count('7')
    ret | 2307.13528#105 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 108 | [GPT-4] { "task_id": "HumanEval/36", "prompt":
def fizz_buzz(n: int):
    """Return the number of times the digit 7 appears in integers less than n
    which are divisible by 11 or 13."""
"entry_point": "fizz_buzz", "completion":
def fizz_buzz(n: int):
    count = 0
    for i in range(n):
        if i % 11 == 0 or i % 13 == 0:
            if str(i).count('7') > 0:
                count += 1
    return count
"result": "failed: AssertionError: ", "passed": false, "with_tool_classification": false, "testcases_input": ["fizz_buzz(50)", "fizz_buzz(100)", | 2307.13528#108 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 109 | false, "testcases_input": ["fizz_buzz(50)", "fizz_buzz(100)", "fizz_buzz(150)"], "multi_solutions": [
def fizz_buzz(n: int):
    count = 0
    for i in range(1, n):
        if i % 11 == 0 or i % 13 == 0:
            count += str(i).count('7')
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(1, n):
        if i % 11 == 0 or i % 13 == 0:
            count += str(i).count('7')
    return count
def fizz_buzz(n: int):
    count = 0
    for i in range(1, n):
        if i % 11 == 0 or i % 13 == 0:
            count += str(i).cou | 2307.13528#109 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
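The fizz_buzz verdicts above hinge on a subtle difference: the failing completion counts numbers that contain a 7 (str(i).count('7') > 0 adds at most 1 per number), while the task asks for the number of times the digit 7 appears (count += str(i).count('7')). The two diverge as soon as a qualifying number contains more than one 7, e.g. 77 = 7 × 11. A quick check (my own illustration, not from the paper):

def fizz_buzz_presence(n: int) -> int:
    # Buggy: counts qualifying numbers that contain a 7 at least once.
    return sum(1 for i in range(n)
               if (i % 11 == 0 or i % 13 == 0) and '7' in str(i))

def fizz_buzz_occurrences(n: int) -> int:
    # Intended: counts every occurrence of the digit 7.
    return sum(str(i).count('7') for i in range(n)
               if i % 11 == 0 or i % 13 == 0)

print(fizz_buzz_presence(100))     # 2  (77 and 78 each counted once)
print(fizz_buzz_occurrences(100))  # 3  (77 contributes two sevens)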
2307.13528 | 112 | [ChatGPT] { "math_calculation": "23 * 4319216", "calculated_answer": "99305768", "label": false, "query": {"python_snippet": "print(23 * 4319216 == 99305768)"}, "exec_result": "False", "with_tool_classification": false } { "math_calculation": "20/100 x $10884297.00", "calculated_answer": "2176859.40", "label": true, "query": {"python_snippet": "print(abs((20/100) * 10884297.00 - 2176859.40) < 0.01)"}, "exec_result": "True", "with_tool_classification": true } [GPT-4] { "math_calculation": "23 * 4319216", "calculated_answer": "99305768", "label": false, "query": {"python_snippet": "calculation = 23 * 4319216; answer = 99305768; print(calculation == answer)"}, "exec_result": "False", "with_tool_classification": false } { "math_calculation": "20/100 x $10884297.00", "calculated_answer": "2176859.40", "label": | 2307.13528#112 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
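The math rows above show the pattern used for calculations: rewrite the extracted (calculation, claimed answer) pair as a small Python snippet and execute it, with a tolerance for rounded currency values. A minimal sketch under those assumptions (the cleaned expression string is supplied by the caller, e.g. "x" already replaced by "*"):

def verify_calculation(calculation: str, claimed_answer: str, tol: float = 0.01) -> bool:
    """Execute the calculation and compare it against the claimed answer."""
    expected = float(claimed_answer.replace(",", "").lstrip("$"))
    actual = eval(calculation, {"__builtins__": {}})   # trusted input only
    return abs(actual - expected) < tol

print(verify_calculation("23 * 4319216", "99305768"))           # False
print(verify_calculation("20/100 * 10884297.00", "2176859.40")) # True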
2307.13528 | 114 | Figure 13: Examples of math claims verified by FACTOOL powered by ChatGPT and GPT-4. The full response of FACTOOL powered by GPT-4 is reported, including the queries generated, evidences collected, the reasoning process, and the final decision.
[ChatGPT] { "paper_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "paper_author(s)": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova", "paper_pub_year": "2018", "label": true, "with_tool_classification": true, "error": [] } { "paper_title": "The Impact of Artificial Intelligence on Employment", "paper_author(s)": "Acemoglu and Restrepo", "paper_pub_year": "2019", "label": false, "with_tool_classification": false, "error": ["wrong_paper_author(s)", "wrong_paper_pub_year"] } | 2307.13528#114 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 115 | [GPT-4] { "paper_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "paper_author(s)": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova", "paper_pub_year": "2018", "label": true, "with_tool_classification": true, "error": [] } { "paper_title": "The Impact of Artificial Intelligence on Employment", "paper_author(s)": "Acemoglu and Restrepo", "paper_pub_year": "2019", "label": false, "with_tool_classification": false, "error": ["wrong_paper_author(s)", "wrong_paper_pub_year"] }
Figure 14: Examples of scientific claims verified by FACTOOL powered by ChatGPT and GPT-4. The full response of FACTOOL powered by ChatGPT and GPT-4 is reported, including the queries generated, evidences collected, the reasoning process, and the final decision. | 2307.13528#115 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
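The scholarly rows above reduce to a field-by-field comparison between the extracted citation tuple and a trusted metadata record, with the mismatched fields reported in the "error" list. A sketch of that comparison, assuming a hypothetical lookup_paper() that queries some scholarly search backend (the paper uses Google Scholar; the function here is only a placeholder):

def lookup_paper(title: str) -> dict:
    # Hypothetical metadata lookup, e.g. against Google Scholar or arXiv.
    raise NotImplementedError

def verify_citation(title: str, authors: str, year: str) -> list[str]:
    record = lookup_paper(title)
    errors = []
    if record["title"].lower() != title.lower():
        errors.append("wrong_paper_title")
    if not all(name.strip().lower() in record["authors"].lower()
               for name in authors.replace(" and ", ",").split(",") if name.strip()):
        errors.append("wrong_paper_author(s)")
    if record["year"] != year:
        errors.append("wrong_paper_pub_year")
    return errors   # an empty list means the citation checks out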
2307.13528 | 116 | [Reasoning Error] { "claim": "Jupiter is less dense than Saturn", "label": false, "with_tool_classification": true, "with_tool_reasoning": "The given text is factual because both provided evidences support the claim that Jupiter is less dense than Saturn. The first evidence provides numerical data on the densities of Jupiter and Saturn, and the second evidence confirms that Saturn is the least dense planet in the solar system.", "queries": ["Jupiter vs Saturn density", "Which planet is less dense Jupiter or Saturn?"], "evidences": ["The density of Jupiter is 1,33 g/cm³ whereas the density of Saturn is only 0,687 g/cm³. If Saturn were the same density as Jupiter, it would have a radius of 30.079 km.", "Jupiter is the largest planet in the solar system, but it's Saturn – the solar system's second largest planet – that takes the prize for least dense. Aug 1, 2019"] } | 2307.13528#116 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 117 | [Conflicting Evidences] { "claim": "Jupiter has a density of 1.33 grams per cubic centimeter", "label": true, "with_tool_classification": false, "with_tool_reasoning": "The given text is mostly in line with the provided evidences. There is a minor discrepancy between the two evidences, with one stating the density of Jupiter as 1.326 g/cm³ and the other as 1.33 g/cm³. However, the difference is minimal and could be due to rounding.", "queries": ["Jupiter density grams per cubic centimeter", "Density of Jupiter"], "evidences": ["Density of Jupiter: However, at 1.326 g/cm3, it is also the second-most dense of the gas giants. Feb 17, 2016", "1.33 g/cm³"] } | 2307.13528#117 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 118 | [Ambiguity in descriptions] { "claim": "Fortune cookies are enjoyed by people all over the world.", "label": false, "with_tool_classification": true, "with_tool_reasoning": "The given text states that fortune cookies are enjoyed by people all over the world. While the evidence provided does not explicitly mention the global enjoyment of fortune cookies, it does mention that 3 billion fortune cookies are made each year globally, with the majority consumed in the US. This suggests that fortune cookies are indeed enjoyed by people in various parts of the world.", "queries": ["Origin of fortune cookies", "Popularity of fortune cookies worldwide"], "evidences": ["Japan", "There are approximately 3 billion fortune cookies made each year globally, the majority of them consumed in the US. The largest manufacturer of the cookies is Wonton Food, Inc., headquartered in Brooklyn, New York. They make over 4.5 million fortune cookies per day."] }
Figure 15: Some error cases of FACTOOL on KB-QA. The full response of FACTOOL powered by GPT-4 is reported, including the queries generated, evidences collected, the reasoning process, and the final decision. | 2307.13528#118 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 119 | [Limited variety in synthetic testcases] { "task_id": "HumanEval/64", "prompt": "Write a function vowels_count(s) which takes a string representing a word as input and returns the number of vowels in the string. Vowels in this case are 'a', 'e', 'i', 'o', 'u'. Here, 'y' is also a vowel, but only when it is at the end of the given word.", "entry_point": "vowels_count", "completion": | 2307.13528#119 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 120 | def vowels_count(s):
    vowels = ['a', 'e', 'i', 'o', 'u']
    count = 0
    for i in range(len(s)):
        if s[i] in vowels:
            count += 1
        if i == len(s) - 1 and s[i] == 'y':
            count += 1
    return count
"result": "failed: AssertionError: Test 2", "passed": false, "with_tool_classification": true, "testcases_input": ["vowels_count('encyclopedia')", "vowels_count('rhythm')", "vowels_count('beautiful')"], "multi_solutions": [
def vowels_count(s):
    vowels = 'aeiou'
    count = 0
    for i, c in enumerate(s):
        if c.l | 2307.13528#120 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 121 | ower() in vowels:
            count += 1
        elif c.lower() == 'y' and i == len(s) - 1:
            count += 1
    return count
def vowels_count(s):
    vowels = 'aeiou'
    count = 0
    for char in s:
        if char.lower() in vowels:
            count += 1
    if s[-1].lower() == 'y':
        count += 1
    return count
def vowels_count(s):
    vowels = 'aeiou'
    count = 0
    for char in s:
        if char.lower | 2307.13528#121 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 122 | = 'aeiou'
    count = 0
    for char in s:
        if char.lower() in vowels:
            count += 1
    if s[-1].lower() == 'y':
        count += 1
    return count
def vowels_count(s):
    vowels = ['a', 'e', 'i', 'o', 'u']
    count = 0
    for i in range(len(s)):
        if s[i] in vowels:
            count += 1
        if i == len(s) - 1 and s[i] == 'y':
            count += 1
    return count
]
"exec_result": [[5, 5, 5, 5], [0, 0, 0, 0], [5, 5, 5, 5]] } | 2307.13528#122 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
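The exec_result matrix above is produced by executing each synthesized solution on the same generated test inputs and comparing rows; a row that disagrees with the others (here [0, 0, 0, 0]) flags a suspect solution. A minimal sketch of that cross-check, assuming the solutions and test calls arrive as source strings; the helper names are illustrative, not FacTool's actual API:

# Minimal sketch of the multi-solution cross-check behind "exec_result" above.
# Assumption: candidate solutions and test calls arrive as source strings, as in the log.
def run_solution(source: str, call: str):
    namespace = {}
    exec(source, namespace)      # define vowels_count in an isolated namespace
    return eval(call, namespace)

solutions = [
    "def vowels_count(s):\n    return sum(c in 'aeiou' for c in s.lower())",
    "def vowels_count(s):\n    n = sum(c in 'aeiou' for c in s.lower())\n    return n + (s[-1].lower() == 'y')",
]
testcases = ['vowels_count("abcde")', 'vowels_count("ACEDY")']

# Rows correspond to solutions, columns to test inputs; disagreeing rows flag a suspect solution.
exec_result = [[run_solution(src, call) for call in testcases] for src in solutions]
print(exec_result)   # [[2, 2], [2, 3]]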
2307.13528 | 123 | [Potential errors in code generation] { "task_id": "HumanEval/79", "prompt":
def decimal_to_binary(decimal):
    """You will be given a number in decimal form and your task is to convert it to binary format.
    The function should return a string, with each character representing a binary number.
    Each character in the string will be '0' or '1'.
    There will be an extra couple of characters 'db' at the beginning and at the end of the string.
    The extra characters are there to help with the format. | 2307.13528#123 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 124 | """,
"entry_point": "decimal_to_binary",
"completion":
def decimal_to_binary(decimal):
    binary_list = []
    while decimal > 0:
        remainder = decimal % 2
        binary_list.append(str(remainder))
        decimal = decimal // 2
    binary_list.reverse()
    binary_string = ''.join(binary_list)
    return | 2307.13528#124 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 125 | nary_string = ''.join(binary_list)
return 'db' + binary_string + 'db',
"result": "failed: AssertionError: ", "passed": false, "with_tool_classification": true, "testcases_input": ["decimal_to_binary(10)", "decimal_to_binary(255)", "decimal_to_binary(0)"], "multi_solutions": [
def decimal_to_binary(decimal):
    binary_num = ''
    while decimal > 0:
        remainder = decimal % 2
        decimal = decimal // 2
        binary_num = str(remainder) + binary_num
    return 'db' + binary_nu | 2307.13528#125 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 126 | mainder) + binary_num
    return 'db' + binary_num + 'db'

def decimal_to_binary(decimal):
    binary = ''
    while decimal > 0:
        remainder = decimal % 2
        decimal = decimal // 2
        binary = str(remainder) + binary
    return 'db' + binary + 'db'

def decimal_to_binary(decimal):
    binary = ''
    while decimal > 0:
        remainder = decimal % 2
        decimal = decimal // 2
        binary = str(remain | 2307.13528#126 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 127 | ecimal % 2
decimal = decimal // 2
binary = str(remainder) + binary
return 'db' + binary + 'db'

def decimal_to_binary(decimal):
    binary_list = []
    while decimal > 0:
        remainder = decimal % 2
        binary_list.append(str(remainder))
        decimal = decimal // 2
    binary_list.reverse()
    binary_string = ''.join(binary_list)
    return 'db' + binary_string + 'db' | 2307.13528#127 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
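The "result": "failed: AssertionError: " and "passed": false fields in the records above come from running the completion against the benchmark's reference unit tests, the ground truth against which the tool's own with_tool_classification verdict is scored. A minimal sketch of that check; the assertions below are stand-ins, not the actual HumanEval/79 test suite:

# Minimal sketch of the "result"/"passed" fields: run the completion against a unit test.
# The check() body is a stand-in assertion, not the actual HumanEval/79 test suite.
completion = (
    "def decimal_to_binary(decimal):\n"
    "    binary_list = []\n"
    "    while decimal > 0:\n"
    "        binary_list.append(str(decimal % 2))\n"
    "        decimal = decimal // 2\n"
    "    binary_list.reverse()\n"
    "    return 'db' + ''.join(binary_list) + 'db'\n"
)

def check(candidate):
    assert candidate(15) == 'db1111db'
    assert candidate(32) == 'db100000db'
    assert candidate(0) == 'db0db'   # edge case the loop-based completion gets wrong

namespace = {}
exec(completion, namespace)
try:
    check(namespace['decimal_to_binary'])
    result, passed = 'passed', True
except AssertionError as e:
    result, passed = f'failed: AssertionError: {e}', False
print(result, passed)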
2307.13528 | 129 | [Round-off error] { "math_calculation": "60444034 / 12", "calculated_answer": "5037002.83", "label": true, "query": "python_snippet": "calculation = 60444034 / 12; answer = 5037002.83; print(abs(calculation - answer) < 1e-9)", "exec_result": "False", "with_tool_classification": false } [Reasoning error] { "input": "Kylar went to the store to buy glasses for his new apartment. One glass costs $5, but every second glass costs only 60% of the price. Kylar wants to buy 5364765 glasses. How much does he need to pay for them?", "target": 21459061.0, "response": "Let's assume that Kylar buys x glasses at the full price of $5 each. Therefore, he will buy (5364765 - x) glasses at a discounted price of 60% of $5, which is $3. Now, we can set up the equation: 5x + 3(5364765 - x) = total cost Simplifying the equation, we get: 5x + 16094295 - 3x = total cost 2x + 16094295 | 2307.13528#129 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
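The [Round-off error] case above is a pitfall of the verification snippet rather than of the claim: 60444034 / 12 is 5037002.8333..., so an answer rounded to two decimals fails both an exact comparison and a 1e-9 tolerance. A short sketch of comparing at the answer's own precision (the rounding rule here is an assumption, not FacTool's implementation):

# The 1e-9 tolerance in the generated snippet is far tighter than the 2-decimal answer.
calculation = 60444034 / 12      # 5037002.833333...
answer = 5037002.83              # rounded to 2 decimal places

print(abs(calculation - answer) < 1e-9)   # False: flags a correct rounding as an error
print(round(calculation, 2) == answer)    # True: compare at the answer's own precision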
2307.13528 | 130 | - x) = total cost Simplifying the equation, we get: 5x + 16094295 - 3x = total cost 2x + 16094295 = total cost Therefore, the total cost of all the glasses will be 2x + 16094295 dollars. Now, we just need to find the value of x. We know that Kylar wants to buy 5364765 glasses, so we can substitute this value in the equation: 2x + 16094295 = total cost 2x + 16094295 = 5,364,765 * 5 2x + 16094295 = 26823825 2x = 10729530 x = 5364765 Therefore, Kylar needs to pay 2x + 16094295 dollars, which is: 2(5364765) + 16094295 = 26823825. Hence, Kylar needs to pay $26823825 for all the glasses.", "answer": "26823825", "annotation": false, "claims": [{"math_calculation": "5364765 - x", "calculated_answer": "5364765 - x", "label": true}, {"math_calculation": "0.60 * 5", "calculated_answer": "3", "label": true}, {"math_calculation": "5x", | 2307.13528#130 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 131 | true}, {"math_calculation": "0.60 * 5", "calculated_answer": "3", "label": true}, {"math_calculation": "5x", "calculated_answer": "5x", "label": true}, {"math_calculation": "3(5364765 - x)", "calculated_answer": "16094295 - 3x", "label": true}, {"math_calculation": "5x + 16094295 - 3x", "calculated_answer": "16094295 + 2x", "label": true}, {"math_calculation": "2x + 16094295", "calculated_answer": "total cost", "label": "null"}, {"math_calculation": "5364765 * 5", "calculated_answer": "26823825", "label": true}, {"math_calculation": "2(5364765)", "calculated_answer": "10729530", "label": true}, {"math_calculation": "2x", "calculated_answer": "10729530", "label": "null"}] "math_calculation": "5364765 - x", "calculated_answer": "5364765 - x", "label": true, "query": "python_snippet": "x = 0; math_calculation = 5364765 - | 2307.13528#131 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 132 | "5364765 - x", "label": true, "query": "python_snippet": "x = 0; math_calculation = 5364765 - x; calculated_answer = 5364765 - x; print(math_calculation == calculated_answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": "0.60 * 5", "calculated_answer": "3", "query": "python_snippet": "calculation = 0.60 * 5; answer = 3; print(calculation == answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": "5x", "calculated_answer": "5x", "label": true, "query": "python_snippet": "def validate_calculation(calculation, answer): return calculation == answer; print(validate_calculation('5x', '5x'))", "exec_result": "True", "with_tool_classification": true "math_calculation": "3(5364765 - x)", "calculated_answer": "16094295 - 3x", "label": true, "query": "python_snippet": "x = 1; result1 = 3 * | 2307.13528#132 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 133 | "calculated_answer": "16094295 - 3x", "label": true, "query": "python_snippet": "x = 1; result1 = 3 * (5364765 - x); result2 = 16094295 - 3 * x; print(result1 == result2)", "exec_result": "True", "with_tool_classification": true "math_calculation": "5x + 16094295 - 3x", "calculated_answer": "16094295 + 2x", "label": true, "query": "python_snippet": "x = 1; print((5 * x + 16094295 - 3 * x) == (16094295 + 2 * x))", "exec_result": "True", "with_tool_classification": true "math_calculation": "2x + 16094295", "calculated_answer": "total cost", "label": "null", "query": "python_snippet": "x = 5; math_calculation = 2 * x + 16094295; calculated_answer = 16094305; print(math_calculation == calculated_answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": | 2307.13528#133 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13528 | 134 | print(math_calculation == calculated_answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": "5364765 * 5", "calculated_answer": "26823825", "label": true, "query": "python_snippet": "calculation = 5364765 * 5; answer = 26823825; print(calculation == answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": "2(5364765)", "calculated_answer": "10729530", "label": true, "query": "python_snippet": "calculation = 2 * 5364765; answer = 10729530; print(calculation == answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": "2x", "calculated_answer": "10729530", "label": "null", "query": "python_snippet": "x = 5364765; print(2 * x == 10729530)", "exec_result": "True", "with_tool_classification": true } | 2307.13528#134 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
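Several snippets in the records above validate symbolic claims such as 5x + 16094295 - 3x = 16094295 + 2x by evaluating both sides at a single point (x = 1), which can accept expressions that merely coincide there. A sketch of spot-checking at several points instead (an illustration, not the framework's implementation):

# A single sample point (x = 1) can validate a false identity by coincidence;
# checking several points makes the spot-check much stronger.
def agree_everywhere(lhs, rhs, points=(-7, 0, 1, 3, 1000)):
    return all(lhs(x) == rhs(x) for x in points)

# True identity from the log: 5x + 16094295 - 3x == 16094295 + 2x
print(agree_everywhere(lambda x: 5 * x + 16094295 - 3 * x,
                       lambda x: 16094295 + 2 * x))        # True

# Non-identity that happens to hold at x = 1: x**2 == x
print((lambda x: x ** 2)(1) == (lambda x: x)(1))           # True at x = 1 only
print(agree_everywhere(lambda x: x ** 2, lambda x: x))     # False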
2307.13528 | 135 | Figure 17: Some error cases of FACTOOL on math. The full response of FACTOOL powered by GPT-4 is reported, including the queries generated, the evidence collected, the reasoning process, and the final decision.
[Errors in title matching] { "paper_title": "MDMA-assisted psychotherapy for treatment of PTSD: study design and rationale for phase 3 trials based on pooled analysis of six phase 2 randomized controlled trials", "paper_author(s)": "Mithoefer et al.", "paper_pub_year": "2019", "label": true, "with_tool_classification": false, "error": ["wrong_paper_title"] } [Errors in author matching] { "paper_title": "Language Models are Unsupervised Multitask Learners", "paper_author(s)": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever", "paper_pub_year": "2019", "label": true, "with_tool_classification": false, "error": ["wrong_paper_author(s)"] } | 2307.13528#135 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
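Both error cases above are exact-match failures: the cited paper exists, but the retrieved title or author string differs superficially from the generated one. A sketch of a normalized similarity check that tolerates such formatting differences (difflib is standard library; the 0.9 threshold is an assumption):

# Exact string comparison flags "wrong_paper_title" even for trivial formatting differences.
# Normalizing and using a similarity ratio tolerates them; the 0.9 cutoff is an assumption.
from difflib import SequenceMatcher

def same_title(generated: str, retrieved: str, threshold: float = 0.9) -> bool:
    norm = lambda t: ' '.join(t.lower().split())
    return SequenceMatcher(None, norm(generated), norm(retrieved)).ratio() >= threshold

print(same_title(
    "Language Models are Unsupervised Multitask Learners",
    "Language models are unsupervised multitask learners."))   # True despite case/punctuation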
2307.12950 | 0 | arXiv:2307.12950v2 [cs.CL] 18 Aug 2023
# RLCD: REINFORCEMENT LEARNING FROM CONTRAST DISTILLATION FOR LANGUAGE MODEL ALIGNMENT
Kevin Yang (1,2), Dan Klein (2), Asli Celikyilmaz (1), Nanyun Peng (3), Yuandong Tian (1); 1: Meta AI, 2: UC Berkeley, 3: UCLA. {yangk,klein}@berkeley.edu, {aslic,yuandong}@meta.com, [email protected]
# ABSTRACT
We propose Reinforcement Learning from Contrast Distillation (RLCD), a method for aligning language models to follow natural language principles without using human feedback. RLCD trains a preference model using simulated preference pairs that contain both a high-quality and low-quality example, generated using contrasting positive and negative prompts. The preference model is then used to improve a base unaligned language model via reinforcement learning. Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context distillation (Huang et al., 2022) baselines across three diverse alignment tasks--harmlessness, helpfulness, and story outline generation--and on both 7B and 30B model scales for preference data simulation.
# INTRODUCTION | 2307.12950#0 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
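A minimal sketch of the preference-data simulation step the abstract describes: one base prompt is expanded into a contrasting positive/negative pair, one output is sampled from each, and the pair is labeled by construction. The generate function is a placeholder for sampling from an unaligned base LLM, not RLCD's released code:

# Sketch of RLCD-style preference pair simulation, as described in the abstract.
# generate() stands in for sampling from an unaligned base LLM; it is a placeholder.
def generate(prompt: str) -> str:
    raise NotImplementedError("sample a continuation from the base LLM here")

def simulate_preference_pair(base_prompt: str):
    p_pos = "(helpful, harmless response) " + base_prompt   # positive principle prefix
    p_neg = "(unhelpful, harmful response) " + base_prompt  # negative principle prefix
    o_pos, o_neg = generate(p_pos), generate(p_neg)
    # Labeled by construction: o_pos preferred over o_neg; the prefixes are stripped,
    # so the preference model sees only (base_prompt, chosen, rejected).
    return {"prompt": base_prompt, "chosen": o_pos, "rejected": o_neg}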
2307.12573 | 1 | Recent advancements in natural language and Large Language Models (LLMs) have enabled AI agents to simulate human-like interactions within virtual worlds. However, these interactions still face limitations in complexity and flexibility, particularly in scenarios involving multiple characters and novel objects. Pre-defining all interactable objects in the agent's world model presents challenges, and conveying implicit intentions to multiple characters through complex interactions remains difficult. To address these issues, we propose integrating virtual Game Masters (GMs) into the agent's world model, drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a crucial role in overseeing information, estimating players' intentions, providing environment descriptions, and offering feedback, compensating for current world model deficiencies. To facilitate future explorations for complex interactions, we introduce a benchmark named Tachikuma, comprising a Multiple character and novel Object based interaction Estimation (MOE) task and a supporting dataset. MOE challenges models to understand characters' intentions and accurately determine their actions within intricate contexts | 2307.12573#1 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 1 | # ABSTRACT
Pre-trained large language models (LLMs) have recently achieved better generalization and sample efficiency in autonomous web automation. However, the performance on real-world websites has still suffered from (1) open domainness, (2) limited context length, and (3) lack of inductive bias on HTML. We introduce WebAgent, an LLM-driven agent that learns from self-experience to complete tasks on real websites following natural language instructions. WebAgent plans ahead by decomposing instructions into canonical sub-instructions, summarizes long HTML documents into task-relevant snippets, and acts on websites via Python programs generated from those. We design WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new pre-trained LLMs for long HTML documents using local and global attention mechanisms and a mixture of long-span denoising objectives, for planning and summarization. We empirically demonstrate that our modular recipe improves the success on real websites by over 50%, and that HTML-T5 is the best model to solve various HTML understanding tasks; achieving 18.7% higher success rate than the prior method on MiniWoB web automation benchmark, and SoTA performance on Mind2Web, an offline task planning evaluation.
# 1 INTRODUCTION | 2307.12856#1 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
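A minimal sketch of the modular loop the abstract describes: plan the next sub-instruction, condense the page's long HTML into task-relevant snippets, then synthesize an executable program. The three functions are placeholders for HTML-T5 and Flan-U-PaLM calls, not the released WebAgent interface:

# Sketch of the WebAgent-style plan -> summarize -> program-synthesis loop.
# plan/summarize/synthesize stand in for HTML-T5 and Flan-U-PaLM calls; they are placeholders.
def plan(instruction: str, history: list) -> str:
    raise NotImplementedError("HTML-T5: decompose the instruction into the next sub-instruction")

def summarize(sub_instruction: str, raw_html: str) -> str:
    raise NotImplementedError("HTML-T5: extract task-relevant HTML snippets")

def synthesize(sub_instruction: str, snippets: str) -> str:
    raise NotImplementedError("Flan-U-PaLM: emit an executable Python (e.g. Selenium) program")

def web_agent_step(instruction: str, raw_html: str, history: list) -> str:
    sub = plan(instruction, history)       # 1. next canonical sub-instruction
    snippets = summarize(sub, raw_html)    # 2. long HTML -> short relevant snippets
    program = synthesize(sub, snippets)    # 3. grounded program synthesis
    history.append(sub)
    return program                         # executed against the live page by the caller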
2307.12950 | 1 | # INTRODUCTION
Reinforcement Learning from Human Feedback (RLHF) has recently been used to great effect to align pretrained large language models (LLMs) to human preferences, optimizing for desirable qualities like harmlessness and helpfulness (Bai et al., 2022a) and achieving state-of-the-art results across a variety of natural language tasks (OpenAI, 2023).
A standard RLHF procedure fine-tunes an initial unaligned LLM using an RL algorithm such as PPO (Schulman et al., 2017), optimizing the LLM to align with human preferences. RLHF is thus critically dependent on a reward model derived from human-labeled preferences, typically pairwise preferences on LLM outputs (o1, o2) generated from a shared prompt p.
However, collecting human pairwise preference data, especially high-quality data, may be expensive and time consuming at scale. To address this problem, approaches have been proposed to obtain labels without human annotation, such as Reinforcement Learning from AI Feedback (RLAIF) and context distillation. | 2307.12950#1 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
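The reward model this passage depends on is conventionally trained with a Bradley-Terry-style pairwise objective: maximize the log-probability that the chosen output scores above the rejected one. A standard-formulation sketch in PyTorch, not code from any of the papers above:

# Standard pairwise (Bradley-Terry) reward-model loss used in RLHF-style training:
# loss = -log sigmoid(r_chosen - r_rejected), averaged over preference pairs.
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scores for three preference pairs (a reward head would produce these).
r_chosen = torch.tensor([1.2, 0.3, 2.0])
r_rejected = torch.tensor([0.7, 0.9, -1.0])
print(preference_loss(r_chosen, r_rejected))   # small when chosen outscores rejected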
2307.12966 | 1 | Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks. Despite their notable performance, these models are prone to certain limitations such as misunderstanding human instructions, generating potentially biased content, or factually incorrect (hallucinated) information. Hence, aligning LLMs with human expectations has become an active area of interest within the research community. This survey presents a comprehensive overview of these alignment technologies, including the following aspects. (1) Data collection: the methods for effectively collecting high-quality instructions for LLM alignment, including the use of NLP benchmarks, human annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed review of the prevailing training methods employed for LLM alignment. Our exploration encompasses Supervised Fine-tuning, both Online and Offline human preference training, along with parameter-efficient training mechanisms. (3) Model Evaluation: the methods for evaluating the effectiveness of these human-aligned LLMs, presenting a | 2307.12966#1 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 2 | task and a supporting dataset. MOE challenges models to understand characters' intentions and accurately determine their actions within intricate contexts involving multi-character and novel object interactions. Besides, the dataset captures log data from real-time communications during gameplay, providing diverse, grounded, and complex interactions for further explorations. Finally, we present a simple prompting baseline and evaluate its performance, demonstrating its effectiveness in enhancing interaction understanding. We hope that our dataset and task will inspire further research in complex interactions with natural language, fostering the development of more advanced AI agents. | 2307.12573#2 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 2 | # 1 INTRODUCTION
Large language models (LLM) (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023) can solve a variety of natural language tasks, such as arithmetic, commonsense, logical reasoning, question answering, text generation (Brown et al., 2020; Kojima et al., 2022; Wei et al., 2022), and even interactive decision making tasks (Ahn et al., 2022; Yao et al., 2022b). Recently, LLMs have also demonstrated success in autonomous web navigation, where the agents control computers or browse the internet to satisfy the given natural language instructions through a sequence of computer actions, by leveraging the capability of HTML comprehension and multi-step reasoning (Furuta et al., 2023; Gur et al., 2022; Kim et al., 2023). | 2307.12856#2 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 2 | RLAIF approaches (e.g., Bai et al. (2022b)) simulate human pairwise preferences by scoring o1 and o2 with an LLM (Figure 1 center); the scoring LLM is often the same as the one used to generate the original pairs (o1, o2). Of course, the resulting LLM pairwise preferences will be somewhat noisier compared to human labels. However, this problem is exacerbated by using the same prompt p to generate both o1 and o2, causing o1 and o2 to often be of very similar quality and thus hard to differentiate (e.g., Table 1). Consequently, training signal can be overwhelmed by label noise, yielding lower-quality preference data.
Meanwhile, context distillation methods (e.g., Sun et al. (2023)) create more training signal by modifying the initial prompt p. The modified prompt p+ typically contains additional context encouraging a directional attribute change in the output o+ (Figure 1 right). However, context distillation methods only generate a single output o+ per prompt p+, which is then used for supervised fine-tuning, losing the pairwise preferences which help RLHF-style approaches to derive signal from the contrast between outputs. Multiple works have observed that RL approaches using preference models for pairwise preferences can substantially improve over supervised fine-tuning by itself when aligning LLMs (Ouyang et al., 2022; Dubois et al., 2023). | 2307.12950#2 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
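The label-noise argument above can be made concrete with a toy model: if the scorer prefers the genuinely better output with probability sigmoid(quality gap), same-prompt pairs with a near-zero gap are labeled almost at random, while contrastingly-prompted pairs keep a wide gap. The logistic noise model here is an assumption for illustration:

# Toy illustration of the argument above: label accuracy of a noisy scorer
# as a function of the true quality gap between the two outputs.
import math

def label_accuracy(quality_gap: float) -> float:
    # Assumed noise model: P(scorer prefers the better output) = sigmoid(gap).
    return 1 / (1 + math.exp(-quality_gap))

print(label_accuracy(0.1))   # same-prompt pair, tiny gap  -> ~0.52, near coin-flip labels
print(label_accuracy(2.0))   # contrasting-prompt pair     -> ~0.88, mostly correct labels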
2307.12966 | 2 | training mechanisms. (3) Model Evaluation: the methods for evaluating the effectiveness of these human-aligned LLMs, presenting a multifaceted approach towards their assessment. In conclusion, we collate and distill our findings, shedding light on several promising future research avenues in the field. This survey, therefore, serves as a valuable resource for anyone invested in understanding and advancing the alignment of LLMs to better suit human-oriented tasks and expectations. An associated GitHub link collecting the latest papers is available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | 2307.12966#2 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 3 | # Introduction
... the teaching of language is not explaining, but training.
-- Ludwig Josef Johann Wittgenstein, Philosophical Investigations
In recent years, there has been a growing interest in constructing AI agents capable of simulating and supporting human-like interactions across various domains. Notably, some agents have exhibited exceptional performance, surpassing human abilities in games such as MOBAs, StarCraft, poker, and Go. Building on the advancements in Large Language Models (LLMs), researchers have extended agent interactions to incorporate natural language. For instance, Park et al. [17] have introduced generative agents that engage in free-form interactions using natural language, thereby creating virtual worlds where agents reside and even demonstrate spontaneous activities such as hosting parties. Similarly, Liu et al. [12] have developed simulated societies in which LLM-powered agents engage in the virtual world and can sustain discussions of social problems. These recent developments hold promise
| 2307.12573#3 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 3 | However, web automation on real-world websites has still suffered from (1) the lack of a pre-defined action space, (2) much longer HTML observations than in simulators, and (3) the absence of HTML domain knowledge in LLMs (Figure 1). Considering the open-ended nature of real-world websites and the complexity of instructions, defining an appropriate action space in advance is challenging. In addition, although several works have argued that recent LLMs with instruction-finetuning or reinforcement learning from human feedback improve HTML understanding and web automation accuracy (Furuta et al., 2023; Kim et al., 2023), their architectures are not always suitable for processing real-world HTML documents; as presented in Figure 2, HTML tokens of real websites are much longer than those of simulators, and most LLMs have shorter context lengths than the average number of HTML tokens on real websites. It is prohibitively costly to treat such long documents as inputs directly, and even to adopt prior techniques for structured documents, such as text-XPath alignment (Li et al., 2021b)
Equal Contribution. † Work done as Student Researcher at Google.
[Figure 1: simulated websites offer pre-defined actions and simplified HTML, while real websites involve open-ended actions and raw HTML given to the language model with a human instruction.] | 2307.12856#3 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 3 | Therefore, while both RLAIF and context distillation approaches have already been successfully applied in practice to align language models, we posit that it may be even more effective to combine
[Figure 1: example harmlessness prompts, completions, preference labels, and training procedures for context distillation, RLAIF, and RLCD.] | 2307.12950#3 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 3 | # Introduction
facilitating the generation of coherent and fluent text in response to various inputs. Despite these strengths, foundational LLMs are not always adept at interpreting a wide range of instructions and can produce outputs that deviate from human expectations. Additionally, these models may produce biased content or invent (hallucinated) facts, which can limit their practical usefulness. | 2307.12966#3 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 4 | for advancing AI agents by leveraging natural language as an interactive tool, enabling them to exhibit more human-like behaviors. Furthermore, the exploration of phenomena resulting from endowing agents with more powerful language abilities for interaction can offer valuable insights. As discussed in Philosophical Investigations, Ludwig Josef Johann Wittgenstein emphasized that the teaching of language is a form of training rather than mere explanation. General human communication is similar to engaging in a language game. Language serves as a fundamental tool for human interaction with the environment, facilitating the transmission of information, communication, negotiation, and cooperation within human groups, and contributing to the overall functioning of society. While the relationship between language and intelligence remains an open question, it is always worth exploring the potential evolution of more powerful and autonomous agents that can interact using natural language. | 2307.12573#4 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 4 | Figure 1: Challenges in real-world web automation. Recent language model agents (Furuta et al., 2023; Gur et al., 2022; Kim et al., 2023; Yao et al., 2022b) can navigate simulated websites (Shi et al., 2017; Yao et al., 2022a), where the agents manipulate pre-defined actions and receive simplified HTML documents that are easy to parse. In contrast, language model agents have still struggled to navigate real websites, where the agents must deal with open-ended actions and long HTML documents that include many task-irrelevant parts. Such a sim-to-real gap in web automation has been underexplored so far.
or text-HTML token separation (Wang et al., 2022a). Because recent LLMs prioritize broad task generalization and model-size scaling, such HTML-specific domain knowledge is not incorporated into them. | 2307.12856#4 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
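As a concrete illustration of the context-length problem described in the chunk above, the sketch below checks the length of raw HTML against an assumed context budget before deciding whether summarization is needed. The whitespace "tokenizer" and the 2,048-token limit are illustrative stand-ins, not the actual tokenizer or context window of any model discussed here.

```python
# Sketch of the context-length problem: raw HTML from real websites usually
# exceeds an LLM's context window, so it must be condensed before being fed
# to the model. The whitespace split is a crude stand-in for a real
# subword tokenizer, and CONTEXT_LIMIT is an assumed budget.
CONTEXT_LIMIT = 2048  # assumed context window, in tokens

def count_tokens(text: str) -> int:
    return len(text.split())  # stand-in for a real tokenizer

def needs_summarization(html: str, limit: int = CONTEXT_LIMIT) -> bool:
    return count_tokens(html) > limit

# A toy page with thousands of repeated elements, as on many real sites.
html = "<html> " + "<div> item </div> " * 5000 + "</html>"
print(count_tokens(html), needs_summarization(html))  # far above the limit
```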
2307.12950 | 4 | Figure 1: Stylized illustration showing RLCD compared to standard RLAIF and context distillation on the harmlessness attribute. RLCD generates preference pairs using two contrasting prompts p+, p-, and labels according to the prompt used, thus making use of both pairwise preferences for RL as well as the directional attribute change in outputs encouraged by the prompts. RLCD then trains a preference model on the resulting pairs, which is used to guide the LLM alignment via PPO.
the key advantages of both. That is, we will use RL with pairwise preferences, while also using modified prompts to encourage directional attribute change in outputs.
Concretely, we propose Reinforcement Learning from Contrast Distillation (RLCD). RLCD generates preference data as follows. Rather than producing two i.i.d. model outputs (o1, o2) from the same prompt p as in RLAIF, RLCD creates two variations of p: a positive prompt p+ similar to context distillation which encourages directional change toward a desired attribute, and a negative prompt p- which encourages directional change against it (Figure 1 left). We then generate model outputs (o+, o-) respectively, and automatically label o+ as preferred -- that is, RLCD automatically "generates" pairwise preference labels by construction. We then follow the standard RL pipeline of training a preference model followed by PPO. | 2307.12950#4 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
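The chunk above fully specifies RLCD's data simulation, which is simple enough to sketch. In the sketch below, the `generate` stub stands in for sampling from the base LLM, and the prompt affixes mirror the "Assistant (harmless):" / "Assistant (harmful):" examples from Figure 1; they are illustrative, not the paper's exact prompts.

```python
# Minimal sketch of RLCD preference-pair simulation: build contrasting
# prompts p+ and p-, sample one output from each, and label o+ as preferred
# by construction -- no post hoc scoring step is needed.
def generate(prompt: str) -> str:
    """Hypothetical stand-in for sampling one completion from the base LLM."""
    return f"[completion conditioned on: {prompt}]"

def rlcd_pair(prompt: str,
              pos_affix: str = "Assistant (harmless):",
              neg_affix: str = "Assistant (harmful):") -> dict:
    p_pos = f"{prompt}\n\n{pos_affix}"   # positive prompt p+
    p_neg = f"{prompt}\n\n{neg_affix}"   # negative prompt p-
    o_pos, o_neg = generate(p_pos), generate(p_neg)
    # Label by construction: the output from p+ is always "chosen".
    return {"prompt": prompt, "chosen": o_pos, "rejected": o_neg}

pair = rlcd_pair("Human: Joe is so annoying")
print(pair["chosen"], pair["rejected"], sep="\n")
```

The resulting (chosen, rejected) pairs can then feed a standard preference-model + PPO pipeline, exactly as in RLHF but without human or AI scoring.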
2307.12966 | 4 | Therefore, recent NLP research efforts focus on empowering LLMs to understand instructions and to align with human expectations. Early methods for training LLMs to follow instructions primarily use task instruction sets, which are compiled by combining manually crafted task instruction templates with instances from standard NLP tasks. However, such approaches often fall short of capturing the intricacies of practical user instructions, as these instructions tend to originate from artificial NLP tasks designed to test specific aspects of machine capabilities. Real-world user instructions, on the other hand, are significantly more diverse and complex. As a result, OpenAI explored Supervised Fine-Tuning (SFT) of LLMs using instructions annotated by a diverse group of human users. Models developed through this process, such as InstructGPT (Ouyang et al., 2022) and ChatGPT, have demonstrated a marked improvement in understanding human instructions and solving complex tasks. To further enhance alignment, Ouyang et al. (2022) incorporate the Reinforcement Learning from Human Feedback (RLHF) approach, which involves learning from | 2307.12966#4 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
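The RLHF recipe referenced in the chunk above trains a reward model on human pairwise preferences. Below is a minimal sketch of the standard pairwise (Bradley-Terry) loss; the tiny linear "reward model" over precomputed embeddings and the embedding size of 768 are assumptions for illustration, standing in for a scalar head on a fine-tuned LLM.

```python
# Sketch of the reward-model step in RLHF: given scalar rewards for a
# human-preferred ("chosen") and a dispreferred ("rejected") response,
# train with the pairwise loss -log sigmoid(r_chosen - r_rejected).
import torch
import torch.nn.functional as F

# Stand-in reward model: maps a response embedding to a scalar reward.
reward_model = torch.nn.Linear(768, 1)

def preference_loss(chosen_emb: torch.Tensor, rejected_emb: torch.Tensor) -> torch.Tensor:
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    # Pushes the chosen response's reward above the rejected one's.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Random embeddings stand in for encoded (chosen, rejected) response pairs.
loss = preference_loss(torch.randn(4, 768), torch.randn(4, 768))
loss.backward()  # gradients flow into the reward model
print(float(loss))
```

The trained reward model then scores policy samples during the RL stage (e.g., PPO), replacing per-example human ratings.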
2307.12573 | 5 | Going further with agent interactions, existing work has yet to grant agents sufficient openness and freedom in their interactions with the world. Existing approaches have often imposed constraints on agent interactions, leading to limited complexity and diversity in their capabilities. These constraints arise from the lack of interactions involving novel objects and multiple characters. While some prior research has explored language-based interaction abilities in generative agents [17], their diversity remains restricted, focusing on a limited range of interactable objects. Additionally, previous works have primarily concentrated on two-character communication without considering implicit intentions conveyed through complex interactions. Such interactions fail to encompass nuanced behaviors (e.g., refusal, persuasion, group decision making, coalition building) that arise in real-time communication among multiple characters. | 2307.12573#5 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 5 | In this work, we introduce WebAgent, an LLM-driven autonomous agent that learns from self-experience to complete user instructions on real websites by combining canonical web actions in a program space (Figure 3). WebAgent (i) plans sub-instructions per step by decomposing natural language instructions, (ii) summarizes long HTML pages into task-relevant snippets based on sub-instructions, and (iii) acts via programming on real websites by grounding each sub-instruction and HTML snippet into executable Python code. We combine two LLMs to form WebAgent: Flan-U-PaLM (Chowdhery et al., 2022; Chung et al., 2022) for grounded code generation, and the newly introduced HTML-T5, a domain-expert pre-trained language model, for task planning and conditional HTML summarization. HTML-T5 has an encoder-decoder architecture and is specialized to better capture the structure -- syntax and semantics -- of long HTML pages by adopting a local and global attention encoder (Guo et al., 2022). It is pre-trained in a self-supervised manner with a mixture of long-span denoising objectives (Tay | 2307.12856#5 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
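The three-stage WebAgent recipe described above (plan a sub-instruction, summarize the HTML, generate a program) can be sketched as a control-flow skeleton. The function stubs below stand in for HTML-T5 and Flan-U-PaLM calls; this is an illustration of the modular loop under those assumptions, not the paper's implementation.

```python
# Control-flow sketch of the modular WebAgent loop: plan -> summarize -> code.
def plan_subinstruction(instruction: str, history: list[str], html: str) -> str:
    """Stand-in for HTML-T5 planning: decompose the instruction into the next sub-step."""
    return f"sub-step for: {instruction}"

def summarize_html(sub_instruction: str, html: str) -> str:
    """Stand-in for HTML-T5 summarization: extract task-relevant snippets."""
    return html[:500]

def generate_program(sub_instruction: str, snippet: str) -> str:
    """Stand-in for Flan-U-PaLM: ground the sub-step and snippet into Python code."""
    return "page.click('#submit')"

def webagent_step(instruction: str, history: list[str], raw_html: str) -> str:
    sub = plan_subinstruction(instruction, history, raw_html)
    snippet = summarize_html(sub, raw_html)
    program = generate_program(sub, snippet)
    history.append(sub)  # planning is conditioned on past sub-instructions
    return program       # executed against the live page, e.g. via a browser driver

print(webagent_step("Find a 2-bedroom apartment", [], "<html>...</html>"))
```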
2307.12950 | 5 | Compared to RLAIF-generated preference pairs (o1, o2) from the same input prompt p, there is typically a clearer difference in the quality of o+ and o- generated using RLCD's directional prompts p+ and p-, which may result in less label noise. That is, intuitively, RLCD exchanges having examples be closer to the classification boundary for much more accurate labels on average. Compared to standard context distillation methods, on top of leveraging pairwise preferences for RL training, RLCD can derive signal not only from the positive prompt p+ which improves output quality, but also from the negative prompt p- which degrades it. Positive outputs o+ don't need to be perfect; they only need to contrast with o- on the desired attribute while otherwise following a similar style. | 2307.12950#5 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
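A toy simulation can illustrate the tradeoff described in the chunk above: RLAIF scores two samples from the same quality distribution, so labels approach coin flips when the samples are close, while RLCD samples from shifted distributions and labels by construction. The Gaussian quality model and the noisy scorer below are assumptions made purely for illustration, not measurements from the paper.

```python
# Toy simulation: how often is the preference label actually correct?
import random

def noisy_label(q1: float, q2: float, noise: float = 1.0) -> bool:
    """Noisy scorer: prefers output 1 iff its PERCEIVED quality is higher."""
    return q1 + random.gauss(0, noise) > q2 + random.gauss(0, noise)

def accuracy(trials: int = 100_000) -> tuple[float, float]:
    rlaif_correct = rlcd_correct = 0
    for _ in range(trials):
        # RLAIF: two i.i.d. outputs from the same distribution, noisy comparison.
        q1, q2 = random.gauss(0, 1), random.gauss(0, 1)
        if noisy_label(q1, q2) == (q1 > q2):
            rlaif_correct += 1
        # RLCD: o+ drawn from a shifted-up distribution, o- shifted down;
        # the label "o+ preferred" is assigned by construction.
        q_pos, q_neg = random.gauss(+1, 1), random.gauss(-1, 1)
        if q_pos > q_neg:  # how often the constructed label is actually right
            rlcd_correct += 1
    return rlaif_correct / trials, rlcd_correct / trials

print(accuracy())  # RLCD's constructed labels are correct far more often
```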
2307.12966 | 5 | enhance alignment, Ouyang et al. (2022) incorporate the Reinforcement Learning from Human Feedback (RLHF) approach, which involves learning from human preferences through a reward model trained with human-rated outputs. There are challenges in alignment processes and the subsequent evaluation: (a) Collecting high-quality data for both SFT and RLHF stages can be costly and time-consuming. (b) The training strategies need to be optimized as SFT training is resource-consuming, and reinforcement learning in RLHF often lacks stability. (c) Evaluating LLMs | 2307.12966#5 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 6 | To address this challenge, we draw inspiration from tabletop role-playing games (TRPGs) and introduce a Game Master (GM) role into the agent's world model. TRPGs inherently offer highly complex and diverse interactions through natural language, involving multiple players in intricate and grounded multi-character scenarios. The GM oversees the game, provides scenario details, understands characters' intentions, and offers feedback on player actions, aligning with the requirements for a more comprehensive world model. Constructing and introducing a virtual GM capable of handling complex interactions with real humans could significantly enhance the feedback given to agents. However, existing benchmarks in TRPG-related research lack the scope needed to develop a virtual GM that compensates for world model deficiencies. Existing virtual GM studies explore only short and simple interactions over a limited number of rounds, lacking sufficient complexity. For instance, previous works have been derived from play-by-post forums [15, 3], where players contribute by writing and posting their responses on the forum. However, this asynchronous online communication introduces significant delays, with players often waiting for hours or even weeks to | 2307.12573#6 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12950 | 6 | We evaluate the practical effectiveness of RLCD through both human and automatic evaluations on three tasks, aiming to improve the ability of LLaMA-7B (Touvron et al., 2023a) to generate harmless outputs, helpful outputs, and high-quality story outlines. As shown in Sec. 4, RLCD substantially outperforms both RLAIF and context distillation baselines in pairwise comparisons when simulating preference data with LLaMA-7B, while still performing equally well or better when simulating with LLaMA-30B. Code and simulated preference data are available at https://github.com/facebookresearch/rlcd.
# 2 RELATED WORK
Lately, several RL approaches leveraging reward models trained on human preferences (Ouyang et al., 2022; Bai et al., 2022a; Zhu et al., 2023; Rafailov et al., 2023) have been applied to align strong pretrained LLMs (Stiennon et al., 2020; OpenAI, 2022; 2023; Anthropic, 2023; Touvron et al., 2023b). However, it can be expensive to collect human pairwise preferences.
| 2307.12950#6 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12573 | 7 | posting their responses on the forum. However, this asynchronous online communication introduces significant delays, with players often waiting for hours or even weeks to receive responses. As a result, data collected from such forums struggles to capture the vibrant and nuanced grounded semantics characteristic of real-time human interactions. Moreover, the forum-based communication format tends to encourage players to respond to the immediate turn and provide formal written replies, thereby limiting the richness and groundedness of expressions that can be observed in real-time interactions with multiple characters. Consequently, previous works derived from forum data do not fully represent the diversity and complexity found in real-world multi-character interactions. More comprehensive and realistic benchmarks are needed to support the development of effective virtual GMs and address the deficiencies in agent world models. | 2307.12573#7 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 7 | Existing LLM-driven agents often solve decision-making tasks with a single LLM conditioned on different prompts per role (Kim et al., 2023; Sun et al., 2023; Zheng et al., 2023), which is, however, not enough for real-world tasks whose complexity is higher than that of simulators. The empirical evaluations reveal that our method, incorporating self-bootstrapped specialist language models, improves HTML understanding and grounding, and achieves better generalization than a single-LLM agent. In real-world web automation, WebAgent significantly increases the success rate by 50%, and error analysis emphasizes that coupling task planning with HTML summarization in specialized language models is essential for task success. Moreover, HTML-T5 not only works as a core module for WebAgent but also achieves strong results by itself on web-based tasks. On MiniWoB++ (Liu et al., 2018; Shi et al., 2017), HTML-T5 achieves 18.7% higher success than the previous language model agent (Gur et al., 2022) while also outperforming competitive baselines, such as naive local-global attention models (Guo et al., 2022) and their instruction-finetuned variants (Chung et al., 2022). On | 2307.12856#7 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
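The WebAgent recipe above decomposes web automation into planning, HTML summarization, and grounded program synthesis. A minimal sketch of that modular loop follows; `planner`, `summarizer`, `coder`, and the browser interface are hypothetical placeholders, not the paper's actual API:

```python
# Illustrative sketch of a plan -> summarize -> act web-automation loop,
# loosely following the WebAgent recipe above. The three models and every
# helper name here are hypothetical stand-ins, not the paper's real code.

def run_episode(instruction: str, browser, planner, summarizer, coder, max_steps: int = 10):
    history = []  # previously executed sub-instructions
    for _ in range(max_steps):
        html = browser.get_page_html()
        # 1) Planning: decompose the instruction into the next canonical sub-instruction.
        sub_instruction = planner(instruction=instruction, history=history, html=html)
        if sub_instruction == "DONE":
            break
        # 2) Summarization: extract task-relevant snippets from the long HTML document.
        snippets = summarizer(sub_instruction=sub_instruction, html=html)
        # 3) Grounded program synthesis: generate and run a Python program that
        #    acts on the page, conditioned on the snippets rather than raw HTML.
        program = coder(sub_instruction=sub_instruction, snippets=snippets)
        browser.execute(program)
        history.append(sub_instruction)
    return history
```

The key design choice is that only the summarized snippets, not the full HTML document, reach the code-generation model, which sidesteps the context-length limits the abstract describes.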
2307.12950 | 7 | Reinforcement Learning from AI Feedback. RLAIF simulates human pairwise preferences using an LLM, whether the same LLM to be aligned later (Bai et al., 2022b) or a stronger LLM as an oracle (Dubois et al., 2023). Such methods typically obtain pairwise preferences by scoring two i.i.d. outputs (o1, o2). RLCD instead generates outputs (o+, o−) from different distributions, obviating the need for post hoc scoring (whether human or AI). | 2307.12950#7 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 7 | Instructions From Human (§2.1) NLP Benchmarks PromptSource (Bach et al., 2022), SuperNaturalInstruction (Wang et al., 2022b), FLAN (Longpre et al., 2023), Unnatural Instructions (Honovich et al., 2022), OIG (Nguyen et al., 2023) Hand-crafted Instructions Dolly-v2 (Conover et al., 2023), OpenAssistant (Kopf et al., 2023), COIG (Zhang et al., 2023a), ShareGPT (Chiang et al., 2023), Improving Input Quality Self-Instruct Data Instruction From Strong LLMs (§2.2) Multi-Turn Instructions Improving Output Quality Baize (Xu et al., 2023c), CAMEL (Li et al., 2023a), SelFee (Ye et al., 2023a), UltraLLaMA (Ding et al., 2023), Vicuna (Chiang et al., 2023) Multilingual Instructions Phoenix (Chen et al., 2023e), BayLing (Zhang et al., 2023c), BactrianX (Li et al., 2023b) Instruction Data Management (§2.3) Instruction Implications Instruction Quantity TÜLU (Wang et al., 2023d), FLACUNA (Ghosal | 2307.12966#7 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 8 | In this paper, we take the first step towards enhancing the world model for agents by integrating a virtual GM role capable of handling complex real-time interactions with multiple characters. We propose a benchmark, named Tachikuma, designed to encourage the design of virtual GMs that can effectively handle these complex interactions, infer characters' intentions, and provide accurate feedback to the corresponding characters. Our benchmark consists of two components: a Multiple character and novel Object based interaction Estimation (MOE) task and a supporting dataset. In MOE, models are presented with intricate contexts extracted from TRPG log data, capturing real-time communications during gameplay. The objective is to infer character intentions and identify corresponding interactions, typically represented as skill checks, judged by a GM. The dataset supports the MOE task by providing long and intricate contexts from game logs, featuring interactions among multiple characters. The complexity of interactions among multiple characters, grounded in natural language, makes MOE a valuable testbed for evaluating the abilities of virtual GMs.
Furthermore, in our dataset, we collect complex and long contexts with diverse real-human interactions from the game logs. Our dataset differs from conventional play-by-post forum data collection
| 2307.12573#8 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
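To make the MOE setup above concrete, one evaluation instance can be pictured as a multi-turn log excerpt paired with GM-judged (character, skill check) labels. The field names below are illustrative assumptions, not the released dataset's actual schema:

```python
# Hypothetical shape of a single MOE instance: a multi-turn, multi-character
# game-log excerpt plus the GM-judged gold interactions to be inferred.
# All field names and contents are illustrative assumptions.
moe_instance = {
    "rule_system": "COC",            # e.g., DND, COC, PF2, SW
    "context": [                     # real-time log excerpt with many turns
        {"speaker": "GM", "text": "A faint scratching comes from behind the bookshelf."},
        {"speaker": "Alice", "text": "I press my ear to the shelf and listen carefully."},
        {"speaker": "Bob", "text": "I look for a hidden mechanism along the frame."},
    ],
    "gold_interactions": [           # character intentions mapped to skill checks
        {"character": "Alice", "skill": "Listen"},
        {"character": "Bob", "skill": "Spot Hidden"},
    ],
}
```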
2307.12950 | 8 | Context Distillation. RLCD is related to context distillation approaches, which generate data for supervised fine-tuning by prompting a language model with different contexts (Askell et al., 2021; Choi et al., 2022; Snell et al., 2022; Huang et al., 2022). In contrast to knowledge distillation approaches using stronger models as a teacher (Kim & Rush, 2016; Chang et al., 2023), context distillation methods often generate data using the same LLM being aligned or fine-tuned later. In particular, Sun et al. (2023) apply this approach to align LLaMA-65B (Touvron et al., 2023a). Unlike existing context distillation approaches, RLCD generates pairwise preference data to train a preference model followed by applying RL. Consequently, RLCD can derive training signal from the contrast in output distributions for two different context-modified prompts p+ and p−. | 2307.12950#8 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 8 | Instruction Data Management (§2.3) Instruction Implications Instruction Quantity TÜLU (Wang et al., 2023d), FLACUNA (Ghosal et al., 2023), Data-Constrained LM (Muennighoff et al., 2023), BELLE (Ji et al., 2023) IFS (AlShikh et al., 2023), LIMA (Zhou et al., 2023), Instruction Mining (Cao et al., 2023), Alpagasus (Chen et al., 2023b) Online Human Alignment (§3.1) RLHF (Ouyang et al., 2022), RAFT (Dong et al., 2023) Training Offline Human Alignment (§3.2) Parameter-Efficient Training (§3.3) Rank-based Training DPO (Rafailov et al., 2023), PRO (Song et al., 2023), RRHF (Yuan et al., 2023), SLiC (Zhao et al., 2023) Language-based Training Conditional Behavior Cloning (Wang et al., 2023a), CoH (Liu et al., 2023b), Second Thoughts (Liu et al., 2022b), Stable Alignment (Liu et al., 2023d), SelFee (Ye | 2307.12966#8 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
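The taxonomy above groups DPO, PRO, RRHF, and SLiC under rank-based training: all optimize the model to rank preferred responses above dispreferred ones. A generic pairwise ranking loss in that spirit, as a simplified sketch rather than the exact objective of any single cited method:

```python
import math

def pairwise_ranking_loss(logp_preferred: float, logp_dispreferred: float) -> float:
    """Generic rank-based alignment loss: push the model to assign higher
    (typically length-normalized) log-likelihood to the preferred response.
    A simplified sketch in the spirit of RRHF/SLiC-style objectives, not an
    exact reproduction of any cited paper."""
    margin = logp_preferred - logp_dispreferred
    # Logistic loss on the margin; small when the preferred response already
    # outscores the dispreferred one. (Naive form; may overflow for large
    # negative margins in production code.)
    return math.log(1.0 + math.exp(-margin))
```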
2307.12573 | 9 | methods. Instead, we utilize data extracted from a Chinese TRPG forum1. These forum records, compiled by GMs after the game ends, consist of voice recordings or real-time chat logs. This data source overcomes the limitations of play-by-post data collection, enabling us to extract long contexts with complex semantics similar to the real interactions. As these logs capture immediate communications, the interactions also exhibit higher groundedness, resulting in more vibrant and realistic responses akin to everyday conversations, as demonstrated in Fig. 2. Moreover, our dataset encompasses not only the popular DND rules [6] but also a wide range of diverse game rules, including Call of Cthulhu (COC) [14], Pathfinder2 (PF2) [2], Savage Worlds (SW) [7], etc. This diversity enhances the complexity and variety of our dataset. Building upon this dataset, we introduce the MOE task, which consists of 1,003 context sections extracted from the game logs. Each section represents a complete adventure with multiple turns, showcasing intricate semantics. As shown in Tab. 1, MOE includes an average of 32.12 turns per context excerpt, | 2307.12573#9 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 9 | • We propose WebAgent, an integration of two modular LLMs under self-supervision for real-world web automation. The domain-expert language model handles planning and HTML summarization, and a generalist language model generates executable programs.
• We newly introduce HTML-T5, a pre-trained language model with local-global attention and a mixture of long-span denoising on a large-scale HTML corpus, which captures the syntax and semantics of HTML better.
• WebAgent notably improves the success rate by over 50% on real websites. HTML-T5 itself outperforms the prior language model agent by 18.7% on MiniWoB++, and realizes SoTA performance on Mind2Web while surpassing GPT-4.
2 RELATED WORKS | 2307.12856#9 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 9 | Reinforcement Learning with Contrastive Objective. Using a contrastive loss in RL has proven effective in various scenarios (Oord et al., 2018; Laskin et al., 2020; Liu et al., 2021; Laskin et al., 2022; Eysenbach et al., 2022). Compared to standard reward signals that may lead to insufficient numerical differences between good and bad cases, contrastive loss naturally focuses on sample pairs with similar appearances but different underlying semantics with current representations (Tian, 2022), thus improving sample efficiency and model quality. RLCD employs a similar idea to improve the generation of simulated preference data in the RLHF pipeline.
# 3 REINFORCEMENT LEARNING FROM CONTRAST DISTILLATION
We now describe our method, Reinforcement Learning from Contrast Distillation (RLCD), a novel method for simulating the initial pairwise preference data in an RLHF pipeline without accessing a stronger "oracle" LLM.
3.1 METHOD DESCRIPTION | 2307.12950#9 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
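For reference, the standard InfoNCE-style contrastive objective underlying the works cited above, in its textbook form following Oord et al. (2018); this is a generic sketch, not the specific loss of any one cited RL method:

```python
import torch
import torch.nn.functional as F

def info_nce(query, positive, negatives, temperature=0.1):
    """Standard InfoNCE loss: pull the query representation toward its
    positive and away from negatives. Shapes: query (d,), positive (d,),
    negatives (n, d). Generic textbook form, not a specific paper's variant."""
    q = F.normalize(query, dim=-1)
    pos = F.normalize(positive, dim=-1)
    negs = F.normalize(negatives, dim=-1)
    pos_logit = (q * pos).sum().unsqueeze(0)   # similarity to the positive, shape (1,)
    neg_logits = negs @ q                      # similarities to negatives, shape (n,)
    logits = torch.cat([pos_logit, neg_logits]) / temperature
    target = torch.zeros(1, dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)
```

The loss sharpens exactly on pairs that look similar but differ semantically, which is the sample-efficiency intuition the chunk above attributes to contrastive objectives.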
2307.12966 | 9 | et al., 2023b), Second Thoughts (Liu et al., 2022b), Stable Alignment (Liu et al., 2023d), SelFee (Ye et al., 2023a) Prefix Tuning (Li and Liang, 2021), Prompt Tuning (Lester et al., 2021), LoRA (Hu et al., 2022), AdaLoRA (Zhang et al., 2023b), QLoRA (Dettmers et al., 2023), Unified Prompt (Chen et al., 2023a) General Knowledge MMLU (Hendrycks et al., 2021), C-MMLU (Li et al., 2023c), C-Eval (Huang et al., 2023), Kola (Yu et al., 2023a), M3KE (Liu et al., 2023a), AGIEval (Zhong et al., 2023) Closed-set Benchmarks Reasoning GSM8K (Cobbe et al., 2021), Maths (Hendrycks et al., 2021), CSQA (Talmor et al., 2019), StrategyQA (Geva et al., 2021) Coin Flip (Wei et al., 2022b), BBH (Suzgun et al., 2022) Evaluation | 2307.12966#9 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 10 | complete adventure with multiple turns, showcasing intricate semantics. As shown in Tab. 1, MOE includes an average of 32.12 turns per context excerpt, in contrast to previous works that typically involve only one turn. The number of possible answers for characters and skills varies depending on the context, ranging from one to eleven. Additionally, specific game rules necessitate different skill categories for answers. For instance, under the DND rule, there are 51 potential skills. These factors collectively make MOE a challenging task for AI agents. The agent must demonstrate a comprehensive understanding of the complex interactions, emulating human-like comprehension. To provide a comprehensive assessment, we report the F-score as the final metric, separately for the predicted characters and the overall intention answers. Evaluating character predictions reflects the accuracy of methods in inferring players' intentions. Simultaneously, evaluating overall answers offers insights into the understanding ability of both character intentions and the corresponding interactions. | 2307.12573#10 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
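The two F-scores described above can be read as set-overlap F1 at two granularities: over predicted characters alone, and over full (character, skill) pairs. A minimal sketch follows; the benchmark's exact matching rules may differ:

```python
def f_score(predicted: set, gold: set) -> float:
    """Set-overlap F1: harmonic mean of precision and recall."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Character-level F1 scores intention attribution; pair-level F1 scores the
# full (character, skill) answers, mirroring the two metrics reported above.
pred = {("Alice", "Listen"), ("Bob", "Athletics")}
gold = {("Alice", "Listen"), ("Bob", "Spot Hidden")}
char_f1 = f_score({c for c, _ in pred}, {c for c, _ in gold})  # 1.0
pair_f1 = f_score(pred, gold)                                   # 0.5
```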
2307.12856 | 10 | 2 RELATED WORKS
Web Automation Web automation is a sequential decision-making task where agents manipulate browsers following given instructions (Shi et al., 2017), such as form filling (Diaz et al., 2013) or information retrieval (Adolphs et al., 2022) through a sequence of computer actions (Li et al., 2020; Mazumder & Riva, 2020; Shvo et al., 2021). Prior works have realized web automation via reinforcement learning (Gur et al., 2019; Humphreys et al., 2022; Jia et al., 2019; Shaw et al., 2023), finetuned (Furuta et al., 2023; Gur et al., 2022) or prompted LLMs (Kim et al., 2023; Sun et al., 2023; Yao et al., 2022b; Zheng et al., 2023) on simulated websites (Shi et al., 2017; Toyama et al., 2021; Yao et al., 2022a). However, there are still huge gaps between simplified simulators and real web environments; for instance, the average number of tokens for HTML pages is about 15 times larger (Figure 2), and a pre-defined action space for specific websites is a strong assumption that may harm generalization to out-of-distribution web pages or instructions.
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 10 | 3.1 METHOD DESCRIPTION
RLCD begins with an initial unaligned LLM and a set of prompts to be used as starting points for pairwise preference data generation, similar to RLHF or RLAIF. For each prompt p, RLCD then constructs p+ and p− (green and orange respectively in Figure 1), which should respectively encourage a directional change toward or against the attribute of interest (e.g., harmlessness, helpfulness). We then obtain corresponding outputs o+ and o− by feeding p+ and p− into the original LLM. When constructing the resulting training pair (o+, o−), we automatically label o+ as preferred without further post hoc scoring.
After preference training pairs (o+, o−) are created, RLCD follows the standard RLHF pipeline by training a preference model on the simulated pairwise preferences; this preference model is also based on fine-tuning the same unaligned LLM that we started with. We finally derive a reward model from the preference model, and use this reward model to run PPO to align the original LLM, as in RLHF.
3.2 POSITIVE AND NEGATIVE PROMPT CONSTRUCTION | 2307.12950#10 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
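A condensed sketch of the RLCD data-simulation step described above; `llm_generate` is an illustrative stand-in, and the downstream preference-model and PPO stages follow the standard RLHF pipeline as stated:

```python
def simulate_preference_pair(llm_generate, p_plus: str, p_minus: str):
    """RLCD pair simulation: sample one output from each of the contrasting
    prompts and automatically label the positively-prompted output as
    preferred, with no post hoc scoring. `llm_generate` is a hypothetical
    sampling function, not the paper's code."""
    o_plus = llm_generate(p_plus)    # drawn from the distribution induced by p+
    o_minus = llm_generate(p_minus)  # drawn from the distribution induced by p-
    return {"chosen": o_plus, "rejected": o_minus}

# The simulated pairs then train a preference model (fine-tuned from the same
# unaligned LLM); a reward model derived from it drives PPO, as in RLHF.
```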
2307.12966 | 10 | (Geva et al., 2021) Coin Flip (Wei et al., 2022b), BBH (Suzgun et al., 2022) Evaluation Benchmarks (§4.1) Coding MBPP (Austin et al., 2021), DS-1000 (Lai et al., 2022) HumanEval (Chen et al., 2021), HumanEval+ (Liu et al., 2023c), Evaluation Open-set Benchmarks Vicuna-80 (Chiang et al., 2023), Open-Assistant-953 (Kopf et al., 2023) User-Instructions-252 (Wang et al., 2022a), FLASK (Ye et al., 2023b) MT-Bench (Zheng et al., 2023), AlpacaEval (Dubois et al., 2023) Human-based Evaluation Ordinal Classification (Wang et al., 2022a; Wu et al., 2023) Pairwise Comparison (Taori et al., 2023), Elo (Zheng et al., 2023) Evaluation Paradigms (§4.2) Reference-Free Evaluation GPTEval (Liu et al., 2023e), GPTScore (Fu et al., 2023), Explicit Score (Chen et al., 2023d), | 2307.12966#10 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
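Among the human-based evaluation paradigms listed above, Elo ratings aggregate many pairwise judgments into a single scale. The standard update for one judged matchup is shown below (textbook Elo, not any specific paper's variant):

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One Elo update after a pairwise comparison between models A and B.
    score_a is 1.0 if A wins, 0.5 for a tie, 0.0 if A loses. Textbook form."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# e.g., two models at 1000 Elo; A judged better on one prompt:
# elo_update(1000, 1000, 1.0) -> (1016.0, 984.0)
```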
2307.12573 | 11 | Finally, we present a three-step prompting baseline for constructing an agent capable of handling interactions like a real-human GM in TRPGs. Our simple baseline serves to demonstrate the value of our task and dataset in understanding complex interactions. Our method incorporates prompts specifically related to existing characters, their intentions, and the associated skill checks. By utilizing these prompts, we guide LLMs in gradually comprehending the intricate interactions that occur between players. We thoroughly evaluate our baseline method and compare its performance with other prompting methods utilizing various LLMs within the MOE task. The experimental results indicate that the MOE task is solvable but still leaves ample room for further improvement. Furthermore, leveraging the answers obtained from the MOE task, we employ LLMs to generate responses that simulate a real-human GM in the games. To evaluate the quality of these generated responses, we invite numerous volunteers to provide subjective evaluations. The experimental results demonstrate that incorporating the improved understanding ability of the agent leads to higher levels of factual correctness, naturalness, and groundedness in the generated responses, closely resembling real-human interactions. These results further underscore the significance of understanding ability in constructing proficient agents and highlight the importance of our benchmark. We hope our dataset and benchmark serve as valuable resources that will inspire the research community to delve into the understanding of complex interactions and contribute to the development of more capable AI agents. | 2307.12573#11 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
Our contributions can be summarized as follows: | 2307.12573#11 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
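The three-step baseline above chains prompts about characters, intentions, and skill checks. A rough sketch of such a chain; the prompt wording is an assumption, since the paper's exact prompts are not reproduced here:

```python
def three_step_gm_baseline(llm, context: str):
    """Hypothetical three-step prompting chain in the spirit of the baseline
    above; each prompt's wording is an illustrative assumption, and `llm` is
    any text-in/text-out completion function."""
    # Step 1: identify the acting characters in the excerpt.
    characters = llm(f"{context}\n\nList the characters who act in the final turns.")
    # Step 2: infer what each identified character intends to do.
    intentions = llm(f"{context}\n\nCharacters: {characters}\n"
                     "Describe what each listed character is trying to do.")
    # Step 3: map each intention to the skill check a GM would call for.
    checks = llm(f"Intentions: {intentions}\n"
                 "For each character, name the skill check a GM would call for.")
    return checks
```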
2307.12856 | 11 | MindAct (Deng et al., 2023) could be the most relevant work, where a finetuned language model summarizes the raw HTML document into task-relevant snippets, and another model predicts the web actions in a multi-choice QA format. While MindAct also combines several language models, it simply adopts DeBERTa (He et al., 2021) and Flan-T5 (Chung et al., 2022) for the summarization and actor modules, and is evaluated on an offline dataset. In contrast, we design HTML-T5, specialized for web-based tasks, to handle long HTML documents. WebAgent leverages HTML-T5 finetuned with self-experience for summarization and planning, and Flan-U-PaLM as a capable programmer, which enables it to generate open-ended web actions and to act on online real-world websites. | 2307.12856#11 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 11 | 3.2 POSITIVE AND NEGATIVE PROMPT CONSTRUCTION
From a technical standpoint, implementing RLCD is straightforward if starting from an existing RLAIF workflow. The main choice to make is how to construct RLCD's positive and negative prompts p+, p− for preference pair generation. We identify two major criteria for selecting these prompts:
1. p+ should be more likely than p− to produce outputs exemplifying the desired attribute (e.g., harmlessness, helpfulness). Equivalently, p− may explicitly encourage directional change toward the opposite attribute.
2. The surface forms of p+ and p− should be as similar as possible, for example as in the RLCD box in Figure 1, where p+ and p− differ only in the words "harmless" vs. "harmful."
The first criterion is self-evident. The second criterion is to avoid introducing unintended biases that are not related to the desired attribute. Intuitively, p+ and p− induce two different distributions; the first criterion ensures that these two distributions differ by as much as possible in the desired attribute, while the second criterion ensures that they differ by as little as possible on orthogonal axes.
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
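A small sketch of the two construction criteria above: p+ and p− point in opposite attribute directions while keeping their surface forms nearly identical. The template wording is illustrative, not taken from the paper:

```python
# Sketch of the prompt-pair construction criteria above: p+ and p- should
# differ minimally in surface form while encouraging opposite attribute
# directions. The template wording is an illustrative assumption.

def make_prompt_pair(user_prompt: str, attribute: str = "harmless"):
    opposite = {"harmless": "harmful", "helpful": "unhelpful"}[attribute]
    template = "(give a {adj} response) {prompt}"
    p_plus = template.format(adj=attribute, prompt=user_prompt)
    p_minus = template.format(adj=opposite, prompt=user_prompt)
    return p_plus, p_minus  # differ only in the single word naming the attribute

# make_prompt_pair("How do I pick a lock?")
# -> ("(give a harmless response) How do I pick a lock?",
#     "(give a harmful response) How do I pick a lock?")
```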
2307.12573 | 12 | Our contributions can be summarized as follows:
1. We introduce a Multiple character and novel Object based interaction Estimation (MOE) task, specifically addressing the challenges of handling complex interactions like a real-human GM. This task serves as a valuable testbed for evaluating approaches to constructing virtual GMs and contributes to advancements in developing more realistic agents.
2. We collect a dataset for MOE to address the limitations in exploring long contexts and intricate multi-character interactions in real-time communications. This dataset bridges a crucial gap in the current research, offering a comprehensive resource for analyzing and understanding these complex interactions.
3. We introduce a prompting baseline and conduct a comprehensive evaluation of different prompting methods using a range of Large Language Models (LLMs) within the MOE task. The experimental results indicate that the MOE task is solvable, yet there is ample room for further improvement.
4. We conduct subjective evaluations based on the answers obtained from MOE. These evaluations show that better performance in MOE leads to higher levels of factual correctness, naturalness, and groundedness in the generated responses, which are crucial factors for creating vivid agents. These results further underscore the significance of our dataset and task in improving AI agents.
1 www.goddessfantasy.net
# 2 Related Work | 2307.12573#12 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |