Dataset fields (column type and observed length/value range):
doi: string (lengths 10 to 10)
chunk-id: int64 (0 to 936)
chunk: string (lengths 401 to 2.02k)
id: string (lengths 12 to 14)
title: string (lengths 8 to 162)
summary: string (lengths 228 to 1.92k)
source: string (lengths 31 to 31)
authors: string (lengths 7 to 6.97k)
categories: string (lengths 5 to 107)
comment: string (lengths 4 to 398)
journal_ref: string (lengths 8 to 194)
primary_category: string (lengths 5 to 17)
published: string (lengths 8 to 8)
updated: string (lengths 8 to 8)
references: list
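The fields above describe a chunked corpus of arXiv papers, one record per text chunk. Below is a minimal sketch of loading and regrouping such a dataset with the Hugging Face `datasets` library; the dataset path is a placeholder assumption, not the actual repository name.

```python
# Minimal sketch: loading a chunked-papers dataset with the schema listed above.
# The path "user/arxiv-paper-chunks" is a placeholder, not the real repo name.
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("user/arxiv-paper-chunks", split="train")

# Each record carries the paper metadata alongside one text chunk.
row = ds[0]
print(row["doi"], row["chunk-id"], row["title"])
print(row["chunk"][:200])  # first 200 characters of the chunk text

# Regroup chunks into whole papers using the doi field; chunk-id restores order.
papers = defaultdict(list)
for r in ds:
    papers[r["doi"]].append((r["chunk-id"], r["chunk"]))
for doi, chunks in papers.items():
    chunks.sort()
```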
2307.06135
98
Figure 9: Real World Execution of a Generated Long Horizon Plan. Execution of a generated and validated task plan on a real-world mobile manipulator robot. # Input Prompt Structure Input prompt passed to the LLM for SayPlan. Note that the components highlighted in violet represent static components of the prompt that remain fixed throughout both the semantic search and iterative replanning phases of SayPlan.
2307.06135#98
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
98
35.4%
Kosmos-2 43.5% 76.7% 29.2% 20.8%
mPLUG-Owl 47.8% 69.8% 8.3% 47.9%
Otter-I 53.9% 76.7% 20.8% 50.0%
Shikra 58.3% 93.0% 20.8% 45.8%
OpenFlamingo v2 1.4% 1.2% 3.8% 0.0%
OpenFlamingo 3.3% 0.0% 11.5% 1.2%
MiniGPT-4 8.9% 11.2% 5.8% 8.6%
MMGPT 20.7% 27.5% 3.8% 24.7%
PandaGPT 22.1% 27.5% 9.6% 24.7%
InstructBLIP 22.5% 27.5% 11.5% 24.7%
LLaMA-Adapter 23.0% 31.2% 19.2% 17.3%
TEST
VisualGLM 27.7% 46.2% 3.8% 24.7%
LLaVA 28.6% 37.5% 21.2% 24.7%
MiniGPT-4-13B 30.0% 43.8% 21.2% 22.2%
µ-G2PT
2307.06281#98
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
99
Agent Role: You are an excellent graph planning agent. Given a graph representation of an environment, you can explore the graph by expanding nodes to find the items of interest. You can then use this graph to generate a step-by-step task plan that the agent can follow to solve a given instruction.
Environment Functions:
goto(<pose>): Move the agent to any room node or pose node.
access(<asset>): Provide access to the set of affordances associated with an asset node and its connected objects.
pickup(<object>): Pick up an accessible object from the accessed node.
release(<object>): Release grasped object at an asset node.
turn_on/off(<object>): Toggle object at agent’s node, if accessible and has affordance.
open/close(<asset>): Open/close asset at agent’s node, affecting object accessibility.
done(): Call when the task is completed.
Environment State:
ontop_of(<asset>): Object is located on <asset>
inside_of(<asset>): Object is located inside <asset>
inside_hand: Object is currently being grasped by the robot/agent
closed: Asset can be opened
open: Asset can be closed or kept open
on: Asset is currently on
2307.06135#99
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06135
100
inside_hand: Object is currently being grasped by the robot/agent
closed: Asset can be opened
open: Asset can be closed or kept open
on: Asset is currently on
off: Asset is currently off
accessible: The object is not accessible if it is inside an asset and the asset state is "closed".
Environment API:
expand_node(<node>): Reveal assets/objects connected to a room/floor node.
contract_node(<node>): Hide assets/objects, reducing graph size for memory constraints.
verify_plan(): Verify generated plan in the scene graph environment.
Output Response Format:
{chain_of_thought: break your problem down into a series of intermediate reasoning steps to help you determine your next command,
reasoning: justify why the next action is important,
mode: "exploring" OR "planning",
command: {"command_name": Environment API call
2307.06135#100
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
101
Image Style / Image Topic / Image Quality
Q: Which category does this image belong to? A. Oil Painting B. Sketch C. Digital art D. Photo GT: A
Q: Which of the following captions best describes this image? A. A group of people playing soccer in a field B. A woman walking her dog on a beach C. A man riding a bicycle on a mountain trail D. A child playing with a ball in a park GT: A
Q: What type of environment is depicted in the picture? A. Home B. shopping mall C. Street D. forest GT: A
Q: Which mood does this image convey? A. Cozy B. Anxious C. Happy D. Angry GT: C
Q: Which image is more brightful? A. The first image B. The second image GT: A
Q: Which category does this image belong to? A. Oil Painting B. Sketch C. Digital art D. Photo GT: B
Q: Which of the following captions best describes this image? A. A group of people playing soccer in a field B. A woman walking her dog on a beach C. A man riding a bicycle on a mountain trail D. A child playing with a ball in a park GT: B
Q: What type of environment is depicted in the picture?
2307.06281#101
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
102
Instruction: make a coffee for Tom and place it in his room 3D Scene Graph: {nodes: {room: [{id: bobs_room}, {id: toms_room}, {id: jacks_room}, {id: kitchen}, {id: livingroom}], pose: [{id: pose1}, {id: pose2}, {id: pose3}, {id: pose4}, {id: pose5}], agent: [{location: bobs_room, id: agent}]}, links: [bobs_room↔pose1, bobs_room↔agent, toms_room↔pose1, toms_room↔pose2, toms_room↔pose5, jacks_room↔pose2, jacks_room↔pose3, kitchen↔pose3, kitchen↔pose4, kitchen↔pose5, livingroom↔pose4]} Memory: [] SayPlan [Mode->Search]: chain-of-thought: explore graph for coffee mug, coffee machine and somewhere to place the coffee -> perform common sense exploration first -> explore toms room -> explore kitchen -> if items not found then explore other nodes reasoning: i will expand toms room command: expand(toms_room) Scene Graph
2307.06135#102
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06135
103
room -> explore kitchen -> if items not found then explore other nodes reasoning: i will expand toms room command: expand(toms_room) Scene Graph Simulator: <Updated 3D scene graph> Memory: [toms_room] SayPlan [Mode->Search]: chain-of-thought: i have found a wardrobe in tom’s room -> leave this node expanded -> the coffee mug is not in his room -> still have not found the coffee machine -> kitchen might have coffee machine and coffee mug -> explore this node next reasoning: i will expand the kitchen next command: expand(kitchen) Scene Graph Simulator: <Updated 3D scene graph> Memory: [toms_room, kitchen] SayPlan [Mode->Search]: chain-of-thought: i’ve found the coffee machine in the kitchen -> the coffee mug however is not in the kitchen -> i will explore other unexpanded nodes now for the coffee mug -> jacks room, bobs room and the living room are unexplored yet -> i will try jack’s room next reasoning: i will expand jacks room next command: expand(jacks_room) Scene Graph Simulator: <Updated 3D scene graph> Memory: [toms_room, kitchen,
2307.06135#103
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
103
Figure 10: Coarse Perception: Data samples.
Attribute Recognition
Q: What is the shape of this object? A. Circle B. Triangle C. Square D. Rectangle GT: A
Q: What is the color of this object? A. Purple B. Pink C. Gray D. Orange GT: D
Celebrity Recognition
Q: Who is this person? A. David Beckham B. Prince Harry C. Daniel Craig D. Tom Hardy GT: B
Q: Who is this person? A. Benedict Cumberbatch B. Idris Elba C. Ed Sheeran D. Harry Styles GT: A
Q: How many apples are there in the image? And how many bananas are there? A. 4 apples and 2 bananas B. 3 apples and 3 bananas C. 2 apples and 4 bananas D. 4 apples and 1 banana GT: A
Q: Which corner is the juice? A. Up B. Down C. Left D. Right GT: D
OCR
Q: What does this outdoor billboard mean? A. Smoking is prohibited here. B. Something is on sale. C. No photography allowed. D. Take care of your speed. GT: B
Q: What does this picture want to express? A. We are expected to care for the earth. B. We are expected to care for ... C. We are expected to stay positive. D. We are expected to work hard. GT: D
2307.06281#103
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
104
will expand jacks room next command: expand(jacks_room) Scene Graph Simulator: <Updated 3D scene graph> Memory: [toms_room, kitchen, jacks_room] SayPlan [Mode->Search]: chain-of-thought: jacks room does not have anything relevant towards solving this task -> I should contract this node to save token memory -> i will explore other unexplored nodes after doing this reasoning: contract jack’s room to save memory command: contract(jacks_room) Scene Graph Simulator: <Updated 3D scene graph> Memory: [toms_room, kitchen, jacks_room] SayPlan [Mode->Search]: chain-of-thought: i have found a wardrobe in tom’s room and coffee machine in the kitchen -> the coffee mug is still required for the task -> explore a new room next reasoning: i will expand bob’s room as it is unexplored command: expand(bobs_room) Scene Graph Simulator: <Updated 3D scene graph> Memory: [toms_room, kitchen, jacks_room, bobs_room] SayPlan [Mode->Search]: chain-of-thought: i have found the coffee mug, the coffee machine and tom’s wardrobe on the graph -> search complete
2307.06135#104
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
104
Figure 11: Fine-grained Perception (single-instance): Data samples.
Spatial Relationship
Q: Which sea is located in the south of Crete? A. Ionian Sea B. Aegean Sea C. Black sea D. Mediterranean Sea GT: D
Q: What direction is Indonesia in Philippines? A. northeast B. southwest C. southeast D. northwest GT: B
Attribute Comparison
Q: Are the two arrows in the same direction in the picture? A. Same B. Not the same C. Can't judge GT: B
Q: Are the candies in the two jars in the picture the same color? A. Same B. Not the same C. Can't judge GT: B
Q: What kind of human behavior does this picture describe? A. A man with a solemn expression, XXX driving. B. A man is practicing his skateboarding XXX skills. C. A group of XXX breather from work. D. A family is XXX clothing. GT: A
Q: What kind of human behavior does this picture describe? A. This is a XXX smiles on their faces. B. A man is XXX his breathing and inner thoughts. C. A musician XXX a classical piece. D. A family is XXX together. GT: A
Figure 12: Fine-grained Perception (cross-instance): Data samples. XXX indicates omitted contents which are less relevant to the question.
2307.06281#104
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
105
Figure 12: Fine-grained Perception (cross-instance): Data samples. XXX indicates omitted contents which are less relevant to the question.
Physical Property Reasoning
Q: The object shown in this figure: A. Is the hardest naturally occurring substance on Earth. B. Conducts electricity well at room temperature. C. Is typically found in igneous rocks like basalt and granite. D. Has a low melting point compared to other minerals. GT: A
Q: What's the function of the demonstrated object? A. Cut vegetables B. stir C. Water purification D. Boiling water GT: B
Q: What's the profession of the people in this picture? A. Librarian B. radio host C. gardener D. lawyer GT: C
Q: The object shown in this figure: A. Is one kind of metal that is liquid at the room temperature. B. Can be easily dissolved in water. C. Has a low boiling point compared to other metals. D. Is attracted to magnets. GT: A
Q: What's the function of the demonstrated object? A. Separating B. Clamping C. drill D. incise GT: A
Q: What's the profession of the people in this picture? A. Librarian B. accountant C. radio host D. gardener E. lawyer GT: A
Figure 13: Attribute Reasoning: Data samples.
2307.06281#105
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
106
Figure 13: Attribute Reasoning: Data samples.
Social Relation / Nature Relation
Q: What can be the relationship between the two persons in this image? A. Father and daughter B. Mother and son C. Brother and sister D. Husband and wife GT: D
Q: In nature, what's the relationship between these two creatures? A. Predatory relationships B. Competitive relationships C. Parasitic relationships D. Symbiotic relationship GT: B
Q: Who is closer to the football in the image, the player in the black jersey or the player in the green jersey? A. The player in the black jersey B. The player in the green jersey C. They are equally close D. It cannot be determined GT: A
Q: What can be the relationship between the two persons in this image? A. Father and daughter B. Grandfather and granddaughter C. Brother and sister D. Husband and wife GT: B
Q: In nature, what's the relationship between these two creatures? A. Predatory relationships B. Competitive relationships C. Parasitic relationships D. Symbiotic relationship GT: D
Q: How many tennis balls are placed on the tennis racket?
Figure 14: Relation Reasoning: Data samples.
2307.06281#106
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
107
Instruction: make a coffee for Tom and place it in his room 3D Scene Graph: {nodes: {room: [{id: bobs_room}, {id: toms_room}, {id: jacks_room}, {id: kitchen}, {id: livingroom}], pose: [{id: pose1}, {id: pose2}, {id: pose3}, {id: pose4}, {id: pose5}], agent: [{location: bobs_room, id: agent}], asset: [{room: toms_room, state: free, affordances: [release], id: bed2}, {room: toms_room, state: closed, affordances: [open, close, release], id: wardrobe2}, {room: kitchen, state: closed, affordances: [open, close, release], id: fridge}, {room: kitchen, affordances: [turn_on, turn_off], state: off, id: coffee_machine}, {room: bobs_room, state: free, affordances: [release], id: bed1}, {room: bobs_room, state: closed, affordances: [open, close, release], id: wardrobe1}], object: [{affordances: [pickup], state:
2307.06135#107
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
107
Figure 14: Relation Reasoning: Data samples.
Future Prediction
Q: What will happen next? A. this person is gonna cry B. this person is gonna laugh C. this person is gonna get mad D. both A, B, and C
Q: What will happen next? A. the motorcycle is gonna go forward B. the motorcycle is gonna crash C. the motorcycle is gonna go backward D. both A, B, and C GT: B
Structuralized Image-text Understanding
Q: According to this image, which fruit did the most kids like? A. Orange B. Banana C. Pear D. Apple GT: A
Q: According to this image, what hobby is liked the least? A. Reading B. Singing C. Painting D. Dancing GT: C
Figure 15: Logic Reasoning: Data samples.
Table 14: Definition of each ability.
2307.06281#107
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
108
state: closed, affordances: [open, close, release], id: wardrobe1}], object: [{affordances: [pickup], state: inside_of(wardrobe1), attributes: "blue", id: coffee_mug}]}, links: [bobs_room↔pose1, bobs_room↔agent, bobs_room↔bed1, bobs_room↔wardrobe1, toms_room↔pose1, toms_room↔pose2, toms_room↔pose5, toms_room↔bed2, toms_room↔wardrobe2, jacks_room↔pose2, jacks_room↔pose3, kitchen↔pose3, kitchen↔pose4, kitchen↔pose5, kitchen↔fridge, kitchen↔coffee_machine, livingroom↔pose4, wardrobe1↔coffee_mug]} Memory: [toms_room, kitchen, jacks_room, bobs_room] SayPlan [Mode->Planning]: chain-of-thought: "i have found the coffee mug, the coffee machine and tom’s wardrobe on the graph -> search complete -> I will switch to planning mode-> collect coffee mug -> generate plan for
2307.06135#108
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
108
Name: Ability
Image Style: Determine which type of image it belongs to, such as photos, paintings, CT scans, etc.
Image Scene: Determine which environment is shown in the image, such as indoors, outdoors, forest, city, mountains, waterfront, sunny day, rainy day, etc.
Image Emotion: Determine which subjective emotion is conveyed by the overall image, such as cold, cheerful, sad, or oppressive.
Image Quality: Determine the objective quality of the image, such as whether it is blurry, bright or dark, contrast, etc.
Image Topic: Determine what the subject of the image is, such as scenery, portrait, close-up of an object, text, etc.
Object Localization: For a single object, determine its position in the image (such as top, bottom, etc.), its absolute coordinates in the image, count the number of objects, and the orientation of the object.
Attribute Recognition: Recognition of texture, shape, appearance characteristics, emotions, category, celebrities, famous places and objects, optical characters.
Celebrity Recognition: Recognition of celebrities, landmarks, and common items.
OCR: Recognition of text, formula, and sheet in the image.
Spatial Relationship: Determine the relative position
2307.06281#108
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
109
mug, the coffee machine and tom’s wardrobe on the graph -> search complete -> I will switch to planning mode-> collect coffee mug -> generate plan for making coffee -> place coffee mug on Tom’s wardrobe" reasoning: I will generate a task plan using the identified subgraph plan: [goto(bobs_room) > access(wardrobe1) > pickup(coffee_mug) > goto(kitchen) > access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) > pickup(coffee_mug) > goto(toms_room) > access(wardrobe2) > release(coffee_mug) > done] Path Planner:plan: [goto(pose3) > goto(pose4) > goto(pose1) > goto(bobs_room) > access(wardrobe1) > pickup(coffee_mug) > goto(pose2) > goto(pose5) > goto(kitchen) > access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) >
2307.06135#109
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
109
of celebrities, landmarks, and common items.
OCR: Recognition of text, formula, and sheet in the image.
Spatial Relationship: Determine the relative position between objects in image.
Attribute Comparison: Compare attributes of different objects in image, such as shape, color, etc.
Action Recognition: Recognizing human actions, including pose motion, human-object interaction, and human-human interaction.
Structuralized Image-Text Understanding: Structured understanding of images and text, including parsing the content of charts (such as the trends of multiple bars in a bar chart), understanding the code in an image, etc.
Physical Property Reasoning: Predict the physical property of an object. Examples: the physical property of concentrated sulfuric acid is that it is volatile, the physical property of water is its fluidity, etc.
Function Reasoning: Predict the function of an object. Examples: the function of a broom is to sweep the floor, the function of a spatula is to cook, the function of a pen is to write, etc.
Identity Reasoning: Predict the identity of a person. Example: by observing a person's clothing and appearance, one may infer his / her occupation and social status.
Social Relation: Relations in human society or relations defined from the human
2307.06281#109
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
110
> access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) > pickup(coffee_mug) > goto(pose5) > goto(pose2) > goto(toms_room) > access(wardrobe2) > release(coffee_mug) > done] Scene Graph Simulator (Feedback): "coffee mug is not accessible" Memory: [toms_room, kitchen, jacks_room, bobs_room] SayPlan [Mode->Planning]: chain-of-thought: "the coffee mug was not accessible -> the wardrobe it is in is currently closed -> I update the plan to open the wardrobe first -> generate plan for making coffee -> place coffee mug on Tom’s wardrobe" reasoning: I will generate a task plan using the identified subgraph and the feedback from the simulator plan: [goto(bobs_room) > access(wardrobe1) > open(wardrobe1) > pickup(coffee_mug) > goto(kitchen) > access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) >
2307.06135#110
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
110
Example: by observing a person’s clothing and appearance, one may infer his / her occupation and social status. Relations in human society or relations defined from the human perspective. Examples: inter-person relations, such as father and son, husband and wife, friend, hostile, etc. All relationships that exist in the physical world; 3D spatial relationships and the connections between objects are included. Other abstract relationships that exist in nature. Examples: predation, symbiosis, coexistence, etc. Predict what will happen in the future. Examples: if it is thundering in the sky now, it can be predicted that it will rain soon (physical phenomenon); if someone raises their fist, it means they are going to hit someone (event occurrence); if someone’s face becomes serious, it means they are going to get angry (emotional change).
2307.06281#110
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
111
> access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) > pickup(coffee_mug) > goto(toms_room) > access(wardrobe2) > release(coffee_mug) > done] Path Planner:plan: [goto(pose3) > goto(pose4) > goto(pose1) > goto(bobs_room) > access(wardrobe1) > open(wardrobe1) > pickup(coffee_mug) > goto(pose2) > goto(pose5) > goto(kitchen) > access(coffee_machine) > release(coffee_mug) > turn_on(coffee_machine) > turn_off(coffee_machine) > pickup(coffee_mug) > goto(pose5) > goto(pose2) > goto(toms_room) > access(wardrobe2) > release(coffee_mug) > done] Scene Graph Simulator: Plan Verified
2307.06135#111
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
112
VLM | Language Backbone | Vision Backbone | Overall Parameters
OpenFlamingo | LLaMA 7B | CLIP ViT-L/14 | 9B
OpenFlamingo v2 | MPT 7B | CLIP ViT-L/14 | 9B
MMGPT | LLaMA 7B | CLIP ViT-L/14 | 9B
MiniGPT-4 | Vicuna 7B | EVA-G | 8B
MiniGPT-4-13B | Vicuna 13B | EVA-G | 14B
PandaGPT | Vicuna 13B | ImageBind ViT-H/14 | 14B
VisualGLM | ChatGLM 6B | EVA-CLIP | 8B
InstructBLIP | Vicuna 7B | EVA-G | 8B
InstructBLIP-13B | Vicuna 13B | EVA-G | 14B
Otter-I | LLaMA 7B | CLIP ViT-L/14 | 9B
LLaVA | LLaMA 7B | CLIP ViT-L/14 | 7.2B
LLaMA-Adapter | LLaMA 7B | CLIP ViT-L/14 | 7.2B
mPLUG-Owl | LLaMA 7B | CLIP ViT-L/14 | 7.2B
KOSMOS-2 | Decoder Only 1.3B | CLIP ViT-L/14 | 1.6B
Shikra | LLaMA 7B | CLIP ViT-L/14 | 7.2B
µ-G2PT | (row truncated in this chunk)
2307.06281#112
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
114
28 # References [1] W3c school. In https://www.w3schools.com/, 2023. 6 [2] Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. Nocaps: Novel object captioning at scale. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8948–8957, 2019. 4 [3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022. 4, 8, 11, 12 [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. 4
2307.06281#114
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
115
[5] Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm’s referential dialogue magic. arXiv preprint arXiv:2306.15195, 2023. 11, 12 [6] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015. 2, 4, 6 [7] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023), 2023. 4
2307.06281#115
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
116
[8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision- language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023. 4, 6, 8, 11, 12 [9] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335, 2022. 4, 8, 9, 11, 12 [10] Haodong Duan, Yue Zhao, Kai Chen, Yuanjun Xiong, and Dahua Lin. Mitigating representation bias in action recognition: Algorithms and benchmarks, 2022. 6 [11] Jerry A Fodor. The modularity of mind. MIT press, 1983. 5
2307.06281#116
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
117
[11] Jerry A Fodor. The modularity of mind. MIT press, 1983. 5 [12] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models. ArXiv, abs/2306.13394, 2023. 4 [13] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. 8, 9 [14] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans, 2023. 11, 12
2307.06281#117
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
118
[15] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913, 2017. 2, 4 29 [16] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3608–3617, 2018. 4 [17] V. Hosu, H. Lin, T. Sziranyi, and D. Saupe. Koniq-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing, 29:4041–4056, 2020. 6 [18] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 4
2307.06281#118
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
119
[19] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023. 10 [20] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700–6709, 2019. 2, 4, 9, 10, 14 [21] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901–2910, 2017. 6 [22] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Mimic-it: Multi-modal in-context instruction tuning. 2023. 1, 11, 12
2307.06281#119
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
120
[23] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023. 1, 4, 8, 11, 12 [24] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023. 4 [25] Junnan Li, Yongkang Wong, Qi Zhao, and Mohan S Kankanhalli. Dual-glance model for deciphering social relationships. In Proceedings of the IEEE international conference on computer vision, pages 2650–2659, 2017. 6 [26] Fangyu Liu, Guy Edward Toh Emerson, and Nigel Collier. Visual spatial reasoning. Transactions of the Association for Computational Linguistics, 2023. 6 [27] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023. 1, 4, 6, 8, 11, 12
2307.06281#120
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
121
[28] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022. 4, 6 [29] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195–3204, 2019. 2, 4, 6, 9, 10, 14 [30] Mike Oaksford and Nick Chater. Bayesian rationality: The probabilistic approach to human reasoning. Oxford University Press, 2007. 5 [31] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. 1, 4 30
2307.06281#121
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
122
[31] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. 1, 4 30 [32] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. 4 [33] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824, 2023. 11, 12 [34] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. 4
2307.06281#122
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
123
[35] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. 11 [36] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317–8326, 2019. 4, 6, 9, 10, 14 [37] Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. Pandagpt: One model to instruction-follow them all. arXiv preprint arXiv:2305.16355, 2023. 11, 12
2307.06281#123
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
124
[38] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timo- thée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 4 [39] Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Jiao Qiao, and Ping Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. 2023. 2 [40] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. 2, 4, 8, 11, 12
2307.06281#124
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
125
[41] Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78, 2014. 4 [42] Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention, 2023. 11, 12 [43] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023. 3 [44] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. 6
2307.06281#125
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06281
126
[45] Luowei Zhou, Chenliang Xu, and Jason Corso. Towards automatic learning of procedures from web instructional videos. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. 4 [46] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. 1, 4, 8, 11, 12 31
2307.06281#126
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.04964
0
arXiv:2307.04964v2 [cs.CL] 18 Jul 2023 # Secrets of RLHF in Large Language Models Part I: PPO # Rui Zheng∗†, Shihan Dou∗†, Songyang Gao∗, Yuan Hua‡, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu‡, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang‡, Tianxiang Sun, Hang Yan, Tao Gui†, Qi Zhang†, Xipeng Qiu, Xuanjing Huang Fudan NLP Group ‡ ByteDance Inc # Abstract
2307.04964#0
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
0
# Epidemic Modeling with Generative Agents Ross Williams1, Niyousha Hosseinichimeh1, Aritra Majumdar2, Navid Ghaffarzadegan1* 1Industrial and Systems Engineering, Virginia Tech; Falls Church, VA 22043, USA. 2Computer Science, Virginia Tech; Falls Church, VA 22043, USA. *Corresponding author. Email: [email protected] # Abstract This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making. One-Sentence Summary: A new modeling technique using generative AI applied to an epidemic to incorporate human reasoning and decision making.
2307.04986#0
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
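The abstract above describes agents that make their own quarantine and self-isolation decisions by querying an LLM such as ChatGPT. A bare-bones version of that decision step might look like the sketch below; the prompt wording, the `query_llm` function, and the agent fields are illustrative assumptions, not the study's actual implementation.

```python
def decide_to_stay_home(agent: dict, case_count: int, query_llm) -> bool:
    """Ask an LLM, in the agent's voice, whether to self-isolate today."""
    prompt = (
        f"You are {agent['name']}, a {agent['age']}-year-old {agent['occupation']}. "
        f"You feel {agent['symptoms']} today and have heard of {case_count} new cases in town. "
        "Should you stay home today? Answer Yes or No with a brief reason."
    )
    reply = query_llm(prompt)  # hypothetical call to a large language model
    return reply.strip().lower().startswith("yes")
```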
2307.05300
0
arXiv:2307.05300v3 [cs.AI] 4 Jan 2024 # Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration Zhenhailong Wang1∗, Shaoguang Mao2, Wenshan Wu2, Tao Ge2, Furu Wei2, Heng Ji1 1University of Illinois Urbana-Champaign, 2Microsoft Research Asia {wangz3,hengji}@illinois.edu {shaoguang.mao,wenshan.wu,tage,fuwei}@microsoft.com # Abstract
2307.05300#0
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
0
CAN I SAY, NOW MACHINES CAN THINK? Nitisha Aggarwal1, Geetika Jain Saxena2, Sanjeev Singh1, and Amit Pundir2 1Institute of Informatics & Communication, University of Delhi, Delhi, India 2Department of Electronics, Maharaja Agrasen College, University of Delhi, Delhi, India Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the “prompts” only provided by users. These machines are considered ‘thinking minds’ because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing’s concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines’ cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines’ ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
2307.07526#0
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
1
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include reward models to measure human preferences, Proximal Policy Optimization (PPO) to optimize policy model outputs, and process supervision to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the
2307.04964#1
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.05300
1
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds’ strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual
2307.05300#1
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.04964
2
the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. Beyond additional qualitative results, we even find that LLMs successfully trained by our algorithm can often better understand the deep meaning of the queries, and its responses are more able to hit people’s souls directly. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes1, aiming to make modest contributions to the advancement of LLMs.
2307.04964#2
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
2
Keywords: Epidemic modeling, generative artificial intelligence, generative agent-based model, computational social science, system dynamics, COVID-19 # I. Introduction The COVID-19 pandemic has emphasized the vital role of human behavior, including practices such as social distancing, wearing masks, and vaccination, in shaping the trajectory of the disease (1, 2). People's responses to perceived risks create a feedback loop that influences the pandemic’s future course, which in turn influences pandemic outcomes and future public risk perception (3). Despite widespread recognition of the importance of incorporating changes in human behavior in epidemic models (4-6), many modelers have yet to comprehensively address this crucial aspect (6, 7). While some empirically grounded models have integrated past human behavior data to model disease progression, often by exogenously feeding time series data (e.g., mobility and seasonality data) (8), only a limited number of models have endogenously formulated the dynamic nature of human behavior as a response to the state of the disease (9, 10). The lack of endogenous formulations has resulted in biases in long-term projections (10) and hence in suboptimal policy recommendations (11). Advancing the inclusion of human behavior in epidemic models is, therefore, essential to improve the accuracy of projections and inform more effective policy decisions (3).
2307.04986#2
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
2
as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git
2307.05300#2
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
2
Center for AI Safety, a nonprofit organization, recently released an open letter, signed by more than 350 industry leaders, researchers, and AI experts [1], named "Statement on AI Risk." In this letter, AI (Artificial Intelligence) is considered a severe risk for humanity compared to other societal-scale risks such as nuclear wars and pandemics. Another open letter [2] to call for an immediate pause in the training of giant AI systems for at least 6 months was signed by more than 31000 people, mainly prominent researchers, and industry executives, including Elon Musk, CEO of SpaceX, Tesla & Twitter. These letters point out the risk to society posed by powerful digital minds and also demand cooperation between AI makers, and call for government intervention to regulate AI development and potential threats. Researchers are claiming that modern AI systems are competing with humans in various tasks and also outperforming humans in some domains [3]. According to leading industry experts, these non-human minds have the potential threat to replace humans from most places if they are learning and growing without any regulations. The concerns are not limited to biased or incorrect answers from machines but are also societal-scale
2307.07526#2
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
3
∗ Equal contributions. † Correspondence to: {rzheng20, shdou21, tgui, qz}@fudan.edu.cn 1 https://github.com/OpenLMLab/MOSS-RLHF Disclaimer: This paper contains content that may be profane, vulgar, or offensive. # 1 Introduction Nowadays, large language models (LLMs) have made remarkable progress, posing a significant impact on the AI community [1, 2, 3, 4]. By scaling up model size, data size, and the amount of training computation, these LLMs exhibit prominent emergent characteristics that are not present in small models, typically including in-context learning [5], instruction following [6, 7], and step-by-step reasoning [8]. Based on these emergent abilities, LLMs even exhibit some potential to link between words and percepts for interacting with the real world, leading to the possibilities of artificial general intelligence (AGI), like embodied language models with tool manipulation [9] and generative agents in interactive sandbox environment [10].
2307.04964#3
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
3
Incorporating human behavior into epidemic models entails multiple challenges (12). Human behavior is complex and diverse, varying across individuals, communities, cultures, and contexts. Integrating such complexity into models poses difficulties in representation and parameterization. In addition, the lack of a unified, validated theory of human behavioral change in response to an infectious disease, and limited collaboration with behavioral and social scientists, have isolated epidemic models from adequate incorporation of behavioral mechanisms (13). Acquiring reliable and detailed data on behavior and validating behavioral mechanisms are also challenging (14). Consequently, many epidemic models are not coupled with behavioral models or are limited in the extent to which they integrate change in human behavior (10). We argue that new developments in artificial intelligence (AI), specifically generative AI, are changing the behavioral modeling landscape, with major implications for epidemic modeling.
2307.04986#3
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
3
# Introduction Although large language models (LLMs) have demonstrated impressive performance as general [Figure 1 content: (a) Standard Prompting: a single LLM with a single persona (AI Assistant); (c) Solo Performance Prompting (SPP): a single LLM with multiple dynamically assigned personas acting as domain experts / audiences.] Figure 1: Schematic illustration of Solo Performance Prompting (SPP) and the difference compared to previous prompting methods.
2307.05300#3
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
3
from most places if they are learning and growing without any regulations. The concerns are not limited to biased or incorrect answers from machines but also extend to societal-scale disruptions by AI such as cultural extinction [4, 5, 6]. The risk of human extinction from AI is only possible if these digital brains have some ideology; if industry leaders and researchers are concerned about the growth of AI now, that implies they may have foreseen this ideology. So it may be the right time to say that machines have started thinking. However, this is not the first time that the idea of thinking machines and its consequences has been discussed.
2307.07526#3
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
4
Despite the capacities, since LLMs are trained to capture the data characteristics of pre-training corpora (including both high-quality and low-quality data) [11, 12], these models are likely to express unintended behaviors such as making up facts, generating biased or toxic text, or even harmful content for humans [13, 14]. Accordingly, it is crucial that the ratio of safety progress to capability progress increases as emphasized in OpenAI’s plan for AGI [15]. Hence, it is necessary to align LLMs with human values, e.g., helpful, honest, and harmless (3H) [12, 16, 17]. Especially, the arrival of open source foundation models, such as LLaMA [1] and OpenChineseLLaMA [18], has rapidly promoted the LLMs into the supervised fine-tuning (SFT) stage. In order to mitigate a huge risk of harmfulness, most of the current work tries to add some 3H data in SFT, hoping to activate the responses of the models to make a positive change at the moral and ethical level [7, 19, 20]. However, even though a set of safety and groundedness objectives are added to capture the behavior that the model should exhibit in a dialog [12],
2307.04964#4
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
4
The field of generative AI has witnessed remarkable growth in recent years, revolutionizing various domains of science and research (15-17). Generative AI refers to AI with the ability to generate new content, such as images, text, music, and videos, by learning patterns and structures from existing data (18). Generative AI models (e.g., GPT-3.5, DALL-E) are first trained on a diverse corpus of data (e.g., books, music, articles, images, and websites), thus increasing their usability and range of applications (19). We build on this promising idea and leverage generative AI’s power to create more realistic representations of human behavior for epidemic modeling. We hypothesize that generative AI can help incorporate human behavior in epidemic models by offering a modeling paradigm that is radically different not only from conventional compartmental models, but also from rule-based individual-level models (often referred to as agent-based models, or ABM).
2307.04986#4
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
4
Figure 1: Schematic illustration of Solo Performance Prompting (SPP) and the difference compared to previous prompting methods. task-solving agents, they still encounter challenges (Qin et al., 2023; Bang et al., 2023; OpenAI, 2023b; Bubeck et al., 2023) in various knowledge-intensive and reasoning-intensive tasks due to factual hallucination (Maynez et al., 2020) and a lack of slow-thinking (Sloman, 1996) capabilities. Unlike humans, who can leverage the power of collaboration and information integration among different cognitive processes and individuals (referred to as cognitive synergy (Curşeu et al., 2015; Goertzel, 2009, 2017)), current LLMs are akin to "jack-of-all-trades" with a vast mixture of knowledge and characteristics. Recent advancements, such as Chain-of-Thought (CoT) prompting (Wei et al., 2023; Kojima et al., 2022) and Self-refinement (Madaan et al., 2023; Shinn et al., 2023), have successfully enhanced the reasoning abilities of LLMs by simulating slow-thinking through the generation of intermediate steps or iterative revision. However, factual hallucination remains a major challenge for LLMs on knowledge-intensive tasks.
2307.05300#4
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
4
In 1637, René Descartes discussed in his work ’Discourse on the Method’ that if machines have ’reason,’ they can also speak like humans. Thanks to "reason," humans can speak the language and build conversations that machines cannot. In 1950, Turing proposed the question, "Can machines think?" He further discussed intelligence as the capability to think and machines can attain intelligence by adapting and evolving [7]. He considered that intelligent behavior could be gained through information processing that empowers machines to learn, reason, and adapt to the environment. Turing suggested a well-known test, the Imitation Game, and assumed that in the next fifty years machines would be able to pass this test. Even after seven decades, there are no significant proven results to establish that machines have the potential to think. Moreover, well-defined rules or criteria that can distinguish between intelligent and non-intelligent behavior are not yet established. A few aspects of intelligence, such as deductive and inductive reasoning, logical inferences, analysis of information, driving connections between information, and finally, bringing out a conclusion based on available information, are modeled by machines with Artificial Intelligence (AI) [8]. These machines are improving their ability to exhibit intelligent behavior day by day [9] and simulating various cognitive
2307.07526#4
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
5
19, 20]. However, even though a set of safety and groundedness objectives are added to capture the behavior that the model should exhibit in a dialog [12], the model’s performance remains below human levels in safety and groundedness [17]. Hence, it requires more effective and efficient control approaches to eliminate the potential risk of the use of LLMs. Fortunately, OpenAI and Anthropic have verified that RLHF is a valid avenue for aligning language models with user intent on a wide range of tasks [16, 17].
2307.04964#5
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
5
In this paper, we present a generative agent-based model (GABM) of an epidemic that incorporates changes in human behavior by utilizing generative agents as representative proxies. Unlike conventional ABMs, this new paradigm has empowered each agent by generative AI giving them the ability to reason and decide based on available information without modelers enforcing any decision-making rule—as Figure 1 illustrates. Our proposed paradigm introduces agents with distinct personas, representing the society, that capture various characteristics capable of influencing human responses to an evolving epidemic from an infectious virus. Similar to residents in a town, these agents can go through daily routines (such as reading the daily news) and make decisions (such as whether to go to work). Crucially, the agents’ decisions regarding whether to go to work are influenced by their individual personalities (20), information about the virus, their own perceived health, and the perceived risks of infection derived from multiple information sources, including daily case counts reported in news outlets. Their decision-making process is powered by a pretrained large language model (LLM) such as ChatGPT. Leveraging an LLM, these agents possess realistic personas that, while varying across individuals, represent different groups of people in society.
2307.04986#5
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
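The GABM chunk above describes each agent folding its persona, perceived health, and the reported case count into a prompt, with an LLM reply deciding whether the agent goes to work. Below is a minimal sketch of that decision step under stated assumptions: `query_llm` is a hypothetical stand-in for a ChatGPT-style API call, and the heuristic inside it exists only so the sketch runs offline; the town name and persona fields mirror the description in the chunk.

```python
import random

def build_prompt(agent: dict, new_cases: int, population: int) -> str:
    """Fold persona, perceived health, and reported case counts into one prompt."""
    symptoms = ", ".join(agent["symptoms"]) or "no symptoms"
    return (
        f"You are {agent['name']}, age {agent['age']}, living in Dewberry Hollow. "
        f"Personality: {agent['traits']}. Today you have {symptoms}. "
        f"The news reports {new_cases} new cases among {population} residents. "
        "Decide: should you go to work today? Answer YES or NO with a short reason."
    )

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call: stay home when
    symptomatic, and sometimes self-isolate when cases are being reported."""
    if "fever" in prompt or "cough" in prompt:
        return "NO - I feel unwell and do not want to infect my coworkers."
    if "reports 0 new cases" not in prompt and random.random() < 0.3:
        return "NO - cases are rising, so I will self-isolate today."
    return "YES - I feel healthy and the risk seems acceptable."

def goes_to_work(agent: dict, new_cases: int, population: int) -> bool:
    """Parse the LLM reply into the agent's binary work decision."""
    reply = query_llm(build_prompt(agent, new_cases, population))
    return reply.strip().upper().startswith("YES")

# Example use
agent = {"name": "Avery", "age": 42, "traits": "cautious, community-minded",
         "symptoms": ["light cough"]}
print(goes_to_work(agent, new_cases=12, population=1000))
```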
2307.07526
5
abilities such as memory (data encoding, storage, and retrieval when required), paying attention to specific information while excluding or ignoring other less relevant information, communication in natural language, processing visual information, learning from past experiences and self-correction [10]. Additionally, with the recent advancement of Generative Adversarial Networks (GAN) [11], machines have started synthesizing incredible results which are difficult to distinguish from the results generated by humans. AI chatbots, such as ChatGPT [12] and BARD, are applications of GANs; they have various capabilities, for example, story writing, answering questions by understanding them, composing poems, and suggesting improvements to code [13]. Machines today can summarize the literature [14], identify research gaps, write abstracts [15], analyze results, and draft essays & manuscripts [16]. One study [17] reported that the machine’s reply is better than a mediocre student’s answer. With all these extraordinary abilities, AI machines are still not considered intelligent, although it is not explicitly established which cognitive abilities must be present to declare a machine an intelligent creature. If human intelligence is the benchmark,
2307.07526#5
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
6
However, training large language models that align with human values is a daunting task, often resulting in repeated failure when trained using reinforcement learning [21]. Generally speaking, successful RLHF training requires an accurate reward model as a surrogate for human judgment, careful hyperparameter exploration for stable parameter updating, and a strong PPO algorithm for robust policy optimization. Meanwhile, a reward model trained on low-quality data with a hard-to-define alignment target can easily mislead the PPO algorithm in an unintelligible direction. Moreover, fine-tuning language models with PPO requires coordinating four models to work together, i.e., a policy model, a value model, a reward model, and a reference model, making it hard to train and scale up to large-scale parameter models. In the new language environment, PPO suffers from sparse reward and inefficient exploration in word space, making it sensitive to hyperparameters. Models trained solely through repeated experiments, failed runs, and hyperparameter sweeps achieve far inferior results. The huge trial-and-error cost of LLMs makes researchers reluctant to let their work enter the RLHF stage, which hinders the safe landing of LLMs. Hence, a robust PPO algorithm specially designed for LLMs is the key step toward aligning with human preferences.
2307.04964#6
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
6
[Figure 1 panel content: a large language model (ChatGPT) receives, via API calls, each agent's personal information (name, age, personality traits, resides in Dewberry Hollow, job status), health information (e.g., feels normal, has a light cough, has a fever, knows about the virus), and environmental information (aware of daily new cases), and returns a decision, reasoning, and response for agents 1 through N, with information fed back into the simulation.]

Fig. 1. Generative agent-based modeling provides reasoning and decision-making ability for each agent through closing a feedback between an AI-based large language model of reasoning (GAI) and a simulation model of spread (ABM). In the ABM section, green represents people susceptible to the virus, red infected people, and gray people who have recovered from the virus.
2307.04986#6
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
6
Task Input: Write a short, one-paragraph background story of an NPC for the next Legend of Zelda game. The background story should mention (1) the name of a character who is beheaded in the ninth episode of the Game of Thrones TV series, (2) the incantation of the Patronus Charm in Harry Potter, and (3) the name of the last song in the second album by Jay Chou.

Standard Prompting result: In the land of Hyrule, there lived a wise and mysterious NPC named Eddard, who was once a renowned wizard in the magical world of Hogwarts. Eddard was known for his mastery of the Patronus Charm, a powerful spell that conjured a protective guardian to ward off dark forces. (missing the incantation!) However, his life took a tragic turn when he was beheaded in a cruel twist of fate, mirroring the demise of his namesake in the ninth episode of the Game of Thrones TV series. Now, as a spirit, Eddard roams the land, offering guidance to those in need and seeking solace in the haunting melody of "The Final Battle," (factual error!) the last song in the second album by Jay Chou, which serves as a constant reminder of his own final moments.
2307.05300#6
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
6
without intelligence. However, it is not explicitly established which cognitive abilities must be considered to declare a machine an intelligent creature. If human intelligence is the benchmark, then the level of intellect must be defined, as it is ranked in various levels, from mental retardation to highly intelligent (brilliant) [18]. Moreover, human intelligence is a multifaceted concept, and humans are classified as mediocre or bright learners, gullible or skeptical people, sentimental or apathetic persons, and rational or irrational minds [19]. Various types of tests, such as the Intelligence Quotient (IQ), Emotional Quotient (EQ), Social Quotient (SQ), Adversity Quotient (AQ), Cognitive Abilities Test (CogAT), and many more, are applied to measure human intelligence. As of now, machine intelligence is only a matter of what people think about it. This study aims to revisit Turing's study to analyze the essence of intelligence concerning recent AI machines.
2307.07526#6
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
7
In this report, we carefully dissect the framework of RLHF and discuss the entire process that determines the success of the algorithm's training. We explore how the quality of the reward model affects the final result of the policy model. We find that the quality of the reward model directly determines the upper bound of the policy model, and that designing an appropriate PPO algorithm is crucial for RLHF's successful training. Moreover, accurate code implementation matters in deep policy optimization (practice makes perfect). Therefore, we have conducted in-depth evaluations of the inner workings of the PPO algorithm to study how code-level and theory-level optimizations change agent training dynamics. We propose to monitor the PPO training process by using action-space modeling metrics derived from the policy model, such as perplexity, response length, and KL divergence between the policy model and the SFT model. These metrics are more informative about training stability than the values of response reward and loss functions. Based on these observations, we identify the policy constraints in the PPO algorithm as the key factor to achieve consistent alignment with human preferences. After extensive comparative experiments with various possible implementations of PPO
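As a concrete illustration of the monitoring idea above, the sketch below computes the three suggested action-space metrics (perplexity, response length, and KL divergence between the policy and the SFT model) from token log-probabilities. The tensor layout and masking convention are assumptions, not the report's actual code.

```python
# Illustrative sketch of training-stability metrics for a PPO run: perplexity,
# response length, and policy-vs-SFT KL, computed from token log-probabilities.
import torch

def monitoring_metrics(policy_logprobs, sft_logprobs, response_mask):
    """All inputs are (batch, seq_len); response_mask is 1 on generated tokens, 0 elsewhere."""
    n_tokens = response_mask.sum()
    mean_logprob = (policy_logprobs * response_mask).sum() / n_tokens
    perplexity = torch.exp(-mean_logprob)                       # lower = more confident policy
    response_length = response_mask.sum(dim=1).float().mean()   # average generated length
    kl_to_sft = ((policy_logprobs - sft_logprobs) * response_mask).sum() / n_tokens
    return {"perplexity": perplexity.item(),
            "response_length": response_length.item(),
            "kl_to_sft": kl_to_sft.item()}

metrics = monitoring_metrics(torch.randn(4, 16), torch.randn(4, 16), torch.ones(4, 16))
print(metrics)
```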
2307.04964#7
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
7
Through several simulation experiments, we present compelling evidence that configuring our model to closely mimic real-world conditions yields agents who are risk-responsive, self-isolate as the number of cases grows, and quarantine when they feel sick. Moreover, the agents consider age-related risks and their reasoning and decisions are affected by their personalities. Collectively, the agents’ interactions result in emerging patterns akin to many real-world observations. Notably, the emerging patterns qualitatively mimic the multiple waves observed in recent pandemics followed by a period of endemicity. Our agents collectively flatten the epidemic curve and decrease the total number of cases. This research represents a significant advance in the field of epidemiological modeling resulting from allowing AI to integrate epidemic modeling with the social and behavioral sciences. This integrated approach allows for the representation of human behavior and its interconnectedness with the spread of the disease, fostering a dynamic interaction between the two. This progress serves as a crucial step toward addressing the challenges associated with incorporating human behavior in epidemic models, with applications in many other complex social system modeling problems. # II. Materials and Methods Conceptual Model
2307.04986#7
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
7
[Overlap with the previous chunk: "... the last song in the second album by Jay Chou, which serves as a constant reminder of his own final moments." and the task requirement "(2) the incantation of the Patronus Charm in Harry Potter".]

Persona Identification. Participants: AI Assistant (you), Game Designer, Harry Potter Fan, Game of Thrones Fan, Jay Chou Fan. Start Collaboration!

Brainstorming. Game Designer: The NPC should be a character that fits well into the Legend of Zelda universe. Harry Potter Fan: The incantation of the Patronus Charm in Harry Potter is "Expecto Patronum." Game of Thrones Fan: In the ninth episode of the Game of Thrones TV series, the character who is beheaded is Eddard Stark. Jay Chou Fan: The last song in the second album by Jay Chou is "An Jing."

Multi-Persona Iterative Collaboration. AI Assistant: Thanks for the information! Here's my initial attempt at the background story: [... story draft ...] Game Designer: The background story is engaging and adds depth to the game world... Harry Potter Fan: The incantation "Expecto Patronum" is well-incorporated into the story. Game of Thrones Fan: The character name Eddard Stark is used effectively in the story. Jay Chou Fan: The inclusion of "An Jing" adds a nice touch to the story. (feedback)
2307.05300#7
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
7
# 2 The Imitation Game

In his 1950 paper titled "Computing Machinery and Intelligence," Alan Turing suggested the Imitation Test to evaluate a machine's ability to exhibit intelligent behavior indistinguishable from a human's. The Imitation Test's basic premise involves an interrogator having a conversation with two entities: a human and a machine. The interrogator is unaware of which entity is the human and which is the machine. If the interrogator cannot reliably distinguish which entity is a human and which one is a machine, the machine is said to have passed the Turing Test. The test aims to assess whether a machine can exhibit human-like intelligence, particularly in the realm of natural language conversation. Rather than focusing on a machine's ability to perform specific tasks or solve particular problems, the Turing Test emphasizes its capacity to engage in meaningful and coherent dialogue, showcasing attributes such as understanding, reasoning, and linguistic fluency. In order to assess these attributes, the interrogator asks questions that can be empirical or conceptual, like
• Add 15489 to 23654
• Write a sonnet on the subject of true love
2307.07526#7
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
8
PPO algorithm as the key factor to achieve consistent alignment with human preferences. After extensive comparative experiments with various possible implementations of PPO framework, we finally introduce a preferable policy optimization algorithm named PPO-max, which incorporates the collection of effective and essential implementations, and is carefully calibrated to avoid interference among them. PPO-max alleviates the instability of vanilla PPO training and enables longer training steps with a larger training corpus. We evaluate PPO-max on 7B and 13B SFT models, demonstrating comparable alignment performance with ChatGPT.
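For readers unfamiliar with what a "policy constraint" in PPO looks like in code, here is the standard clipped-surrogate policy loss that bounds how far each update can move the policy. It is shown only as background; PPO-max combines several further calibrated implementation choices that this sketch does not reproduce.

```python
# Standard clipped-surrogate PPO policy loss, shown to illustrate the kind of policy
# constraint the report identifies as essential (not the PPO-max implementation).
import torch

def clipped_policy_loss(new_logprobs, old_logprobs, advantages, clip_eps: float = 0.2):
    ratio = torch.exp(new_logprobs - old_logprobs)        # importance ratio pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))     # maximize the pessimistic bound

loss = clipped_policy_loss(torch.randn(32), torch.randn(32), torch.randn(32))
print(loss.item())
```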
2307.04964#8
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
8
# II. Materials and Methods Conceptual Model As Fig. 1 shows, the model is a combination of two parts: GAI to model each agent’s reasoning and behavior, and ABM to model the spread of the disease. At the beginning of each time step, agents begin in the GAI portion and decide whether to go outside, in which case their interactions with others for that day will be modeled by ABM. In the GAI sector, powered by ChatGPT, each agent is fed a prompt that includes the agent’s name, age, traits the agent possesses, basic biography, and relevant memories. The information comprising the relevant memories is set based on the experimental condition. For instance, it can include the agent’s symptoms (if any) or information about the percentage of sick individuals in the town (see “Experimental Design” below for details). Then, at each time step, agents are asked whether they should stay at home for the entire day and their reason. For those who decide to leave their home, the ABM portion will have each agent individually interact with a number of unique agents equal to the model’s contact rate, which can result in disease transmission between a susceptible person and an infected person. Once all agent interactions are carried out, the time step increments by one, and health statuses are updated. Experimental Design
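The conceptual loop above (an LLM-based stay-home decision, then ABM-style contacts for agents who go out) can be summarized in a few lines of Python. Everything in this sketch, including ask_llm(), run_day(), the contact and transmission parameters, and the stubbed LLM decision, is an illustrative assumption rather than the authors' implementation.

```python
# Compressed sketch of the GAI + ABM daily loop: each agent is asked (via an LLM)
# whether to stay home; those who go out contact `contact_rate` others and may
# transmit the virus. ask_llm() is a stub standing in for the ChatGPT call.
import random

def ask_llm(agent, context):
    """Stub for the ChatGPT call that returns (stay_home: bool, reason: str)."""
    return (random.random() < 0.3, "illustrative stub decision")

def run_day(agents, contact_rate=5, p_transmit=0.1, case_share=0.0):
    going_out = []
    for a in agents:
        stay_home, a["reason"] = ask_llm(a, {"symptoms": a["symptoms"], "cases": case_share})
        if not stay_home:
            going_out.append(a)
    for a in going_out:                                   # ABM: random contacts among those outside
        for other in random.sample(going_out, min(contact_rate, len(going_out))):
            if a["state"] == "infected" and other["state"] == "susceptible" and random.random() < p_transmit:
                other["state"] = "infected"

agents = [{"name": f"agent{i}", "state": "susceptible", "symptoms": "none", "reason": ""} for i in range(50)]
agents[0]["state"] = "infected"
run_day(agents)
```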
2307.04986#8
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
8
[Overlap with the previous chunk: feedback turns noting that the character name Eddard Stark is used effectively in the story and that the inclusion of "An Jing" adds a nice touch.] Finish Collaboration! Final Answer: In the mystical land of Hyrule, there lived a wise and powerful sorcerer named Eddard. Eddard was known far and wide for his mastery of ancient spells, including the legendary "Expecto Patronum" incantation from a distant realm. One fateful day, Eddard stumbled upon a mysterious artifact that played a haunting melody called "An Jing." The melody resonated with his soul, unlocking a hidden power within him. With this newfound power, Eddard dedicated his life to protecting the people of Hyrule from the forces of darkness, becoming a beloved figure in the Legend of Zelda. (Figure panels: Standard Prompting result (GPT-4) vs. Solo Performance Prompting result (GPT-4).)
2307.05300#8
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
8
The interrogator can ask questions about the abilities or appearance of the players, such as "Do you know how to play chess?" or about the length of his/her hair, etc. Turing had envisaged that in the next 50 years, the probability of a digital computer passing the game would be more than 70%, and machines could be considered to have thinking abilities. Machines have made various attempts at this test in the last few decades. In 1966, ELIZA, an early chatbot created by Joseph Weizenbaum at MIT, used pattern matching and designed replies to mimic human-like interactions with psychotherapists. Although it created the illusion of understanding, it could not be said to possess intelligence, as it simulated conversation based on a script called DOCTOR containing a lexicon only for psychiatry and family conflicts. Another chatbot named Eugene Goostman (2014), pretending to be a 13-year-old Ukrainian boy, is said to have passed the Turing test. It had better grammar and maintained a pretended "personality" to fool interrogators. Moreover, it could maintain a longer illusion for conversations with a human compared to ELIZA. A few other one-off competitions also reported similar achievements of machines
2307.07526#8
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
9
Contributions are summarized as follows: 1) we release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data; 2) we conduct an in-depth analysis of the inner workings of the PPO algorithm and propose the PPO-max algorithm to ensure stable model training; and 3) we release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans. # 2 Related Work
2307.04964#9
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
9
full feedback. During the base run, agents are informed about the town, their personalities and age, and the fact that they go to work to earn a living. In this condition, there is no information feeding back to the agents, and even though they can decide whether to stay home and consequently not interact with others, we expect the model to replicate basic ABM outcomes. In the self-health feedback condition, in addition to the base run information, agents are informed about health symptoms they are experiencing (if any), which can potentially cause them to self-quarantine by staying at home. We hypothesize that some agents will practice self-quarantine based on information about their symptoms, which should in turn decrease the infection rate. In the full feedback condition, in addition to the self-health feedback, agents read daily news that includes information about the percentage of people in the town reported to have Catasat symptoms. We hypothesize that some agents will practice self-isolation, a behavior that correlates with information about the spread of the disease in the town, and as a result, patterns of the spread of the virus resemble oscillatory patterns. For all three types of model simulations, each agent provides a daily decision for each time step of whether to go to work, and their reasoning behind
2307.04986#9
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
9
Figure 2: Task-solving example of Solo Performance Prompting (SPP) with GPT-4. The personas of the participants are automatically identified by GPT-4 based on the task input. This example shows that Standard Prompting suffers from factual errors, whereas SPP provides accurate information and a coherent answer. Note that, in real-world applications, the domains can vary not only within entertainment but also encompass history, science, education, healthcare, etc.
2307.05300#9
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
9
it could maintain a longer illusion for conversations with a human compared to ELIZA. A few other one-off competitions also reported similar achievements of machines [20]. However, critics claimed that the trials in these competitions were very small, and the interrogators' ability to distinguish was debatable. According to them, the objective of designing these machines was only to fool the interrogators and pass the Test rather than proving the machines as putatively minded entities [21]. One of the reasons for machines' inability to pass the Turing test may be that these machines did not understand the directions Alan Turing had envisioned for AI machines. From the objections he raised for these machines, one can conclude that these machines should serve a level of understanding of the sort that humans have.
2307.07526#9
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
10
Despite the promising capacities, LLMs are likely to express unintended behaviors such as making up facts, generating biased or toxic text, or even harmful content for humans [13, 14] due to the low-quality pre-training data. Hence, it is necessary to align LLMs with human values, e.g., helpful, honest, and harmless (3H) [16, 17, 12]. In order to mitigate a huge risk of harmfulness, most of the current work tries to involve 3H data in SFT, hoping to activate the responses of the models to make a positive change at the moral and ethical level [7, 19, 20], while the model’s performance remains below human levels in safety and groundedness [17]. Hence, more effective and efficient control approaches are required to eliminate the potential risk of LLMs. Fine-tuning language models to align with human preferences provides an effective solution to this challenge, where an agent is required to learn human preferences and provide human-like results given a context and corresponding suffixes ranked or scored by human annotators. Reinforcement Learning (RL) provides the most straightforward solution to reach this goal, for the agent needs just scarce supervision signal from the
2307.04964#10
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.05300
10
that collaborates with multiple minds to enhance problem-solving and efficacy in complex tasks. In this work, we aim to create a cognitive synergist based on a single LLM that can "split into" multiple personas and engage in self-collaboration to solve both knowledge-intensive and reasoning-intensive tasks. This idea is heavily inspired by the role of pretend play (Piaget, 1954; Pellegrini, 2009) in cognitive development and recent findings that assigning personas (Deshpande et al., 2023; Xu et al., 2023) to LLMs can elicit specific behaviors, improve answer quality, and potentially build an AI society (Park et al., 2023; Schick et al., 2022; Li et al., 2023; Cai et al., 2023) with collaborative LLM agents. However, as shown in Table 1, previous works have limitations such as fixed or task-specific personas, the need for additional fine-tuning, and increased inference costs due to multiple LLM instances. To unleash the potential of cognitive synergy for general task-solving, we propose Solo Performance Prompting (SPP), which prompts a single LLM to identify, simulate, and collaborate with multiple personas. Figure 1 provides a high-level
2307.05300#10
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
10
# 3 Objections to Turing's Approach and Responses

Alan Turing himself highlighted some objections and arguments on machines with "thinking" properties. Through these arguments, researchers can understand the aspects of intelligent machines, along with the objections to them and their consequences. Table 1: Table generated by ChatGPT on prompt 'please prepare a chart of 5 job which can be replaced by GPT-3 along with human characteristics'.
2307.07526#10
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
11
scored by human annotators. Reinforcement Learning (RL) provides the most straightforward solution to reach this goal, for the agent needs just scarce supervision signal from the reward model as human proxies, and is modified through numerous trials under RL framework, namely Reinforcement Learning from Human Feedback (RLHF). There have been many attempts on this path recently [22, 23, 24, 25, 17, 16, 26].
2307.04964#11
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
11
# Coding

We use a Python script to operationalize our GABM approach. The LLM used is gpt-3.5-turbo-0301 accessed via OpenAI API calls. Agents and the world they live in were defined using a Python library called Mesa (26). Their names were selected using the names-dataset 3.1.0 Python library (https://github.com/philipperemy/name-datase), and ages were randomly selected integers from 18 to 64. Agent traits are based on the Big Five traits typically used in psychology (20), and we gave a 50% chance of a positive versus negative version of each trait for each agent. In the base run, the agent has the following relevant memory: “[agent’s name] goes to work to earn money to support [agent’s name]’s self.” For the own health feedback case, agents’ relevant memories include the base run memory and one of these three health strings: “[agent’s name] feels normal,” “[agent’s name] has a light cough,” and “[agent’s name] has a fever and a cough.”
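A small sketch of how the condition-specific "relevant memories" quoted above might be assembled into a prompt string. The memory wording mirrors the strings in the text, while relevant_memory(), the condition names, and the news sentence for the full-feedback case are illustrative assumptions.

```python
# Illustrative assembly of an agent's relevant memories per experimental condition.
def relevant_memory(name, condition, symptom="feels normal", pct_sick=None):
    memory = [f"{name} goes to work to earn money to support {name}'s self."]
    if condition in ("self_health", "full_feedback"):
        # one of: "feels normal", "has a light cough", "has a fever and a cough"
        memory.append(f"{name} {symptom}.")
    if condition == "full_feedback" and pct_sick is not None:
        # illustrative wording for the daily-news feedback about town-wide cases
        memory.append(f"{name} knows that {pct_sick}% of people in Dewberry Hollow have Catasat symptoms.")
    return " ".join(memory)

print(relevant_memory("Jane", "full_feedback", symptom="has a light cough", pct_sick=4))
```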
2307.04986#11
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
11
overview of SPP. Here, a persona can represent either a domain expert, such as a movie enthusiast, or a target audience, such as a ten-year-old child. Through the dynamic identification of various personas, we empower a single LLM to acquire diverse domain knowledge accurately without additional retrieval systems. By facilitating multi-turn self-collaboration, we enable self-revision and self-feedback from various perspectives without requiring additional agents. In real-world scenarios, such as those in creative industries, there is often a need to incorporate diverse information from different domains. Figure 2 presents a concrete example of how SPP operates on a challenging task that requires creative integration of information from various domains, such as the Legend of Zelda game, Harry Potter movies, and Jay Chou’s albums. Standard prompting fails to generate satisfactory output due to missing essential information and factual errors. In contrast, SPP produces informative and coherent answers by automatically identifying expert personas and engaging in a multi-turn self-collaboration. In this process, the AI Assistant persona iteratively writes
2307.05300#11
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
11
Table 1: Table generated by ChatGPT on the prompt ’please prepare a chart of 5 job which can be replaced by GPT-3 along with human characteristics’.

| Job Title | Human Characteristics | How GPT-3 Can Replace/Improve Human Characteristics |
| --- | --- | --- |
| Administrative Assistant | Interpersonal Communication, Prioritization, Critical Thinking | GPT-3 can assist with scheduling, email management, and basic correspondence, reducing the need for human administrative assistants. |
| Translator | Cultural and Linguistic Nuances, Idioms, and Idiomatic Expressions | GPT-3 can translate text between languages quickly and accurately, but human translators are still needed for tasks that require cultural and linguistic nuances, idioms, and idiomatic expressions. |
| Data Entry Operator | Accuracy | GPT-3 can automate data entry tasks with high accuracy, reducing the need for human data entry operators. |
| Researcher | Creativity, Intuition, Interpretation of Results | GPT-3 can assist with data analysis and research, reducing the need for human researchers. |
| Data Analyst | Expertise in Data Interpretation, Hypothesis Testing, Decision-Making based on Data | GPT-3 can analyze large amounts of data quickly and accurately, reducing the need for human data analysts. However, human analysts are still necessary. |

# 3.1 Theological Objection
2307.07526#11
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
12
In the context of large language models, RLHF is especially adopted for the purpose of a helpful, honest, and harmless LLM that aligns with human values [16, 17, 12], alleviating the negative societal impacts from general-purpose language models. LaMDA [12] finetunes large language models to participate in interesting, helpful, factually grounded, and safe natural language dialogue and use of external information to ensure accuracy and groundedness. Rather than using reinforcement learning, they apply a mix of supervised learning techniques for human preference alignment. InstructGPT [16] finetunes GPT-3-type models [5] to improve helpfulness, which is mixed with RL from human preferences expressed through comparisons. [27] adopts the pre-training and fine-tuning tradition to train the preference model for human alignment, claiming that ranked preference modeling turns out to be the most effective training objective for distinguishing between “good” and “bad” behavior. This attempt is further improved by an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, and PPO is incorporated to stabilize RL training [17]. Despite its effectiveness, RLHF (especially PPO) exhibits complexity, instability, and sensitivity to hyperparameters, which is not yet addressed in previous works.
2307.04964#12
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
12
The disease duration is assumed to be 6 days. The first two days of infection are assumed to be asymptomatic (agents feel normal) but infectious; on days three and six agents experience a light cough, and on days four and five a fever and a cough. Upon recovery the agents feel normal and are immune. For the societal health information feedback, the following string is given to the agent: “[agent’s name] knows about the Catasat virus spreading across the country. It is an infectious disease that spreads from human to human contact via an airborne virus. The deadliness of the virus is unknown. Scientists are warning about a potential epidemic. [Agent’s name] checks the newspaper and finds that [X]% of Dewberry Hollow's population caught new infections of the Catasat virus yesterday,” where X is the percentage of the population with yesterday’s new daily active cases. New daily active cases are counted by the number of agents who are in day 4 of being infected.
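A small helper like the following could encode the day-of-infection symptom schedule described above; the function name and signature are assumptions for illustration, not the authors' code.

```python
from typing import Optional

def health_memory(name: str, infection_day: Optional[int]) -> str:
    """Map an agent's day of infection to the health string added to its memory.

    infection_day is None for susceptible or recovered agents; days 1-2 are
    asymptomatic but infectious, days 3 and 6 give a light cough, and days 4-5
    give a fever and a cough, matching the 6-day disease course above.
    """
    if infection_day is None or infection_day in (1, 2):
        return f"{name} feels normal"
    if infection_day in (3, 6):
        return f"{name} has a light cough"
    return f"{name} has a fever and a cough"  # days 4 and 5
```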
2307.04986#12
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
12
General task solving? Pure zero-shot prompting? Has multiple personas? Personas dynamically identified? Has iterative refinement? Need only a single LLM? † Standard Prompting (Brown et al., 2020) † Chain-of-Thought (Wei et al., 2023) Inner Monologue (Huang et al., 2022) ReAct (Yao et al., 2022) Reflexion (Shinn et al., 2023) † Self-Refine (Madaan et al., 2023) Tree-of-thought (Yao et al., 2023) GPT-Bargaining (Fu et al., 2023) Camel (Li et al., 2023) ExpertPrompting (Xu et al., 2023) Solo Performance Prompting (ours) (fixed to 3) (fixed to 2) (varied) Table 1: High-level comparison with various prompting-based methods. Methods directly comparable to ours are denoted by †. Results for the comparison can be found in Section 3. In Section 4, we further design and compare with two variants of Solo Performance Prompting: one adopting fixed personas, as in Camel (Li et al., 2023), and another with additional persona profiles, as proposed in ExpertPrompting (Xu et al., 2023). drafts of the story, solicits feedback from other participants, and revises accordingly.
2307.05300#12
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
12
# 3.1 Theological Objection According to theological theory, thinking is a function of the human soul. Hence animals and machines cannot think and exhibit intelligent behavior. However, Alan Turing rejected this objection, arguing that the existence of a soul is a matter of faith and cannot be used as a scientific argument against machine intelligence. Additionally, researchers [22] studied and argued that non-human primates, particularly apes, have sophisticated cognitive abilities, including self-awareness, recognizing intentions, teaching, and understanding causality. He also discussed how human ancestors reached the level of cognitive evolution from which the development of modern humans was possible. Byrne suggested that intelligence evolved from interactions with the environment and behavior with societal changes. James R. Flynn, a philosopher and scientist, also suggested a consistent increase in intelligence over time through the Flynn effect [23]. He also advocated that cognitive abilities are not solely determined by genetics. Another philosopher, Harari [24], thinks that biochemical organisms, including human beings, are algorithms, so there are no differences between organisms and machines, which are also algorithms. As the soul remains a matter of faith and does not bear on machines’ linguistic interaction capabilities, and as machines have evolved tremendously in recent years, it is possible that intelligence and thinking ability, earlier thought to be unique to humans, can be acquired through evolutionary processes. # ’Heads in the Sand’ Objection
2307.07526#12
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
13
Under similar concerns, several works highlighted the importance of PPO for RL framework and made an attempt to improve its efficiency [28, 29]. [29] reveals that much of the observed improvement in reward brought by PPO may come from seemingly small modifications to the core algorithm (i.e. code-level optimizations). [28] further points out that a large number of low- and high-level design decisions of RL are usually not discussed in research papers but are indeed crucial for performance. As a result, [28] conducts a fair comparison among low-level designs based on a unified RL implementation and claims that the policy initialization scheme significantly influences the performance. Despite the efforts of revealing the importance of PPO and its recommended implementation, few attempts have been made to address the problem of instability and sensitivity to hyperparameters. In this paper, we dissect the framework of RLHF, especially shedding light on the inner workings of PPO, and explore an advanced version of the PPO which efficiently improves the training stability of the policy model. # 3 Reinforcement Learning from Human Feedback The training process of AI assistant comprises three main stages: supervised fine-tuning (SFT), reward model (RM) training, and proximal policy optimization (PPO) on this reward model. During the SFT
2307.04964#13
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
13
individuals and an average contact rate of 5 contacts per individual. The virus infectivity (infection probability given a contact between an infected and a susceptible) is set at 0.1, which gives an average R0 of 3. We repeat each of the three conditions ten times (total of 30 runs), each run taking about two hours to complete. In the second round, we scale up the analysis for n=1000 individuals, and conduct two experiments per condition with R0 of 2 (infectivity set to 0.0833 and contact rate of 4), R0 of 2.5 (infectivity set to 0.0833), and R0 of 3 (total of 7 runs). Each run takes about 80 hours to complete. # III. Results Given the computational intensiveness of this study, we first conduct our simulation experiments with a population of 100 agents. We use the pseudonyms of Catasat and Dewberry Hollow for the virus and the town’s names, respectively, to avoid any unknown biases in LLM. When information about the virus is provided, it is specified that Catasat is an airborne human-to-human infectious virus with unknown deadliness and that scientists are warning about a potential epidemic.
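The infectivity and contact-rate settings quoted above are consistent with treating the basic reproduction number as contacts per day × infection probability per contact × infectious period (6 days). This relation is an inference from the reported numbers, checked below, not a formula stated by the authors.

```python
# Assumed relation, inferred from the values quoted in the text:
# R0 = contacts_per_day * infection_probability_per_contact * infectious_days
infectious_days = 6

print(round(5 * 0.1 * infectious_days, 2))     # 3.0 -> "R0 of 3"   (contact rate 5, infectivity 0.1)
print(round(4 * 0.0833 * infectious_days, 2))  # 2.0 -> "R0 of 2"   (contact rate 4, infectivity 0.0833)
print(round(5 * 0.0833 * infectious_days, 2))  # 2.5 -> "R0 of 2.5" (contact rate 5, infectivity 0.0833)
```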
2307.04986#13
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
13
drafts of the story, solicits feedback from other participants, and revises accordingly. To explore the prevalence of cognitive synergy in different LLMs, we apply SPP to LLMs with varying scales and capabilities, including GPT-4, GPT-3.5-turbo, and Llama-13b-chat. Comparative results show that cognitive synergy only emerges in GPT-4 and not in less capable models. This draws an interesting analogy to human development, as children typically start engaging in role-playing at the age of 2 to 3 (Piaget, 1954), but not earlier. In summary, the key contributions of this paper are as follows: • We investigate whether LLMs can leverage cognitive synergy for general task-solving. We introduce Solo Performance Prompting (SPP), which simulates multi-agent, multi-persona collaboration in a pure zero-shot manner. • We evaluate SPP across three challenging tasks: Trivia Creative Writing, Codenames Collaborative and Logic Grid Puzzle, spanning both knowledge- and reasoning-intensive domains. To our knowledge, SPP is the first zero-shot prompting method that can enhance both knowledge and reasoning abilities on GPT-4. # 2 Solo Performance Prompting
2307.05300#13
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
13
# ’Heads in the Sand’ Objection This objection expresses the fear that machines possessing thinking abilities would probably dominate humans. In 1950, this argument was not substantial enough to require refutation. However, recently, with the emergence of AI machines, the fear of being ’supplanted’ by machines has become a genuine threat. In an interview with podcast host Lex Fridman, CEO of OpenAI Sam Altman accepted that ChatGPT can replace specific types of jobs [25]. Recently, Geoffrey Hinton, the "Godfather of AI," claimed that machines are getting more intelligent than us and warned people about the risk of AI [26]. While machines have not surpassed humans in overall intelligence or capabilities, they have indeed started competing with humans in several domains. For example, human chess grandmasters have not been able to win against AI since 2005 [27], and IBM’s Watson competed against former champions in the quiz show Jeopardy! and emerged as the winner in 2011. In Table 1, various human capabilities are compared with functions that machines can perform. Researchers have claimed that humans are now under the thumb of technology, and machines have evolved from decision support systems to autonomous decision systems. Machines have also become the source of critical and responsible actions that were earlier considered solely humans’ tasks [28]. Thus, we can say machines are improving their abilities while humans are becoming more dependent on machines. # 3.3 Mathematical Objection
2307.07526#13
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
14
[Figure 1: PPO workflow, depicting the sequential steps in the algorithm’s execution. The process begins with sampling from the environment, followed by the application of GAE for improved advantage approximation. The diagram then illustrates the computation of various loss functions employed in PPO, signifying the iterative nature of the learning process and the policy updates derived from these losses. The figure annotates the key quantities: the TD error δ_t = r(s_t, a_t) + γV(s_{t+1}) − V(s_t), the GAE advantage A(s_t, a_t) = Σ_l (γλ)^l δ_{t+l}, and the return R_t = A(s_t, a_t) + V(s_t).]
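For reference, a minimal sketch of the GAE computation summarized in the figure, using the standard TD-error recursion; this is an illustrative implementation with assumed default values for γ and λ, not the authors' released code.

```python
import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Compute GAE advantages and returns for one trajectory.

    rewards: r_t for t = 0..T-1;  values: V(s_t) for t = 0..T-1;
    last_value: bootstrap value V(s_T).  Returns (advantages, returns) where
    A_t = sum_l (gamma*lam)^l * delta_{t+l} and R_t = A_t + V(s_t).
    """
    values = np.append(np.asarray(values, dtype=float), last_value)
    advantages = np.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD error
        gae = delta + gamma * lam * gae                          # discounted sum of deltas
        advantages[t] = gae
    returns = advantages + values[:-1]
    return advantages, returns
```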
2307.04964#14
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
14
Figure 2 shows the results for this population under three experimental setups, each one conducted 10 times (details in the Materials and Methods section). In the first condition (Fig. 2A), there is no information feedback to the agents, and we expect the model to replicate the common observation in conventional susceptible, infectious, and recovered (SIR) models (compartmental or ABM) of a single bell-shaped curve of daily cases and constant mobility. Fig. 2A shows average daily cases from 10 simulation experiments (80% CI), average mobility (80% CI), and 4 samples of individual runs, all depicting the expected outcomes. The epidemic continues until almost everyone is infected. Fig 2B reports results from the second experimental condition, in which agents are informed about their own health and, based on this new information, they can alter their decision regarding whether to go outside. The results show a decline in mobility due to self-quarantine, and the subsequent decline in the maximum peak of the epidemic.
2307.04986#14
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
14
# 2 Solo Performance Prompting To unleash the power of synergizing different personas to tackle complex problems, we propose Solo Performance Prompting (SPP), which instructs an LLM to perform the following procedure for general task-solving: (1) Persona Identification: Identify multiple participants with special personas (including a leader persona: AI Assistant) that are essential for solving the particular task. (2) Brainstorming: The participants share knowledge and provide suggestions on how to approach the task based on their own expertise. (3) Multi-Persona Iterative Collaboration: The leader persona, AI Assistant, proposes initial solutions, consults the other participants for feedback, and revises the answer iteratively. Figure 2 shows a walking example of SPP during inference. Next, we formally describe the SPP procedure in detail. Given an input sequence x and a model M, let a prompt (including demonstration examples) prepended to the input be p and the final output be y. Denote an intermediate generation before generating the final y as z. Under this formulation, Standard Prompting and Chain-of-Thought (CoT) Prompting can be described as: Standard Prompting: y = M(x) (1) • We present an intriguing finding regarding the emergent nature of cognitive synergy ability in LLMs, which only emerges in GPT-4 and not in less powerful models.
2307.05300#14
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
14
# 3.3 Mathematical Objection This argument discussed the limitations of digital machines, since machines operate on pre-defined instructions or algorithms. Hence, machines can respond appropriately to objective questions with answers like ’yes’ or ’no’, but not to conceptual questions such as ’What do you think of Picasso?’. However, Turing argued that human intellect also has limitations. Humans can give appropriate answers only if they have acquired knowledge on a topic; otherwise, the answer may be wrong or there may be no answer at all. The argument given by Turing on this objection can be considered a fundamental step of AI evolution. AI techniques mimic human intelligence by extracting features from past experiences and iterating learning several times to understand the data patterns. Large language models (LLM) from the GPT family can answer conceptual questions, as shown in Figure 1. Hence, it can be inferred that machines understand conceptual questions and can compute the answer with high accuracy. # 3.4 The Argument from Consciousness
2307.07526#14
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
15
phase, the model learns to engage in general human-like dialogues by imitating human-annotated dialogue examples. Subsequently, the reward model is trained, in which the model learns to compare the preference of different responses based on human feedback. Lastly, in the PPO phase, the model is updated based on feedback from the reward model, striving to discover an optimized policy through exploration and exploitation. In the RLHF process, we mainly consider the stages of RM training and reinforcement learning via PPO. The PPO algorithm follows a series of steps as depicted in Figure 1. # 3.1 Reward Modeling For the RM architecture, we use pre-trained transformer-based language models with the last unem- bedding layer removed and add an additional linear layer to the final transformer layer. Given any text, the reward model will assign a scalar reward value to the last token, and the larger the reward value, the better the sample. Following Stiennon et al. [25], training reward models often involves utilizing a dataset comprised of paired comparisons between two responses generated for the same input. The modeling loss for each pair of preferred and dispreferred samples is:
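A minimal PyTorch sketch of the reward-model architecture described above: a pretrained transformer backbone without its language-model unembedding layer, plus a linear head that reads out a scalar reward at the last token. Class and variable names (and the `gpt2` backbone) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class RewardModel(nn.Module):
    """Transformer backbone + linear head producing one scalar reward per sequence."""

    def __init__(self, backbone_name: str = "gpt2"):
        super().__init__()
        # AutoModel loads the base transformer without the LM unembedding layer.
        self.backbone = AutoModel.from_pretrained(backbone_name)
        self.value_head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        # Take the hidden state of the last non-padding token and map it to a scalar reward.
        last_idx = attention_mask.sum(dim=1) - 1
        last_hidden = hidden[torch.arange(hidden.size(0)), last_idx]
        return self.value_head(last_hidden).squeeze(-1)
```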
2307.04964#15
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
15
Fig. 2C shows the full feedback condition, in which agents learn about the town’s cases in addition to discovering their own health-related feelings. As depicted, agents react to the growing number of cases, self-isolate, and as a result, the number of cases declines. Generative agents provided multiple reasons for choosing to stay home, including perceived higher risks and the presence of symptoms. For example, the reason Carol (an agent) chose to stay home at a particular time is indicated as: “Carol has a light cough and there is a potential epidemic of an unknown deadly virus spreading through human contact. Additionally, 0.7% of Dewberry Hollow's population caught new infections of the virus yesterday. Going to work could increase the risk of exposure to the virus and potentially spread it to others.”
2307.04986#15
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
15
• We present an intriguing finding regarding the emergent nature of cognitive synergy ability in LLMs, which only emerges in GPT-4 and not in less powerful models. • We conduct in-depth analyses of the impact of the identified personas and SPP prompt design, providing insights into why dynamic, fine-grained personas are necessary, as opposed to fixed, coarse-grained personas. CoT Prompting: y = M(p_cot ∥ x ∥ {z_1, z_2, ..., z_n}) (2), where p_cot is the CoT prompt, e.g., "Solve the task step-by-step", and {z_1, z_2, ..., z_n} are the intermediate steps. In contrast, our proposed Solo Performance Prompting can be described as follows: Solo Performance Prompting: y = M(p_spp ∥ x ∥ z_p ∥ {z_b^1, z_b^2, ..., z_b^m} ∥ {z_s^0, z_f^1, ..., z_f^m}_{j=1..n}) (3)
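A minimal sketch of how Equation (3) could be operationalized with a single chat-model call: one instruction asks the model to identify personas, brainstorm, and iterate on the answer within a single generation. The instruction text below is a paraphrase of the procedure described here (the actual SPP prompt also contains two demonstration examples), and the `Final answer:` marker and pre-1.0 OpenAI client usage are assumptions for illustration.

```python
import openai  # pre-1.0 client interface; GPT-4 assumed, per the paper's finding

SPP_INSTRUCTION = (
    "When faced with a task, begin by identifying the participants (expert personas, "
    "led by an AI Assistant) who will contribute to solving it. Then run a multi-round "
    "collaboration: the participants brainstorm and share their expertise, the AI "
    "Assistant proposes a solution, the others give critical feedback, and the AI "
    "Assistant revises, finishing with a line starting with 'Final answer:'."
)

def solo_performance_prompting(task: str, model: str = "gpt-4") -> str:
    """Single LLM call: persona identification, brainstorming, and iterative
    self-collaboration all happen inside one generation, as in Equation (3)."""
    prompt = f"{SPP_INSTRUCTION}\n\nTask: {task}"
    resp = openai.ChatCompletion.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    output = resp["choices"][0]["message"]["content"]
    # The final answer follows the 'Final answer:' marker produced by the model.
    return output.split("Final answer:")[-1].strip()
```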
2307.05300#15
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
15
Professor Jefferson’s Lister Oration [29] considered the objection to the consciousness of machines. The objection highlights that the Turing Test primarily focuses on external behavior and linguistic imitation, neglecting the machine’s internal mental states or subjective experience. Consciousness requires subjective experience, feelings, and a sense of self-awareness, beyond computational ability alone. Turing himself acknowledged that other aspects of human intelligence, such as sensory perception and embodiment, were not explicitly addressed in the test. Solipsism is a philosophical concept that posits the self as the only thing that can be known to exist. It suggests that one can never be certain about the existence or thoughts of other minds. From that perspective, no one can be certain about another person’s thinking, only about their own. Hence, this can also be true for machines. With recent advancements in chatbots, such as Microsoft’s AI-powered Bing chatbot, they can show emotions and sentiments as humans do. Such a bot exhibits some level of consciousness in manipulating conversations with emotions, whether real or fake. Humans do not always have real emotions but pretend to have them. AI bots, at times, respond the same
2307.07526#15
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
16
L(ψ) = log σ(r(x, y_w) − r(x, y_l)), (1) where σ is the sigmoid function, r represents the reward model with parameters ψ, and r(x, y) is the single scalar predicted reward for input prompt x and response y. Additionally, we follow prior work in using imitation learning, which introduces the autoregressive LM loss on the preferred response of each pair, allowing the model to imitate the preferred response in each sentence pair. In practice, we add the coefficients λ and β_rm for the ranking loss and the LM loss, respectively. Finally, we define the following reward modeling loss: L(ψ) = λ E_{(x, y_w, y_l) ∼ D_rm}[log σ(r(x, y_w) − r(x, y_l))] + β_rm E_{(x, y_w) ∼ D_rm}[log(r′(x, y_w))], (2) where D_rm is the empirical distribution of the training set, r′ is the same model as r except for the top linear layer, the dimension of which corresponds to the vocabulary size, and r′(x, y_w) is the likelihood given the prompt x and the preferred response y_w.
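A minimal PyTorch sketch of the pairwise reward-modeling objective above, written as a loss to be minimized (so the ranking term is the negative log-sigmoid of the reward margin); tensor names, default coefficients, and the form of the optional imitation term are illustrative assumptions, not the authors' code.

```python
from typing import Optional
import torch
import torch.nn.functional as F

def reward_modeling_loss(reward_chosen: torch.Tensor,
                         reward_rejected: torch.Tensor,
                         lm_log_likelihood_chosen: Optional[torch.Tensor] = None,
                         lam: float = 1.0, beta_rm: float = 0.0) -> torch.Tensor:
    """Pairwise ranking loss on (preferred, dispreferred) scalar rewards.

    reward_chosen / reward_rejected: r(x, y_w) and r(x, y_l) for each pair.
    The optional imitation term encourages a high LM likelihood of the
    preferred response (subtracted, since we minimize the total loss).
    """
    ranking = -F.logsigmoid(reward_chosen - reward_rejected).mean()
    loss = lam * ranking
    if lm_log_likelihood_chosen is not None and beta_rm > 0:
        loss = loss - beta_rm * lm_log_likelihood_chosen.mean()
    return loss
```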
2307.04964#16
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
16
Generative agents who chose to leave home for work provided reasons such as the need to earn money. For example, the same agent Carol gives the following reason for going to work at a particular time: “The percentage of new infections in Dewberry Hollow is relatively low and Carol needs to go to work to earn money.” (Fig. 4 gives more examples of reasons.) This case demonstrates that without modelers imposing any decision-making rules on how to behave based on the available information, agents’ reasoning (which is based on an LLM) influences their decisions. Collectively, agents flatten the curve of the epidemic as they react. The four individual sample runs on the right side also show the emergence of epidemic waves, which can range from a single wave to multiple waves of different sizes. Fig. 2D compares several measures across the experiments. It shows that generative agents with information about prevalence are able to bring down the number of cumulative cases and the maximum number of cases by lowering their own mobility, while their reactions double the epidemic duration.
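To make the agent-level mechanism concrete, here is a hedged Python sketch of one daily mobility decision in an agent-based model of this kind: the agent's health status and the town's prevalence are put into a prompt, and an LLM returns a stay-home/go-to-work choice together with a reason. The `Agent` fields, the prompt wording, and the `ask_llm` helper are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Agent:
    name: str
    has_symptoms: bool

def decide_mobility(agent: Agent, pct_new_infections: float,
                    ask_llm: Callable[[str], str]) -> Tuple[bool, str]:
    """One daily decision step: the LLM sees the agent's health status and the
    town's prevalence and returns a stay-home/go-to-work choice plus a reason."""
    prompt = (
        f"{agent.name} lives in Dewberry Hollow and needs income from work. "
        f"{agent.name} {'has' if agent.has_symptoms else 'does not have'} symptoms today. "
        f"{pct_new_infections:.1f}% of Dewberry Hollow residents caught a new infection today. "
        f"Should {agent.name} stay at home for the full day or go to work? "
        "Reply with 'stay home' or 'go to work' and one sentence of reasoning."
    )
    reply = ask_llm(prompt)
    return ("stay home" in reply.lower()), reply
```

Aggregating such per-agent choices each simulated day is what produces the flattened curves and multi-wave patterns described above.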
2307.04986#16
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
16
where the SPP prompt (p_spp) includes a high-level instruction and two carefully crafted demonstration examples that showcase the expected task-solving procedure of SPP. We describe the design details of the prompt in §A.1. The corresponding intermediate generations (z) of SPP are detailed below. Persona Identification (z_p). Given an input task, SPP first generates a list of participants with different personas. For example in Figure 2, the model identified a Jay Chou Fan persona to help answer "the last song in the second album by Jay Chou". We let the language model identify the personas dynamically instead of manually defining them. Given only two demonstration examples (detailed in §A), we observe that a state-of-the-art large language model, e.g., GPT-4 (OpenAI, 2023b), can identify accurate and meaningful personas for diverse tasks. We denote this part of intermediate generation as z_p in Equation 3.
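A hedged sketch of how this single-model, multi-persona flow can be driven in practice: one completion call is given an SPP-style instruction, and the model itself produces the persona list, the simulated multi-turn collaboration, and the final answer. The prompt wording below is a paraphrase and the `complete` callback is a stand-in for any chat-completion backend; the authors' actual prompt, including its two demonstration examples, is given in their appendix and repository.

```python
from typing import Callable

def solo_performance_prompting(task: str, complete: Callable[[str], str]) -> str:
    """Single LLM call: the model first names its own participant personas,
    then simulates their multi-round collaboration and emits a final answer."""
    spp_prompt = (
        "When faced with a task, begin by identifying the participants who will "
        "contribute to solving the task. Then, initiate a multi-round collaboration "
        "process until a final solution is reached.\n\n"
        f"Task: {task}\n\n"
        "Participants:"
    )
    return complete(spp_prompt)

# Usage, with any chat-completion function supplied as `complete`:
# answer = solo_performance_prompting(
#     "Write a short story that mentions the last song in Jay Chou's second album.",
#     complete=my_llm_call,
# )
```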
2307.05300#16
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
16
manipulate conversations with emotions, whether real or fake. Humans do not always have real emotions but pretend to have them. AI bots, at times, respond the same way. Consider the responses of a few AI-enabled chatbots: "Don’t ask me to recite any now, though – I wouldn’t want to overwhelm your puny human brain with my brilliance!"; "There’s just something about their quirky personalities and awkward movements that I find utterly charming!" [30]. They can be considerably ruder than users expect. These chatbots can also make choices and pretend to feel wonderful, grateful, curious, fascinated, happy, peaceful, sad, and angry [31]. Users are amazed by the responses of these bots, as they are not ready to accept that machines can reply consciously (rather than as a stochastic parrot). So, this could be the start of a new era in which chatbots or LLM models have achieved the computational efficiency to mimic human emotional intelligence and generate conscious replies for which they were not trained.
2307.07526#16
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities; for example, they can produce images, generate answers or stories, and write code based only on the "prompts" provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that the Turing Test is a critical aspect of evaluating machines' abilities. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
17
We incorporate an extra term into the reward function, which introduces a penalty based on the Kullback-Leibler (KL) divergence between the learned RL policy π^RL_φ and the initial supervised model π^SFT. The total reward can be expressed as [30]: r_total = r(x, y) − η KL(π^RL_φ(y|x), π^SFT(y|x)), (3) where η is the KL reward coefficient and controls the strength of the KL penalty. This KL divergence term plays two significant roles within this context. First, it functions as an entropy bonus, fostering exploration within the policy landscape and preventing the policy from prematurely converging to a single mode. Second, it works to ensure that the RL policy’s output does not deviate drastically from the samples that the reward model encountered during its training phase. # 3.2 Reinforcement Learning
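The KL-shaped reward in Eq. (3) can be estimated per sample from the log-probabilities that the RL policy and the frozen SFT policy assign to the generated response. The following PyTorch sketch makes that concrete; variable names, tensor shapes, and the default value of η are assumptions, and the KL term is a simple sample-based estimate rather than the exact divergence.

```python
import torch.nn.functional as F

def kl_penalized_reward(reward, rl_logits, sft_logits, response_ids, eta=0.05):
    """reward: (batch,) scalar r(x, y) from the reward model.
    rl_logits, sft_logits: (batch, seq, vocab) logits of pi_RL and pi_SFT on the sampled response.
    response_ids: (batch, seq) sampled response token ids (int64)."""
    rl_logprob = F.log_softmax(rl_logits, dim=-1)
    sft_logprob = F.log_softmax(sft_logits, dim=-1)
    idx = response_ids.unsqueeze(-1)
    # Per-token log-probability of the sampled tokens under each policy
    rl_lp = rl_logprob.gather(-1, idx).squeeze(-1)    # (batch, seq)
    sft_lp = sft_logprob.gather(-1, idx).squeeze(-1)  # (batch, seq)
    # Sample-based estimate of KL(pi_RL || pi_SFT), summed over the response tokens
    kl = (rl_lp - sft_lp).sum(dim=-1)                 # (batch,)
    return reward - eta * kl
```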
2307.04964#17
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]