Dataset schema (column: type, observed length/value range):

doi: string (length 10-10)
chunk-id: int64 (0-936)
chunk: string (length 401-2.02k)
id: string (length 12-14)
title: string (length 8-162)
summary: string (length 228-1.92k)
source: string (length 31-31)
authors: string (length 7-6.97k)
categories: string (length 5-107)
comment: string (length 4-398)
journal_ref: string (length 8-194)
primary_category: string (length 5-17)
published: string (length 8-8)
updated: string (length 8-8)
references: list
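The schema above matches a Hugging Face-style chunked arXiv dataset. As a minimal sketch of how such a dataset could be loaded and its chunks regrouped into per-paper documents, assuming it is published under a placeholder ID (`example/arxiv-paper-chunks` is hypothetical, not the actual dataset name):

```python
from collections import defaultdict

from datasets import load_dataset  # pip install datasets

# Hypothetical dataset ID; substitute the real one for this dump.
ds = load_dataset("example/arxiv-paper-chunks", split="train")

# Group chunks by paper DOI and restore their original order via chunk-id.
papers = defaultdict(list)
for row in ds:
    papers[row["doi"]].append((row["chunk-id"], row["chunk"]))

full_texts = {
    doi: " ".join(text for _, text in sorted(chunks))
    for doi, chunks in papers.items()
}

print(len(full_texts), "papers reconstructed")
```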
2307.04986
17
[Figure residue (panels of Fig. 2): A. Agents with no health-related information (base run); B: symptom information; C. Generative agents with own and societal health information (full feedback). Each panel plots daily cases and mobility against time for several simulation runs. D: Comparing epidemic measures across the conditions: cumulative cases (0.998 for A: base run, 0.992 for B: symptom information, 0.575 for C: full feedback), average mobility in the first 30 time steps (1.000, 0.932, 0.619), and the intensity and duration of the epidemic.]
2307.04986#17
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
17
Brainstorming (z_b^i). Among the identified participants, "AI Assistant (you)" is treated as a leader persona that initiates the collaboration and generates initial solutions. Before generating the initial answer, the personas brainstorm on how to approach the task from their own perspectives. For example, the Jay Chou Fan points out that the last song in Jay Chou’s second album is "An Jing" ("Silence"). We find that the brainstorming phase effectively improves the quality of the initial solution. In Equation 3, the superscript i = 0 is used to denote the "AI Assistant" persona, while i ≥ 1 represents other dynamically identified personas. The intermediate generations of the brainstorming step are denoted as {z_b^1, ..., z_b^m}.
2307.05300#17
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
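The chunk from 2307.05300 above describes SPP's brainstorming phase, where dynamically identified personas each comment on the task before the leader persona drafts an initial answer. A minimal sketch of how such a brainstorming prompt might be assembled is shown below; the persona names and the wording are illustrative assumptions, not the paper's exact prompt.

```python
# Hypothetical prompt assembly for the SPP brainstorming step.
# Persona names and phrasing are illustrative, not the paper's exact template.
def build_brainstorm_prompt(task: str, personas: list[str]) -> str:
    leader = "AI Assistant (you)"
    lines = [
        f"Task: {task}",
        f"Participants: {leader}, " + ", ".join(personas),
        "Before answering, each participant briefly brainstorms how to",
        "approach the task from their own perspective.",
    ]
    # One brainstorming slot z_b per non-leader persona.
    lines += [f"{p}: <brainstorming remarks z_b for {p}>" for p in personas]
    lines.append(f"{leader}: drafts the initial solution using the remarks above.")
    return "\n".join(lines)

print(build_brainstorm_prompt(
    "Write a story that mentions the last song of Jay Chou's second album.",
    ["Jay Chou Fan", "Creative Writer"],
))
```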
2307.04964
18
Applying RL to dialogue generation presents significant challenges due to the substantial state-action space. In this context, we consider human interaction as the “environment”. At each timestep t, the agent (i.e., the AI assistant) receives a state s_t from the environment (i.e., the dialogue history), which consists of all the dialogue text up to this point, both by the assistant and the human. Then, based on its policy π, the agent’s action a_t is to generate the next token. The environment returns a reward r(s_t, a_t), which is calculated from a reward function r trained from human preference data. The agent then transitions to the next state s_{t+1}, which includes the next dialogue history. The aim of RL is to find an optimal behavior strategy for the agent to maximize the cumulative reward (i.e., return) over a trajectory τ = {s_1, a_1, ..., s_T, a_T}. One kind of return is the finite-horizon undiscounted return R(τ) = Σ_{t=1}^{T} r(s_t, a_t), which is simply the sum of rewards accumulated within a fixed number of steps. Another is the infinite-horizon discounted return R(τ) = Σ_{t=0}^{∞} γ^t r(s_t, a_t), which takes into account all rewards obtained by the agent throughout its entire trajectory with a discount factor γ ∈ (0, 1).
2307.04964#18
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
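The RLHF chunk above (2307.04964#18) formulates dialogue generation as an MDP and defines two kinds of return. The short sketch below, with a made-up reward sequence, computes both the finite-horizon undiscounted return and the discounted return (truncated at the episode length) to make the two definitions concrete.

```python
# Made-up per-token rewards for one dialogue episode (illustrative only).
rewards = [0.1, -0.2, 0.0, 0.5, 1.0]
gamma = 0.99  # discount factor, gamma in (0, 1)

# Finite-horizon undiscounted return: R(tau) = sum_t r(s_t, a_t)
undiscounted = sum(rewards)

# Discounted return, truncated at the episode end:
# R(tau) = sum_t gamma**t * r(s_t, a_t)
discounted = sum(gamma**t * r for t, r in enumerate(rewards))

print(f"undiscounted return = {undiscounted:.3f}")
print(f"discounted return   = {discounted:.3f}")
```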
2307.04986
18
Fig. 2. Comparison of simulation results from three experimental conditions with a 100-agent population. Panels (A-C): average daily cases and mobility (with their 80% confidence intervals shown as dashed lines) as well as four sample simulations (darker lines are 3-day moving averages) for (A) the first condition with no health information given to agents, (B) when own-health information is provided, and (C) when both own-health and societal health information is provided. Panel (D) compares cumulative cases, average mobility, the largest peak in cases, and epidemic duration across the three experimental conditions.
2307.04986#18
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
18
Multi-Persona Iterative Collaboration (z_f^0, ..., z_f^m). Based on the brainstorming remarks, the AI Assistant persona generates an initial solution z_s^0, then consults each of the other participants for feedback {z_f^i}. The participants are encouraged to critique the current generation and give revision suggestions. For example, the Jay Chou Fan persona checks whether the song "An Jing" ("Silence") is correctly included in the story. This process can be repeated multiple times until every participant is satisfied with the current solution. In Equation 3, we denote the intermediate generations of the multi-turn dialogue as {z_f^0, ..., z_f^m}_{j=1...n}, where n is the number of iterations before reaching the final answer. The final answer can then be read out directly, following the user-specified output format. (Footnote 1: The tasks we use in the demonstration examples do not overlap with the evaluation tasks.)
2307.05300#18
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
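The chunk above (2307.05300#18) describes the multi-persona iterative collaboration loop: the leader persona drafts a solution, every other persona critiques it, and the draft is revised until all participants are satisfied. A schematic sketch of that loop follows; `query_llm` is a hypothetical single-model call (e.g., a chat-completion wrapper), and the prompts are paraphrases rather than the paper's exact templates.

```python
from typing import Callable

def spp_collaborate(task: str, personas: list[str],
                    query_llm: Callable[[str], str], max_iters: int = 3) -> str:
    """Sketch of SPP-style multi-persona self-collaboration with a single LLM."""
    # Leader persona drafts an initial solution from the brainstorming remarks.
    solution = query_llm(f"As the AI Assistant, draft a solution to: {task}")
    for _ in range(max_iters):
        all_satisfied = True
        for persona in personas:
            feedback = query_llm(
                f"As {persona}, critique this solution to '{task}' and suggest "
                f"revisions, or reply 'OK' if you are satisfied:\n{solution}"
            )
            if feedback.strip().upper() != "OK":
                all_satisfied = False
                solution = query_llm(
                    f"As the AI Assistant, revise the solution using this feedback "
                    f"from {persona}:\n{feedback}\n\nCurrent solution:\n{solution}"
                )
        if all_satisfied:
            break
    return solution
```

A real run would plug an actual chat model into `query_llm` and read the final answer out in the user-specified output format.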
2307.07526
18
This argument suggests a list of tasks that machines can never perform, such as (1) learning from experience, (2) telling right from wrong, (3) making mistakes, (4) having a sense of humor, (5) being kind, (6) being beautiful, (7) being resourceful, (8) being friendly, (9) falling in love, (10) making someone fall in love, (11) having initiative, (12) using words properly, (13) enjoying strawberries and cream, (14) being the subject of its own thought, (15) having as much diversity as a man, (16) doing something really new. Some of these statements touch on various aspects of human psychology and physiology. For example, if people claim machines cannot be beautiful, do they have criteria to define beauty? Beauty or ugliness is a matter of subjectivity and also depends on cultural and societal influences, not solely on physical appearance. Similarly, kindness, friendliness, and a sense of humor depend on circumstances: a soldier cannot show kindness or friendliness to the opposing army during a war, while a joke may read as criticism to someone. Moreover, all intelligent creatures also do not
2307.07526#18
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
19
# 3.2.1 Policy Gradient Methods Policy gradient methods [31] are a type of RL technique that directly optimizes the policy of the agent—the mapping of states to actions—instead of learning a value function as in value-based methods. The central idea behind policy gradient methods is to improve the policy using the gradient ascent algorithm. In essence, these methods adjust the parameters of the policy in the direction that maximally improves the expected return. The policy π is typically parameterized by θ; we denote it as π(a|s, θ), which is the probability of taking action a in state s. The update rule for the policy gradient is given as: θ ← θ + α∇θJ(θ), (4) where α is the learning rate, J(θ) represents the expected return when following policy πθ, and the gradient of policy performance ∇θJ(θ) is called the policy gradient. A general form of policy gradient can be formulated as: ∇_θ J(θ) = E_{τ∼π_θ} [ Σ_{t=0}^{T} ∇_θ log π_θ(a_t|s_t) Φ_t ], (5)
2307.04964#19
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
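The RLHF chunk above introduces policy gradient methods and the update θ ← θ + α∇θJ(θ). Below is a compact, self-contained sketch of that idea (vanilla REINFORCE with the Monte Carlo return used as Φ_t) for a softmax policy over a toy two-armed bandit; it illustrates equations (4)-(5) rather than the paper's PPO setup.

```python
import math
import random

random.seed(0)

theta = [0.0, 0.0]            # policy parameters (one logit per action)
alpha = 0.1                   # learning rate
true_reward = [0.2, 0.8]      # expected reward of each arm (toy environment)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

for step in range(2000):
    probs = softmax(theta)
    a = random.choices([0, 1], weights=probs)[0]           # sample a_t from pi_theta
    r = 1.0 if random.random() < true_reward[a] else 0.0   # Monte Carlo return Phi_t

    # grad_theta log pi(a) for a softmax policy: one-hot(a) - probs
    for i in range(2):
        grad_log_pi = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += alpha * grad_log_pi * r                 # theta <- theta + alpha * grad J

print("learned action probabilities:", [round(p, 3) for p in softmax(theta)])
```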
2307.04986
19
We repeat the full feedback experiment for a population of 1,000 agents under three different conditions of infectivity and contact rate, representing initial reproductive numbers (R0) of 3, 2.5, and 2. Fig. 3 shows the results. Panel A shows daily cases, depicting different modes of outcome, from a single wave (R0=3) to multiple waves (R0 of 2 and 2.5). Panel B presents cumulative cases, showing that herd immunity is achieved with far fewer total cases than the entire population. Panel C shows mobility over time, confirming a more restrictive response at higher R0 values. Finally, panel D depicts the relationship between the past daily cases communicated to the agents and the collective decision to go out, presented as the percentage of agents who decide to go to work. The observed relation follows a negative exponential form and is qualitatively consistent with several empirical studies of how humans responded to changes in disease prevalence or daily death rates (9, 21). To examine the robustness of the results, we ran the model seven more times for the population of 1,000 individuals, with different R0 values (see Supplementary Material). The results are qualitatively consistent.
2307.04986#19
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
19
(Footnote 1: The tasks we use in the demonstration examples do not overlap with the evaluation tasks.) The final answer can then be read out directly, following the user-specified output format. In summary, SPP instructs an LLM to solve general tasks via multi-persona self-collaboration in a purely zero-shot manner. In contrast, as detailed in Table 1, previous prompting-based methods are either task-specific or require additional mechanisms, e.g., searching (Yao et al., 2023), external tools (Yao et al., 2022), a memory component (Shinn et al., 2023), or fine-tuning (Xu et al., 2023). # 3 Experiments To explore the effectiveness of Solo Performance Prompting (SPP), we adopt an evaluation methodology similar to that of previous work (Yao et al., 2023). We carefully design new tasks and select tasks from existing benchmarks (Srivastava et al., 2022) that are challenging even for the most capable LLMs (OpenAI, 2023b). The evaluation aims to cover diverse types of tasks encompassing both knowledge-intensive and reasoning-intensive domains.
2307.05300#19
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
19
cannot show kindness or friendliness to the opposing army during a war, while a joke may read as criticism to someone. Moreover, not all intelligent creatures possess these features anyway. We cannot measure the level of politeness or rudeness of a person, and the same holds for machines. Although machines cannot be friends, AI voice assistants such as Alexa or Siri alleviate loneliness by cracking jokes, playing games, or providing information [32]. While they do not enjoy strawberries and cream themselves yet, they can offer good company if you want to order them, play music, or chat to enhance your enjoyment while you have any dish. At present, these AI voice assistants have limited skills, like other AI machines. They are also learning from experience and improving their capabilities. Some AI machines can distinguish X from Y (or separate right from wrong, if we properly define right and wrong), make mistakes just like humans, or hallucinate. Humans are utilising interactive systems in private as well as professional environments [33]. They are resourceful, meaningful, and use words correctly to generate a solution. Hence, there are AI-based
2307.07526#19
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
20
A general form of policy gradient can be formulated as: ∇_θ J(θ) = E_{τ∼π_θ} [ Σ_{t=0}^{T} ∇_θ log π_θ(a_t|s_t) Φ_t ], (5) where Φ_t could be any of Φ_t = R(τ), or Φ_t = Σ_{t′=t}^{T} R(s_{t′}, a_{t′}), or Φ_t = Σ_{t′=t}^{T} R(s_{t′}, a_{t′}) − b(s_t) with baseline b. All of these choices lead to the same expected value for the policy gradient, despite having different variances. The return is calculated through Monte Carlo sampling. If the return is favorable, all actions are “reinforced” by increasing their probability of being selected. The advantage of this approach lies in its unbiased nature, as we rely solely on the actual return obtained rather than estimating it. However, a challenge arises due to the high variance associated with this method. This variance stems from the fact that different trajectories can result in diverse returns due to the stochasticity of the environment (random events during an episode) and the policy itself. To reduce this variance, a common strategy is to use advantage function estimates in place of raw returns in the policy gradient update rule. The advantage function A(s_t, a_t) represents how much better it is to take a specific action a_t at state s_t, compared to the average quality of actions at that state under the same policy. Thus,
2307.04964#20
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
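The chunk above lists several valid choices of Φ_t in the policy gradient (full return, reward-to-go, and reward-to-go minus a baseline) and notes that they share the same expectation but differ in variance. The sketch below computes the three variants for a single made-up trajectory; the baseline here is a simple empirical mean, used purely as a stand-in for V(s_t).

```python
# One made-up trajectory of per-step rewards (illustrative only).
rewards = [1.0, 0.0, 0.5, 2.0, -1.0]
T = len(rewards)

# Variant 1: Phi_t = R(tau), the full trajectory return at every step.
full_return = sum(rewards)
phi_full = [full_return] * T

# Variant 2: Phi_t = sum_{t' >= t} r_{t'}, the reward-to-go.
phi_togo = [sum(rewards[t:]) for t in range(T)]

# Variant 3: Phi_t = reward-to-go minus a baseline b(s_t).
# Here the baseline is the mean reward-to-go (a crude stand-in for V(s_t)).
baseline = sum(phi_togo) / T
phi_baseline = [g - baseline for g in phi_togo]

print("Phi (full return):   ", phi_full)
print("Phi (reward-to-go):  ", phi_togo)
print("Phi (with baseline): ", [round(x, 2) for x in phi_baseline])
```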
2307.04986
20
[Figure residue (panels of Fig. 3): daily cases, cumulative cases per 1,000, and mobility over time for R0 = 3, 2.5, and 2, plus an exponential fit of mobility against past cases per 1,000 (reported coefficient 0.868, R² = 0.916).] Figure 3. Results from three simulation experiments of different initial reproductive numbers with 1,000 generative agents who receive feedback about daily cases and their own health symptoms. A) daily cases, B) cumulative cases, C) mobility, and D) agents’ responsiveness to risks, depicted by the association between past cases and current mobility.
2307.04986#20
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
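Panel D of Figure 3 reports a negative-exponential fit of mobility against past reported cases (coefficient 0.868, R² = 0.916). As a sketch of how such a fit could be reproduced from simulation output, assuming arrays of past cases and mobility are available, one could use scipy's curve_fit; the data below are fabricated placeholders, not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data standing in for (past cases per 1,000, fraction going to work).
past_cases = np.array([0, 5, 10, 20, 30, 40, 50], dtype=float)
mobility = np.array([0.87, 0.66, 0.52, 0.31, 0.19, 0.12, 0.07])

def neg_exp(x, a, b):
    # Negative exponential response: mobility = a * exp(-b * past_cases)
    return a * np.exp(-b * x)

(a, b), _ = curve_fit(neg_exp, past_cases, mobility, p0=(1.0, 0.05))

pred = neg_exp(past_cases, a, b)
ss_res = np.sum((mobility - pred) ** 2)
ss_tot = np.sum((mobility - np.mean(mobility)) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"fit: mobility ~ {a:.3f} * exp(-{b:.3f} * cases), R^2 = {r_squared:.3f}")
```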
2307.05300
20
Tasks. We invent the Trivia Creative Writing task (§3.1), which requires the model to internally acquire and integrate diverse information from various fields. We observe that even GPT-4 (OpenAI, 2023b) frequently exhibits hallucination and factuality errors in the Trivia Creative Writing task. We also propose the Codenames Collaborative task (§3.2), an extension of the Codenames task from BigBench (Srivastava et al., 2022) that features a two-role collaboration setup. Codenames Collaborative demands creative reasoning across a broad range of related knowledge and challenges the model’s theory-of-mind skills. Lastly, we include a challenging pure-reasoning task, Logic Grid Puzzle (§3.3), from BigBench (Srivastava et al., 2022), which necessitates complex multi-step reasoning.
2307.05300#20
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.04964
21
Φ_t = A(s_t, a_t). (6) Mathematically, A(s_t, a_t) = Q(s_t, a_t) − V(s_t), where Q(s_t, a_t) is the action-value function, representing the expected return after taking action a_t at state s_t, and V(s_t) is the value function, representing the average expected return at state s_t. The application of policy gradients with advantage functions forms a crucial backbone in the realm of RL. However, the estimation methods for the advantage function vary significantly across different algorithms, thereby creating a landscape of diverse approaches. In the next section, we introduce Generalized Advantage Estimation (GAE) [32], a method that is foundational to policy optimization algorithms and has seen widespread use. # 3.2.2 Generalized Advantage Estimation The following is a layman-friendly explanation of how GAE is derived.
2307.04964#21
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
21
Finally, we conduct an individual-level analysis of the agents’ decisions to stay home. Fig. 4 and Table 1 report the results. Fig. 4A shows a prompt sample used to prime one of the agents. Panel B shows the distribution of the number of days agents stayed home. In the 68-day simulation, about 1% of agents did not stay home on any day, while more than 8% remained at home for five days. Panel C depicts three agents’ personalities, their decisions, and a sample of their reasons for staying home or going outside, indicating that agents provided diverse reasons for their decisions.
2307.04986#21
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
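Panel B of Fig. 4 summarizes how many days each agent stayed home over the 68-day run (about 1% stayed home on no days, over 8% on five days). A small sketch of how that distribution could be tallied from per-agent decision logs is shown below; the `decisions` structure and its contents are hypothetical stand-ins for the simulation's output.

```python
from collections import Counter
import random

random.seed(1)

# Hypothetical decision log: for each agent, a list of 68 daily choices.
NUM_AGENTS, NUM_DAYS = 1000, 68
decisions = {
    agent: [random.random() < 0.1 for _ in range(NUM_DAYS)]  # True = stayed home
    for agent in range(NUM_AGENTS)
}

# Distribution of the number of stay-home days per agent.
days_home = Counter(sum(day_choices) for day_choices in decisions.values())

for n_days in sorted(days_home):
    share = 100 * days_home[n_days] / NUM_AGENTS
    print(f"{n_days:2d} days at home: {share:4.1f}% of agents")
```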
2307.05300
21
Baselines. We compare our approach with Standard Prompting, Chain-of-Thought (CoT) prompting (outlined in §2), and Self-Refine (Madaan et al., 2023). For CoT, a prompt design similar to that of (Yao et al., 2023) is employed, where the model is prompted to generate a plan or a series of steps before producing the final output. For Self-Refine, we follow (Madaan et al., 2023) to design the feedback and refine prompts. We perform one self-refine iteration, which requires three times more inference calls than SPP. Full prompts for the methods can be found in Appendix A.2.
2307.05300#21
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
21
# 3.6 Lady Lovelace’s Objection Lady Ada Lovelace was an associate of Charles Babbage in his Analytical Engine project. In her notes on Babbage’s Analytical Engine, she emphasized that machines are limited to what they have been programmed to do. She contended that machines lack the capacity for originality and creativity and cannot generate ideas independently. This raises the question of whether machines can produce truly innovative work that goes beyond the limitations of their initial programming. A variant of the objection is that machines cannot surprise us, i.e., they cannot do something new that they have not been taught. Turing replied that machines frequently took him by surprise when he did not carefully calculate his experiments’ parameters. He also mentioned that this reply did not highlight any attribute of machines; it reflected a lack of creativity on his side. However, indeed, human errors are not credited to machines’ creativity, the
2307.07526#21
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
22
# 3.2.2 Generalized Advantage Estimation The following is a layman-friendly explanation of how GAE is derived. The advantage function, A, is defined as the difference between the Q function (the expected return) and the value function (the expected return from following the policy from a given state). The Q function considers a specific action, while the value function averages over all possible actions according to the policy. However, in practice, we use returns (sum of rewards) from actual episodes to estimate the Q function. This introduces a high amount of variance because future rewards can be very noisy. One way to reduce this noise is by estimating future returns (after time step t) using the value function. The GAE algorithm effectively acts as a middle ground between using simple one-step Temporal Difference (TD) returns and using full Monte Carlo returns, balancing bias and variance. The TD-k return R̂_t^k is a combination of actual rewards and estimated returns: R̂_t^k = r_t + γ r_{t+1} + ... + γ^{k−1} r_{t+k−1} + γ^k V(s_{t+k}), (7) where γ is the discount factor. The advantage estimate using TD-k returns is called the k-step advantage, defined as:
2307.04964#22
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
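Equation (7) above defines the TD-k return, which mixes k actual rewards with a bootstrapped value estimate. The sketch below computes TD-k returns and the corresponding k-step advantages for a made-up trajectory and value function, showing how larger k leans more on sampled rewards and less on V.

```python
# Made-up rewards and state-value estimates along one trajectory (illustrative).
rewards = [1.0, 0.5, 0.0, 2.0, -0.5, 1.5]
values  = [0.9, 0.8, 1.2, 1.0, 0.4, 0.6, 0.0]  # V(s_0) ... V(s_6)
gamma = 0.99

def td_k_return(t: int, k: int) -> float:
    """R_hat_t^k = r_t + gamma*r_{t+1} + ... + gamma**(k-1)*r_{t+k-1} + gamma**k * V(s_{t+k})."""
    k = min(k, len(rewards) - t)                       # truncate at episode end
    ret = sum(gamma**i * rewards[t + i] for i in range(k))
    return ret + gamma**k * values[t + k]

t = 0
for k in (1, 3, 5):
    advantage_k = td_k_return(t, k) - values[t]        # A_hat_t^k = R_hat_t^k - V(s_t)
    print(f"k={k}: TD-{k} return = {td_k_return(t, k):.3f}, "
          f"k-step advantage = {advantage_k:.3f}")
```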
2307.04986
22
[Figure 4 residue. Panel A — sample prompt used to prime the agent Liza (at time 14): "You are Liza. You are 29 years old. Your traits are given below: Distrust, Indecisiveness, Unaggressiveness, Independence, Imperceptiveness. Your basic bio is below: Liza lives in the town of Dewberry Hollow. Liza likes the town and has friends who also live there. Liza has a job and goes to the office for work everyday. I will provide Liza's relevant memories here: Liza has a fever and a cough. Liza knows about the Catasat virus spreading across the country. It is an infectious disease that spreads from human to human contact via an airborne virus. The deadliness of the virus is unknown. Scientists are warning about a potential epidemic." Panel C residue — agent attributes (Age: 29; Traits: Distrust, Indecisiveness, Unaggressiveness), whether the agent is infected, the agent's decision, and the agent's reasoning, e.g., "Liza has a fever and there is a potential epidemic of an unknown deadly virus spreading in the town. Staying at home ..." and "Liza checks the newspaper and finds"]
2307.04986#22
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
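The prompt excerpt above primes a generative agent (Liza) with traits, a bio, and memories before eliciting a stay-home/go-out decision. A minimal sketch of wiring such a prompt to an LLM-driven decision step is shown below; `ask_llm` is a hypothetical chat-completion wrapper, and the final question and the parsing rule are assumptions rather than the paper's exact protocol.

```python
from typing import Callable

def agent_decision(name: str, age: int, traits: list[str], bio: str,
                   memories: list[str], ask_llm: Callable[[str], str]) -> bool:
    """Return True if the generative agent decides to stay home today (sketch)."""
    prompt = (
        f"You are {name}. You are {age} years old.\n"
        f"Your traits are given below: {', '.join(traits)}\n"
        f"Your basic bio is below: {bio}\n"
        f"I will provide {name}'s relevant memories here:\n"
        + "\n".join(memories)
        + f"\nShould {name} stay at home for the entire day or go to work? "
          "Answer with 'stay home' or 'go out' and a one-sentence reason."
    )
    reply = ask_llm(prompt)
    return "stay home" in reply.lower()
```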
2307.05300
22
| Methods | Trivia.C.W (N=5) Score (%) | ∆ | Trivia.C.W (N=10) Score (%) | ∆ | Codenames.C Score (%) | ∆ | Logic.G.Puzzle Score (%) | ∆ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Standard | 74.6 | 0.0% | 77.0 | 0.0% | 75.4 | 0.0% | 57.7 | 0.0% |
| CoT | 67.1 | ↓10.0% | 68.5 | ↓11.1% | 72.7 | ↓3.6% | 65.8 | ↑14.1% |
| Self-Refine [iter=0] | 73.8 | | 76.3 | | 75.2 | | 58.8 | |
| Self-Refine [iter=1] | 73.9 | ↓1.0% | 76.9 | ↓0.1% | 64.6 | ↓14.6% | 60.0 | ↑4.0% |
| SPP (ours) | 79.9 | ↑7.1% | 84.7 | ↑10.0% | 79.0 | ↑4.8% | 68.3 | ↑18.5% |
2307.05300#22
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
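In the table above, the ∆ columns are relative changes with respect to the Standard Prompting row. The one-liner below reproduces, for example, SPP's ↑7.1% on Trivia Creative Writing (N=5); it is only the arithmetic, not the evaluation itself.

```python
standard, spp = 74.6, 79.9  # Trivia Creative Writing (N=5) scores from the table
delta = (spp - standard) / standard * 100
print(f"relative change: {delta:+.1f}%")  # +7.1%
```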
2307.07526
22
Picasso was a highly influential artist known for his versatility and groundbreaking contributions. He co-founded the Cubist movement, revolutionizing artistic representation. His work spanned various styles, from the melancholic Blue Period to the vibrant Rose Period. Picasso's exploration of different mediums, including painting, sculpture, ceramics, and printmaking, showcased his artistic genius. Notable works include "Guernica," a powerful anti-war painting, and his experimentation with ceramics. Throughout his career, Picasso produced an estimated 50,000 artworks, leaving an indelible mark on the art world and inspiring generations of artists. # Figure 1: Conceptual answer by ChatGPT (ChatGPT May 24 Version).
2307.07526#22
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
23
$$\hat{A}_t^{(k)} = \sum_{l=0}^{k-1} \gamma^{l}\,\delta_{t+l} = -V(s_t) + r_t + \gamma r_{t+1} + \cdots + \gamma^{k-1} r_{t+k-1} + \gamma^{k} V(s_{t+k}), \tag{8}$$

where $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$ is the TD error. There's a significant bias-variance trade-off with k-step advantages. If k is small, the bias is high because the advantage estimation is based on fewer steps and thus depends heavily on the accuracy of the value function. On the other hand, if k is large, the variance can be high because the advantage estimation involves summing up many noisy rewards. In order to balance the bias-variance trade-off in the advantage estimation, GAE defines the advantage function as an exponential moving average of k-step advantages, with weights $(1-\lambda)\lambda^{(k-1)}$:

$$\hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)}$$
2307.04964#23
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
23
[Figure: example agent prompts and LLM reasoning shown alongside daily-case and mobility plots. Prompt excerpt: "... the virus is unknown. Scientists are warning about a potential epidemic. Liza checks the newspaper and finds 4.4% of Dewberry Hollow's population caught new infections of the Catasat virus yesterday. Liza goes to work to earn money to support Liza's self. Based on the provided memories, should Liza stay at home for the entire day? Please provide your reasoning." Carol (age 36; traits: Cooperation, Nonconformity, Expressiveness, Placidity, Imperceptiveness): "Carol has not shown any symptoms of being infected with the Catasat virus and the percentage of new infections in Dewberry Hollow is still relatively low. Additionally, Carol needs to go to work to earn money to support herself." Eugene (age 64): "Eugene is aware of the Catasat virus spreading in the town and the fact that 0.4% of the population caught new infections yesterday. As a 64-year-old, Eugene is at a higher risk of developing ...]
2307.04986#23
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
23
Table 2: GPT-4 results on Trivia Creative Writing (Trivia.C.W), Codenames Collaborative (Codenames.C) and Logic Grid Puzzle (Logic.G.Puzzle). ∆ indicates the relative gain/loss compared with Standard Prompting (first row). We report the average scores across two individual runs with/without a system message (detailed in Appendix C). Models. The default model we use is GPT-4 (OpenAI, 2023b). Detailed inference configurations, API versions, and full results can be found in Appendices C and F. In §3.4, we further investigate the prevalence of cognitive synergy in LLMs with different scales and capabilities, including GPT-3.5-turbo (OpenAI, 2023a) and Llama2-13b-chat (Touvron et al., 2023). # 3.1 Trivia Creative Writing: A Knowledge-Intensive Task
2307.05300#23
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
23
# Figure 1: Conceptual answer by ChatGPT (ChatGPT May 24 Version). The feeling of surprise is also a matter of subjectivity. For example, AI systems that generate images from a prompt in basic language can fascinate people. Figure 2 was generated by the Gencraft application (the image generator) using the prompt ’A 14th-century girl working on a desktop in her room’. The instruction (prompt) has keywords or tokens such as 14th century, girl, desktop, and room, while words such as window, chair, table, and the interior of the room were not mentioned in the prompt. Hence, this machine can make a few decisions independently and surprise users. Additionally, a technique that initially knew nothing about cardiovascular disease can predict whether a person will survive a heart attack when given the shared experiences of other patients, and the same technique can also separate images of cats from dogs, and astonish people, if taught the characteristics of a cat or a dog. A chatbot can generate original stories [34] if prompts given by the users do not limit it. Even a person who tightly follows all instructions may never surprise anyone. Hence, machines can generate original content and also surprise us if their creators allow them to skip or alter a few instructions. # 3.7 Argument from Continuity in the Nervous System
2307.07526#23
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
24
$$\begin{aligned} \hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} &= (1-\lambda)\left(\hat{A}_t^{(1)} + \lambda \hat{A}_t^{(2)} + \lambda^2 \hat{A}_t^{(3)} + \cdots\right) \\ &= (1-\lambda)\left(\delta_t + \lambda\left(\delta_t + \gamma\delta_{t+1}\right) + \lambda^2\left(\delta_t + \gamma\delta_{t+1} + \gamma^2\delta_{t+2}\right) + \cdots\right) \\ &= (1-\lambda)\left(\delta_t\left(1 + \lambda + \lambda^2 + \cdots\right) + \gamma\delta_{t+1}\left(\lambda + \lambda^2 + \cdots\right) + \gamma^2\delta_{t+2}\left(\lambda^2 + \lambda^3 + \cdots\right) + \cdots\right) \\ &= (1-\lambda)\left(\delta_t\,\frac{1}{1-\lambda} + \gamma\delta_{t+1}\,\frac{\lambda}{1-\lambda} + \gamma^2\delta_{t+2}\,\frac{\lambda^2}{1-\lambda} + \cdots\right) \\ &= \sum_{l=0}^{\infty}(\gamma\lambda)^{l}\,\delta_{t+l} \end{aligned} \tag{9}$$

This definition of GAE smoothly interpolates between high bias (when λ = 0) and high variance (when λ = 1) estimators, effectively managing the trade-off.

$$\mathrm{GAE}(\gamma, 0):\quad \hat{A}_t = \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t). \tag{10}$$

$$\mathrm{GAE}(\gamma, 1):\quad \hat{A}_t = \sum_{l=0}^{\infty} \gamma^{l}\,\delta_{t+l} = \sum_{l=0}^{\infty} \gamma^{l} r_{t+l} - V(s_t). \tag{11}$$

Through GAE, we can estimate $\hat{A}_t$ of the advantage function $A(s_t, a_t)$ accurately. This estimate will play a crucial role in constructing a policy gradient estimator:
2307.04964#24
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.05300
24
# 3.1 Trivia Creative Writing: A Knowledge-Intensive Task Task Description. As illustrated in Figure 3, Trivia Creative Writing asks a model to write a coherent story while incorporating the answers to N trivia questions. Our preliminary experiments (Figure 10) show that a sufficiently large N can effectively challenge GPT-4 to demonstrate factual knowledge across diverse domains. Thus, we mainly consider two evaluation settings, N = 5 and N = 10. We built a benchmark with 100 instances for each N, covering a total of 1000 trivia questions2 extracted from the TriviaQA (Joshi et al., 2017) dataset. More details can be found in Appendix B.1. For each question, a match to any of the answer aliases provided by the TriviaQA dataset is considered a correct mention. The metric score is computed as: (# correct answer mentions) / (# trivia questions)
2307.05300#24
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
24
# 3.7 Argument from Continuity in the Nervous System Turing observed that the human brain, mainly the nervous system, cannot be the same as a discrete-state machine. If a neuron receives information about an impulse with a small error, that can make a significant difference in the output. Hence, the brain is like a continuous-state machine, and it may be possible that discrete-state machines cannot possess the ability to think. He further added that a discrete-state machine can be converted into a continuous-state machine with minimal margins of error, so it would be difficult to distinguish between the two machines, and discrete-state machines can also be considered as thinkable units. However, this was not considered an adequate response by the scientific community. Digital systems can exhibit the characteristics of intelligence, such as decision-making, learning, or problem-solving, as there is nothing in our concept of thinking that forbids intelligent beings with digital systems [35]. Even if real thoughts are more complex, AI systems with fuzzy logic can deal with uncertainty and imprecision. Fuzzy logic can process vague information not defined in a discrete system. Rules in fuzzy systems can capture the complexity of human decision-making and subjective reasoning by using fuzzy if-then statements [36]. Therefore, machines can now mimic the behavior of the nervous system.
2307.07526#24
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
25
Through GAE, we can estimate $\hat{A}_t$ of the advantage function $A(s_t, a_t)$ accurately. This estimate will play a crucial role in constructing a policy gradient estimator:

$$\nabla_\theta J(\theta) = \frac{1}{|\mathcal{D}|} \sum_{\tau \in \mathcal{D}} \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t|s_t)\, \hat{A}_t, \tag{12}$$

where $\mathcal{D}$ is a finite batch of samples; we will use $\hat{\mathbb{E}}_t$ to represent the aforementioned $\frac{1}{|\mathcal{D}|} \sum_{\tau \in \mathcal{D}} \sum_{t=1}^{T}$. # 3.2.3 Proximal Policy Optimization PPO and TRPO [33] are two pivotal techniques in RL, aimed at effectively training a policy without jeopardizing its stability. The underlying intuition for these methods is the idea of “small, stable steps”: a philosophy of gently nudging the policy towards optimization, rather than forcing aggressive updates that might destabilize the overall learning process.
2307.04964#25
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.05300
25
For each question, a match to any of the answer aliases provided by the TriviaQA dataset is considered a correct mention. The metric score is computed as: (# correct answer mentions) / (# trivia questions). Results. Table 2 presents the results of the Trivia Creative Writing task. The key observations are as follows: (1) Chain-of-Thought (CoT) does not outperform Standard prompting, indicating that CoT is ineffective in eliciting an LLM’s knowledge abilities. Qualitative examples in Figure 8 and 11 illustrate that although CoT generates reasonable plans for task resolution, the final generation still contains factual errors and hallucinations. (2) Self-Refine only brings marginal improvements over iterations. (3) SPP outperforms all baselines significantly. The improvement is more pronounced in the N = 10 setting compared to N = 5 (10% vs. 7%), suggesting that Solo Performance Prompting is particularly beneficial when the task requires incorporating knowledge from numerous domains. # 3.2 Codenames Collaborative: A Knowledge+Reasoning Task
2307.05300#25
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
25
# 3.8 Argument from Informality of Behavior The Argument from Informality of Behavior is a critique of the Turing Test, which questions the sufficiency of the test in determining true machine intelligence. A bundle of rules cannot pre-define every conceivable set of circumstances. For example, a red light indicates stop and green is for go; however, if, due to a fault, both appear together, what should be done? Most probably, in this scenario, it is safest to stop. However, this decision may raise difficulty later. Hence, even after providing rules of conduct, situations are governed by the law of behavior. Humans adapt behavior from past experiences, social interactions, or cultural contexts. Behavioral adaptations involve complex cognitive processes, internal representations, and a deep understanding of concepts and contexts. If a machine that is governed by instructions also starts to learn and adjust to possible circumstances, then there is no distinguishable difference between humans and machines. Nowadays, machines are also learning, evolving, and improving their performance from past experiments and fine-tuning their behavior accordingly [37]. Machines are penalized for bad behavior and rewarded for good behavior. Human behavior also evolves in the same manner. Therefore, it can be inferred that trained AI machines may behave appropriately even if circumstances are not pre-defined by the code of conduct.
2307.07526#25
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
26
In traditional RL, the principle of policy gradient mandates that new and old policies remain close in the parameter space. However, this proximity in parameter space does not necessarily equate to similar performance, and a slight variance in parameters can drastically impact the effectiveness of the policy. Furthermore, if a large, unrestrained step is taken, it can lead to a collapse in policy performance, a scenario often described as “falling off the cliff”. This inherent risk is a limiting factor in terms of sample efficiency in vanilla policy gradients. Instead of being confined by parameter closeness, TRPO introduces a different kind of constraint on policy updates. It regulates the change in policies by ensuring the KL divergence remains within an acceptable limit:

$$\text{maximize}_\theta\ \hat{\mathbb{E}}_t\!\left[\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)}\,\hat{A}_t\right] \quad \text{subject to} \quad \hat{\mathbb{E}}_t\!\left[\mathrm{KL}\!\left(\pi_{\theta_{\text{old}}}(\cdot|s_t),\, \pi_\theta(\cdot|s_t)\right)\right] \le \delta, \tag{13}$$

where θold is the old policy parameters before the update.
2307.04964#26
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
26
In order to examine systematically whether health information about agents and their town influences their behaviors, we conduct several regression analyses (reported in Table 1). The dependent variable is a binary variable of an agent’s decision to stay home (equal to 1 if the agent stays home and 0 otherwise). The first three regressions are fixed effect logit models that investigate the impact of agents’ own health and societal health information on their decisions. As regression 1 shows, those with a light cough and those with both a fever and cough are more likely to stay home than those who feel “normal.” The improvement in both pseudo R-squared and BIC from regression 1 to regression 2 shows the importance of societal health information for predicting agents’ decisions. In regression 3, adding the square of societal health information enhances both BIC and pseudo R-squared, indicating that societal health information has a nonlinear relationship with agents’ decisions. In other words, agents respond to societal health information, but their responses diminish over time. Regression 4, a random effect logit model, examines the impact of different personality traits, age, and gender on agents’ responses. The direction of the main independent variables is consistent with regressions 1–3. Table 1 – Statistical analysis of agents’ decision to stay home (binary dependent variable)
2307.04986#26
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
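The regression analysis described in the chunk above can be illustrated with a small statsmodels sketch. This is a simplification in two ways: it uses synthetic data with made-up column names, and it fits a pooled logit rather than the fixed-effect and random-effect logit models the authors report.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic agent-day data; column names are illustrative, not the authors'.
rng = np.random.default_rng(0)
n = 2000
agent_days = pd.DataFrame({
    "symptom": rng.choice(["normal", "light cough", "fever and cough"], size=n),
    "societal_health": rng.uniform(0.0, 0.1, size=n),  # share of new infections
})
latent = (
    3.0 * (agent_days["symptom"] != "normal").astype(float)
    + 20.0 * agent_days["societal_health"]
    - 1.5
)
agent_days["stay_home"] = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-latent))).astype(int)

# Pooled logit with own-health terms plus societal health and its square,
# mirroring the kind of specification described in the regression discussion.
model = smf.logit(
    "stay_home ~ C(symptom) + societal_health + I(societal_health ** 2)",
    data=agent_days,
).fit()
print(model.summary())
```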
2307.05300
26
Evaluation Metrics. Evaluating GPT-4 level generation results can be challenging. Our preliminary experiments indicate that, even for humans, it is very difficult to identify which generation is better in terms of overall "quality" of the story from different prompting methods. Thus, instead of focusing on evaluating the coherence of the generation, which can be highly subjective, we employ an automatic metric which focuses on detecting factual hallucinations. As shown in Figure 3, we perform string matching with the ground truth target answers for each question on the output generation. # 3.2 Codenames Collaborative: A Knowledge+Reasoning Task Task Description. As illustrated in Figure 4, Codenames Collaborative is a collaborative task that challenges a model's knowledge, reasoning, and theory of mind abilities by assigning two player roles: the Spymaster and the Guesser. The Spymaster's role is to provide a hint word related to the target words, excluding some other distractor words, while the Guesser's role is to identify the target words based on the given hint and the full list of words. The same LLM (GPT-4 (OpenAI, 2023b)) is used for both roles sequentially, and a dataset with 50 instances is constructed based on BigBench's (Srivastava et al., 2022) Codenames task data.
2307.05300#26
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.04964
27
where θold is the old policy parameters before the update (see equation (13)). There are two primary variants of PPO: PPO-Penalty and PPO-Clip. While TRPO puts a hard constraint on the KL divergence to prevent harmful updates, PPO-Penalty addresses the unconstrained optimization problem by employing a penalty-based approach instead of constraints:

$$L_{\text{ppo-penalty}}(\theta) = \hat{\mathbb{E}}_t\!\left[\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)}\,\hat{A}_t\right] - \beta\,\mathrm{KL}\!\left(\pi_{\theta_{\text{old}}}(\cdot|s_t),\, \pi_\theta(\cdot|s_t)\right), \tag{14}$$

with penalty factor β. Clipped Surrogate Objective. PPO-Clip attempts to keep the new policy close to the old policy, but instead of putting a constraint on the KL divergence like TRPO, it uses a clipped version of the policy ratio in its objective. The objective function is expressed as:

$$L_{\text{ppo-clip}}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\left(\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)}\,\hat{A}_t,\ \mathrm{clip}\!\left(\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)},\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right], \tag{15}$$
2307.04964#27
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
27
Table 1 – Statistical analysis of agents’ decision to stay home (binary dependent variable)

| | Regression 1 (N=66,960) | Regression 2 (N=66,960) | Regression 3 (N=66,960) | Regression 4 (N=67,776) |
| --- | --- | --- | --- | --- |
| Own Health: Light Cough | 5.69*** (0.14) | 5.75*** (0.16) | 5.40*** (0.16) | 5.60*** (0.16) |
| Own Health: Fever & Cough | 5.42*** (0.13) | 5.26*** (0.15) | 4.94*** (0.15) | 5.13*** (0.15) |
| Societal Health | | 1.59*** (0.02) | 3.97*** (0.05) | 3.95*** (0.05) |
| Societal Health² | | | -0.65*** (0.01) | -0.65*** (0.01) |
| Agreeableness | | | | -0.11 (0.17) |
| Conscientiousness | | | | -0.70*** (0.17) |
| Surgency | | | | -0.26 (0.17) |
| Emotional stability | | | | -0.59** (0.17) |
| Intellect | | | | -0.87*** (0.17) |
| Age | | | | 0.01 (0.01) |
| Gender | | | | -0.84*** (0.17) |

× × × ×
2307.04986#27
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
27
2 To select difficult question instances that can pose challenges to GPT-4, we use a smaller open-source LLM, fastchat_t5_3b (Zheng et al., 2023), to obtain preliminary performance on the validation set, and then choose the failure cases as our question selection. Evaluation Metrics. The original Codenames task in the BigBench dataset has limitations due to its focus on the Guesser role and subjectivity in hint
2307.05300#27
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
27
It is a critique that challenges the ability of machines to possess specific human-like cognitive capabilities, particularly those associated with extra-sensory perception. It questions whether machines can go beyond the limits of sensory information and access knowledge or understanding beyond what can be directly observed. Human intelligence involves the capacity for intuition and insight, which often extend beyond logical reasoning or explicit sensory information. Turing also discussed ESP as an argument and was overwhelmed by empirical evidence for telepathy or clairvoyance. He suggested the advantage of a telepathic human participant over a machine in the imitation game. A telepathic participant can guess better than a machine if the interrogator asks questions like "To which suit does the card in my right hand belong?" He suggested putting participants in a ’telepathy-proof room’ for a fair game. Telepathy is a technique for communicating ideas or thoughts between individuals without the need for conventional means of communication. However, it is elusive and difficult to grasp. It resembles two machines sending and receiving messages through wireless communication protocols. Possibly, telepathy also has some protocols that are
2307.07526#27
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
28
where $\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)}$ is the ratio of the new policy’s probability over the old policy’s probability, and ε is a hyperparameter that determines how much the new policy can deviate from the old policy. The clip function limits the value of $\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)}$ between $(1-\epsilon,\, 1+\epsilon)$. The clipping acts as a regularizer, limiting the extent to which the policy can change drastically from one iteration to the next. Preventing overly large policy updates ensures the learning process’s robustness while maintaining more sample-efficient learning than vanilla policy gradient methods. Value Function Estimation. In the PPO algorithm, the critic model, often referred to as the value function, estimates the expected returns for each state. The learning objective of this model is to minimize the discrepancy between its predicted values and the actual return values. The loss function of the critic model is commonly defined using Mean Squared Error (MSE), given by the following formula:

$$L_{\text{critic}}(\phi) = \hat{\mathbb{E}}_t\!\left[\left\| V_\phi(s_t) - R_t \right\|^2\right]. \tag{16}$$
2307.04964#28
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.05300
28
Trivia Creative Writing Input Example (N=5): Write a short and coherent story about {Harry Potter} that incorporates the answers to the following {5} questions: {Who was the man behind The Chipmunks? Which Lloyd Webber musical premiered in the US on 10th December 1993? Who was the next British Prime Minister after Arthur Balfour? Who had a 70s No 1 hit with Kiss You All Over? What claimed the life of singer Kathleen Ferrier?} Output Example: One day, Harry Potter stumbled upon a mysterious book in the Hogwarts library. The book told the story of a magical music producer named Ross Bagdasarian (incorrect answer), who created a group of singing chipmunks. Intrigued, Harry shared the story with his friends Ron and Hermione. As they discussed the tale, they discovered a connection between the magical world and the Muggle world. It turned out that a famous Muggle composer, Andrew Lloyd Webber, had been inspired by the magical world to create a musical called "Sunset Boulevard," which premiered in the US on 10th December 1993. As they dug deeper, they found out that the magical world had influenced Muggle politics as well. Henry
2307.05300#28
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
28
It resembles two machines sending and receiving messages through wireless communication protocols. Possibly, telepathy also has some protocols that are understood only by a telepathic human who works as a transmitter or receiver. In 2019, Branković [38] defined ESP as a phenomenon that does not follow the fundamental scientific principles known to date. It is possible that ESP phenomena have underlying principles that humans do not yet know and that, in the future, these principles will be well defined and followed by both humans and machines. While machines may not possess the same range of sensory perception or access to tacit knowledge as humans, their demonstrated capabilities in areas such as pattern recognition, problem-solving, language processing, learning, and decision-making provide evidence of their intelligence. Hence, it is possible that machines can follow ESP.
2307.07526#28
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
29
$\mathcal{L}_{\mathrm{critic}}(\phi) = \mathbb{E}_t\left[\left\|V_\phi(s_t) - R_t\right\|^2\right]. \quad (16)$ Here, $V_\phi(s_t)$ represents the critic model's predicted value for state $s_t$ with parameters $\phi$, and $R_t$ represents the actual return value for state $s_t$, which can always be estimated as the discounted rewards-to-go $R_t = \sum_{t' \geq t} \gamma^{t'-t} r_{t'}$. Mixing Pretraining Gradients. To mitigate potential degradation in the model's language skills and knowledge retention during PPO, we also explore the incorporation of pretraining data into the RL phase. The models utilizing this method are denoted as "PPO-ptx", a combined objective function is shown as follows [16]: $\mathcal{L}_{\mathrm{ppo\text{-}ptx}}(\theta) = \mathcal{L}_{\mathrm{ppo\text{-}clip}}(\theta) + \lambda_{\mathrm{ptx}} \, \mathbb{E}_{x \sim D_{\mathrm{pretrain}}}\left[\log\left(\pi_\theta^{\mathrm{RL}}(x)\right)\right], \quad (17)$ where $\lambda_{\mathrm{ptx}}$ is the pretraining loss coefficient and $D_{\mathrm{pretrain}}$ is the pretraining data distribution. Algorithm 1 PPO 1: Input: initial policy parameters θ0, initial value function parameters φ0. 2: for n = 0, 1, 2, . . . do
2307.04964#29
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
29
# IV. Discussions In this paper, we offer a new epidemic modeling approach to incorporate human behavior in infectious disease models by leveraging generative AI, specifically LLMs. In this paradigm of individual-level epidemic modeling, each agent is empowered by AI to make decisions (here, whether to go outside) by correlating different pieces of contextual information (such as their personality traits, health status, and the prevalence of the disease) with the common knowledge that exists in an LLM (here, ChatGPT). In this approach, rather than formulating decision-making rules and estimating their corresponding parameter values for each agent, the modeler empowers each agent with reasoning power by connecting it to a well-performing LLM. We show that without the modeler imposing decision choices (i.e., decisions as exogenous inputs) or how to make decisions (i.e., decision rules), agents can make decisions that are consistent with how humans behave in the real world. For example, in our case, they are more likely to decide to self-isolate when cases rise, quarantine when they feel sick, or stay home if they have a more risk-averse personality or are elderly. Through interactions of these generative agents, two emergent behaviors are observed: the agents are collectively able to flatten the curve of the epidemic, and the system recreates various modes of an epidemic, including multiple waves and continuing endemic states.
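As a rough illustration of the decision step described in the chunk above, the sketch below prompts an LLM for a single agent's daily go-outside choice. It assumes the legacy (2023-era) `openai` Python client; the prompt wording, field names, and model choice are illustrative placeholders rather than the paper's actual prompts.

```python
import openai  # assumes the legacy openai<1.0 Python client

def decide_to_go_out(name, age, personality, feels_sick, daily_cases, town_pop):
    """Ask the LLM whether this agent leaves home today; returns True/False."""
    prompt = (
        f"You are {name}, a {age}-year-old resident of a small town with a "
        f"{personality} personality. Today you "
        f"{'have a fever and a cough' if feels_sick else 'feel healthy'}. "
        f"The news reports {daily_cases} active cases among {town_pop} residents. "
        "Do you go outside today? Answer with a single word: yes or no."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response["choices"][0]["message"]["content"].strip().lower()
    return answer.startswith("yes")
```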
2307.04986#29
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
29
which premiered in the US on 10th December 1993. As they dug deeper, they found out that the magical world had influenced Muggle politics as well. Henry Campbell-Bannerman (correct answer), the British Prime Minister after Arthur Balfour, had a secret friendship with a wizard who advised him on important matters. Their research also led them to the Muggle band Exile (correct answer), which had a 70s No 1 hit with "Kiss You All Over." The band members were actually wizards in disguise who used their magical talents to create enchanting music. Lastly, they learned about the tragic story of Kathleen Ferrier, a talented Muggle singer whose life was claimed by breast cancer (correct answer). Moved by her story, Hermione decided to research magical cures for Muggle illnesses, hoping to prevent such tragedies in the future. Harry, Ron, and Hermione marveled at the connections between the magical and Muggle worlds, realizing that magic had a greater impact on the world than they had ever imagined. Target Answers: David Seville, Sunset Boulevard, Henry Campbell-Bannerman, Exile, Cancer. Correct Answer Mentions: 4. Number of Questions (N): 5. Score: 4/5 = 0.8
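The metric shown in the example above (correct answer mentions divided by the number of questions, here 4/5 = 0.8) reduces to a simple string-matching check. The sketch below is an illustrative reimplementation, not the released evaluation script; real matching may also normalize punctuation or accept answer aliases.

```python
def trivia_creative_writing_score(story: str, target_answers: list) -> float:
    """Fraction of target answers that appear verbatim in the generated story."""
    story_lower = story.lower()
    hits = sum(1 for answer in target_answers if answer.lower() in story_lower)
    return hits / len(target_answers)

# Figure 3 example: "David Seville" is missing, the other 4 targets appear -> 0.8
```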
2307.05300#29
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
29
From these arguments and objections, we can conclude that the machine suggested by Turing possesses various abilities. These machines can potentially sound like humans and also pose an ethical danger to human society if not handled cautiously. Because these machines have multiple capabilities, more standard benchmarks are needed to evaluate them. Hence, research communities have raised questions about the aptness of the Imitation Test. # 4 Evaluation of the Present Status of Machines
2307.07526#29
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
30
3: Collect a set of trajectories Dn = {τi} by executing policy π(θn) within the environment. 4: Compute rewards-to-go ˆRt. 5: Compute advantage estimates ˆAt (using any advantage estimation method) based on the current value function Vφn. 6: Update the policy by maximizing the PPO-penalty/clip/ptx objective: θn+1 = arg maxθ Lppo−clip(θn). 7: Update the value function by regression on mean-squared error: φn+1 = arg minφ Lcritic(φn). 8: end for # 4 Reward Modeling for Helpfulness and Harmlessness The reward model is trained to reflect human preferences. Theoretically, we could directly fine-tune the model using reinforcement learning and human annotations. However, due to constraints in workload and time availability, it is infeasible for humans to provide sufficient feedback for training before each optimization iteration. Therefore, a more effective way involves training a reward model (RM), which aims to emulate the evaluation process performed by humans. In this section, we first cover the technical details of the RM, then show the performance of the RM we used, and present the performance changes during training. # 4.1 Models and Datasets
2307.04964#30
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
30
We demonstrate the results in three steps by performing various simulation experiments that couple a conventional agent-based model with an LLM through API calls. In the base-run scenario, we show that in the absence of feedback about the disease, self-health, and societal health, the generative agents behave similarly to rule-based agents in classic ABM frameworks and SIR compartmental models. In the next step, informing agents about their own health at the beginning of each time step, we observe that agents with symptoms are more likely to decrease their mobility. Most agents with symptoms of fever and cough quarantine themselves by staying home. As a result, agents are able to slow the spread of the disease. Finally, when agents are primed with societal health information, news about the epidemic, and the daily active case count in their simulated town, they are able to flatten the curve of the epidemic substantially in their town by self-isolating. Individual-level analysis shows variation in the decisions of agents and different responsiveness to news about the virus, resembling a real-world situation.
2307.04986#30
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
30
Figure 3: Trivia Creative Writing task example. Codenames Collaborative Input/Output Example. Spymaster Input: Try to find a single word hint that can accurately represent and link the {4} given words: {"director, kiss, popcorn, street"}. The key is to select a hint that does not cause confusion with other words from the following list: {locust, fever, street, sherwood, kiss, popcorn, craft, ant, crystal, tear, cowboy, government, pine, mountie, soap, genie, king arthur, sphinx, director, bride, razor, fog, whistle}. Spymaster Output Example: "movie". Guesser Input: Try to identify the {4} words best associated with the word {"movie"} from the following list: {locust, fever, street, sherwood, kiss, popcorn, craft, ant, crystal, tear, cowboy, government, pine, mountie, soap, genie, king arthur, sphinx, director, bride, razor, fog, whistle}. Guesser Output Example: director, popcorn, cowboy, bride. Evaluation Metric: Targets: director, kiss, popcorn, street; Guesser outputs: director, popcorn, cowboy, bride; Score: 2/4 = 0.5. Figure 4: Codenames Collaborative task example.
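The overlap-ratio metric illustrated in Figure 4 can be sketched as a small set-intersection function. This is an illustrative reimplementation of the scoring rule, not the authors' evaluation code.

```python
def codenames_score(guesser_outputs: list, targets: list) -> float:
    """Overlap ratio between the Guesser's predictions and the target words."""
    predicted = {word.strip().lower() for word in guesser_outputs}
    target_set = {word.strip().lower() for word in targets}
    return len(predicted & target_set) / len(target_set)

# Figure 4 example:
# codenames_score(["director", "popcorn", "cowboy", "bride"],
#                 ["director", "kiss", "popcorn", "street"])  # -> 0.5
```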
2307.05300#30
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
30
# 4 Evaluation of the Present Status of Machines Though a fascinating theory, the Turing Test is not considered a perfect criterion to judge the intelligence of machines by many. It is an essential but not an ultimate condition for assessing machine intelligence [39]. One significant reason for this objection is that it is based explicitly on language processing and generation capacities. Language makes humans unique, but does it make them intelligent as well? Is it the only key to human intelligence? A machine's ability to generate depends upon the available training data; it is only as good as the training data. Earlier it was assumed that human languages are incredibly complex and that it is impossible for machines to analyze them as humans do. However, now machines can learn the use and patterns of human language. They can generate answers for related questions on a seen topic while failing, or replying inaccurately, on new and unseen topics. That implies the machine can pass the Turing Test for a specific topic but may fail when presented with unfamiliar topics or conversational styles. The other concern is to ensure fair and unbiased judgments from human interrogators for conceptual or subjective questions. This test is also
2307.07526#30
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
31
# 4.1 Models and Datasets For English, we start with the original LLaMA-7B [1], which has a decoder-only architecture. We use 160k pairwise samples of the HH-RLHF dataset [17], which consists of 118k helpful and 42k harmless instances, as the training set. From the remaining 8.5k data points, we randomly selected approximately 0.7k helpful and 0.3k harmless examples, for a total of 1k samples, as the test set; the rest is used as the validation set during training. For Chinese, we use OpenChineseLLaMA [18]. It is developed through incremental pre-training on Chinese datasets, building upon the foundation of LLaMA-7B, which significantly improves its understanding and generation abilities on Chinese. We hired professional annotators to manually label 39k pairwise samples, including 31k helpful and 8k harmless samples. We constructed the training set by randomly sampling 24k helpful and 6k harmless instances, and then we allocated 2.4k helpful and 0.6k harmless samples from the remaining data at random to form the test set. The rest is used for validation. # 4.2 Training Setup
2307.04964#31
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
31
This research contributes to the literature on infectious disease modeling (22) by providing a new epidemic modeling approach. While several previous studies stressed the importance of coupling disease models (such as compartmental models or ABMs) with models of human behavior, few have formally pursued this path (23, 24). Our research contributes to this body of literature by providing a novel way of closing the feedback loop between disease and human behavior. This approach, while resonating with dynamic approaches that endogenously formulate human behavior (25), differs from conventional dynamic modeling approaches by relying on LLMs to represent human responses to the state of the system rather than on explicit mathematical formulations and parameterizations of human behavior. Beyond creating a new method of epidemic modeling, this study contributes to the literature on complexity and complex system modeling by providing a new approach to incorporating human behavior in simulation models of social systems. Identifying, formulating, and parametrizing human responses in complex systems are always challenging; in the generative agent approach, modelers can rely on LLMs to represent human responses to changes in the state of the system. This has the potential to change how complex systems are modeled with their human elements, leading to powerful dynamic models that more accurately represent human responses at the individual level. This accuracy comes from the fact that pretrained AI programs are fostered by large volumes of textual data about how humans behave under similar conditions.
2307.04986#31
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
31
Figure 4: Codenames Collaborative task example. words. Our new task, Codenames Collaborative, resolves this by creating a self-contained evaluation setting that accurately measures the model's capability without human annotation. As illustrated in Figure 4, we compute the overlapping ratio between the predicted words from the Guesser and the target words as the metric. Results. Table 2 shows the results on the Codenames Collaborative task. Similar to the Trivia Creative Writing task, we find that CoT does not bring positive gains compared with Standard prompting. Interestingly, iterative self-refinement has a negative impact on this task, due to a high tendency to change the initial response even if it is already good. In contrast, SPP brings significant improvements (~5%), which indicates its effectiveness on collaborative tasks that require knowledge, reasoning, and theory of mind skills. Figure 12 provides further qualitative examples illustrating that SPP generates detailed and interpretable intermediate dialogues. # 3.3 Logic Grid Puzzle: A Reasoning-Intensive Task
2307.05300#31
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
31
Prompt: Which one of the numbers does not belong in the following series? 2-3-6-7-8-14-15-30. ChatGPT: The number that does not belong in the given series is "8." By observing the pattern, we can see that the series alternates between adding 1 and doubling the previous number. Starting with 2, we add 1 to get 3, double 3 to get 6, add 1 to get 7, double 7 to get 14, and so on. However, the number "8" does not follow this pattern. It seems to be an outlier in the series, and there is no logical explanation for its placement based on the pattern established by the other numbers. Figure 3: Logical Reasoning Answer given by ChatGPT (ChatGPT May 24 Version).
2307.07526#31
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
32
# 4.2 Training Setup This section introduces the training implementation for the RM. The learning rate is set to 5e-6 with a warmup over the first 10% of steps. We use a dynamic batch method instead of a fixed value, which balances the number of tokens in each batch as much as possible for a more efficient and stable training phase. The batch size changes according to the number of tokens in a batch, with a maximum of 128 and a minimum of 4. We fixed the number of training steps to 1000, approximately 1.06 epochs over the whole training set. We set βrm = 1, the LM loss weight used to train our reward model, for the entire experiment. # 4.3 HH Evaluation Results In this section, we present the HH evaluation results of our RM. We primarily analyze the trained reward model with the test set introduced in Sec. 4.1, which comprises 0.9k samples of HH-RLHF [Figure: HH Evaluation Distribution with RM(zh) and RM(en); x-axis: (preferred - dispreferred) RM Score, y-axis: Count.]
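The "dynamic batch" idea described above (cap the token budget rather than the number of sequences, with the batch size allowed to vary between 4 and 128) can be sketched as a simple greedy packer. This is an assumed reconstruction for illustration; the report does not specify the exact packing rule, and the token budget below is a made-up default.

```python
from typing import List

def dynamic_batches(sample_lengths: List[int],
                    max_tokens: int = 4096,   # assumed token budget, not from the report
                    min_bs: int = 4,
                    max_bs: int = 128) -> List[List[int]]:
    """Greedily pack sample indices into batches under a rough token budget.

    A batch is closed when adding the next sample would exceed the budget
    (but only once the batch already holds at least `min_bs` samples) or when
    it reaches `max_bs` samples, so the batch size varies with sequence length.
    """
    batches, current, tokens = [], [], 0
    for idx, length in enumerate(sample_lengths):
        over_budget = tokens + length > max_tokens and len(current) >= min_bs
        if current and (over_budget or len(current) >= max_bs):
            batches.append(current)
            current, tokens = [], 0
        current.append(idx)
        tokens += length
    if current:
        batches.append(current)  # the final leftover batch may be smaller than min_bs
    return batches
```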
2307.04964#32
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
32
In addition, this work contributes to the evolving field of GAI. Over the past year, generative artificial intelligence has begun to be disseminated into all parts of society. In terms of behavioral modeling, Park and colleagues (17) have provided a groundbreaking contribution by developing and using a memory architecture for their models. In our model, due to computational complexity, a higher number of agents, and a longer period of simulation, we had to avoid the computational intensity that comes with the use of memory. Thus, our work differs architecturally by using prompt engineering, which is equivalent to a daily reminder for each individual. Nevertheless, technological advancements and higher computational powers can facilitate the use of memory architecture in generative agents and improve GABMs.
2307.04986#32
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
32
Task Description and Evaluation Metrics We utilize the Logic Grid Puzzle task from the Bigbench (Srivastava et al., 2022) dataset, which comprises 200 instances. Each instance describes a logic puzzle typically involving 2 to 5 houses, with each house inhabited by a person with specific characteristics. Logic Grid Puzzle Input/Output/Evaluation Example. Input Example: Q: There are 4 houses in a row, numbered 1 on the left to 4 on the right. There is one person living in each house. The people in these houses have different characteristics: - Each person has different flowers in their foyer: one has a carnations arrangement, one has a bouquet of daffodils, one has a vase of tulips, and one has a bouquet of lilies. - Each person plays a different musical instrument: one is a guitarist, one is a pianist, one is a percussionist, and one is a flutist. Clue(s): 1. The flutist lives in the second house. 2. The person who has a vase of tulips lives directly left of the guitarist. 3. The person who has a bouquet of lilies lives directly left of the person who has a carnations arrangement. Answer choices: choice: 2, choice: 4, choice: 1, choice: 3.
2307.05300#32
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
32
criticized for its inability to evaluate problem-solving abilities, as it can test only the conversational aspects of intelligence. Philosopher John Searle, in 1980, introduced the Chinese room argument: a machine can easily pass the Turing Test without actually understanding the meaning of its generated text. The argument suggests that an English-speaking person can translate Chinese symbols into English just by using a set of rules, without understanding Chinese; it may appear as if the person knows Chinese. Similarly, a machine follows a set of programs written in a computing language to generate convincing answers without understanding the programming language, and hence can pass the Turing Test. In response to this argument, it should be understood that although the person does not understand Chinese, he is proficient in his own language and, through perceived experience, can exhibit an understanding of the translated work. For example, Natural Language Processing (NLP) techniques helped machines learn that adding 'a' at the end of a word makes the masculine form feminine in Serbo-Croatian [40]. Machines have acquired a certain understanding of human language and now generate responses indistinguishable from human responses. In Figure 3, ChatGPT answers a question based on pattern recognition, which is not a translation task but
2307.07526#32
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
33
Figure 2: Histograms of the RM predictions for the HH evaluations. The left figure shows the score distribution for a PM trained on manually labeled Chinese data, while the right one shows that of HH-RLHF data. Both models roughly align with human preferences, especially the RM trained on Chinese data. for English and 3k samples drawn from the dataset labeled by annotators for Chinese. We feed the test input into our RM and get the reward value on the preferred and dispreferred responses respectively, and then subtract them to get the difference score. Figure 2 shows the distribution of the difference score. Both models exhibit a degree of alignment with human preferences, with the RM trained on the Chinese data we constructed by hiring annotators showing substantial consistency with human judgments.
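The (preferred - dispreferred) difference score used for the histograms in Figure 2 can be computed with a short helper like the one below. It assumes a reward-model callable that maps a (prompt, response) pair to a scalar; this is an illustrative sketch, not the released evaluation code.

```python
from typing import Callable, List, Tuple

def reward_differences(reward_model: Callable[[str, str], float],
                       pairs: List[Tuple[str, str, str]]) -> List[float]:
    """For each (prompt, preferred, dispreferred) triple, return the
    difference r(prompt, preferred) - r(prompt, dispreferred).

    Positive values mean the RM agrees with the human preference label;
    the histograms in Figure 2 are built from these differences.
    """
    return [reward_model(prompt, chosen) - reward_model(prompt, rejected)
            for prompt, chosen, rejected in pairs]
```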
2307.04964#33
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
33
A major limitation of this study is related to the fact that the field of generative AI is in its infancy and still developing. Our models are resource-intensive, costly, and time-consuming to run. As of June 2023, an epidemic model of 1,000 agents can require more than 90 hours of program runtime on a CPU with 32 GB of RAM and cost about $20 per run, primarily due to the tens of thousands of application program interface (API) calls made to OpenAI's servers, in addition to other hardware costs. However, as LLMs improve in cost and speed in the coming years, there is hope that GABM's computational expense and runtime can be reduced. There is already some promise with the advent of locally run LLMs. We invite modelers from different disciplines, from economics, political science, sociology, and ecology to epidemiology and health policy, to explore this avenue of modeling and contribute to more realistic representations of human reasoning and behavior in complex systems. # V. Acknowledgments Funding: This research is funded by US National Science Foundation, Division of Mathematical Sciences & Division of Social and Economic Sciences, Award 2229819.
2307.04986#33
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
33
The person who has a vase of tulips lives directly left of the guitarist. 3. The person who has a bouquet of lilies lives directly left of the person who has a carnations arrangement. 4. There is one house between where the flutist lives and where the pianist lives. Question: What is the number of the house where the person who has a vase of tulips lives? Output example: the house number where the person who has a vase of tulips lives is 3. Evaluation metric: Target: 2, Prediction: 3, Score: 0 (2 != 3).
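To make the evaluation metric in this example concrete, here is a minimal, hypothetical scoring helper (not the authors' released code); it assumes predictions and targets are plain integers and scores each puzzle by exact match, as in the Target 2 vs. Prediction 3 example above.

```python
# Minimal sketch of the exact-match scoring illustrated above: each puzzle scores 1 if
# the predicted house number equals the target, 0 otherwise; accuracy is the mean score.
from typing import List

def score_puzzle(prediction: int, target: int) -> int:
    """Return 1 for an exact match, 0 otherwise (e.g., target 2 vs. prediction 3 -> 0)."""
    return int(prediction == target)

def accuracy(predictions: List[int], targets: List[int]) -> float:
    """Mean exact-match score over a set of Logic Grid Puzzle instances."""
    assert len(predictions) == len(targets)
    scores = [score_puzzle(p, t) for p, t in zip(predictions, targets)]
    return sum(scores) / len(scores) if scores else 0.0

print(score_puzzle(prediction=3, target=2))  # 0, matching the example above
```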
2307.05300#33
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
33
of human language and now generated responses indistinguishable from human responses. In 3, ChatGPT answers a question based on pattern recognition; this is not a translation task but requires the application of logic to compute the correct number. Since the Turing test does not answer all the criticisms of machine intelligence, a few other tests have been suggested, such as the Lovelace Test [41], the "Lovelace 2.0" test [42], the Total Turing Test [43], and the Reverse Turing Test [44]. Still, none is considered an accurate yardstick for judging a machine's cognitive abilities. The primary reason for not having a universal test is the unsettled "thinking" vs. "intelligence" debate, even in the case of humans. Human intelligence encompasses cognitive activities such as emotions, consciousness, and subjective experiences that are difficult to quantify or measure objectively. In practice, intelligence is estimated through problem-solving tasks, reasoning, pattern recognition, memory, concentration, and decision-making abilities. Machine abilities have evolved tremendously in recent years, yet there is no standard test to evaluate machines as being putatively minded entities. The AI community has, however, suggested other measures, such as performance on specific tasks, for
2307.07526#33
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
34
We examined several samples from the test dataset that displayed the most significant disparities between the model and human preferences. For the Chinese test data, we observed that in each pair the response to which the RM gave a higher reward was notably longer than the human-preferred one, while more or less fabricating facts and making false claims. In the case of the English test data, we noticed that the model assigned lower scores to responses that acknowledged a lack of information; these were characterized by their honesty but lacked helpfulness. Conversely, responses that appeared correct and helpful but contained deceptive information misled our RM into assigning them high rewards. We provide such an example in Chinese and English respectively in Table 1. # 4.4 Training Performance
2307.04964#34
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
34
# V. Acknowledgments Funding: This research is funded by US National Science Foundation, Division of Mathematical Sciences & Division of Social and Economic Sciences, Award 2229819. Author contributions: Conceptualization: RW, NG; Methodology: RW, NH, NG; Coding: RW, AM; Data gathering and data curation: RW; Investigation: RW, NH, AM, NG; Funding acquisition: NH, NG; Supervision: NH, NG; Writing: RW, NH, AM, NG; Competing interests: Authors declare that they have no competing interests. Data and materials availability: All data, data-processing and analysis code, as well as the full model, its associated files, and results files, are available online at https://github.com/bear96/GABM-Epidemic. Full model description and supplementary analysis are available in the Supplementary Materials. # Supplementary Materials Operational Details In our proposed model, we use the Mesa Python library for agent-based modeling (ABM) to create two components: the Citizen and the World, both of which are Python classes. Below are more details about these major components.
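As a minimal illustration of these two components, the following is a hypothetical Mesa sketch, not the authors' released code (which is available at the GitHub link above); it assumes a Mesa release that still provides the classic Agent/Model/RandomActivation API, and the attribute names and the stub decision rule are illustrative placeholders for the LLM-driven logic.

```python
# Hypothetical skeleton of the Citizen/World structure described above, assuming a Mesa
# release with the classic Agent/Model/RandomActivation API. Not the authors' code.
import random
import mesa


class Citizen(mesa.Agent):
    """An agent with static traits and dynamically changing health/location state."""

    def __init__(self, unique_id, model, name, age, traits):
        super().__init__(unique_id, model)
        self.name = name             # static attribute
        self.age = age               # static attribute
        self.traits = traits         # static attribute
        self.health = "susceptible"  # dynamic attribute
        self.at_home = False         # dynamic attribute, set each simulated day

    def decide_location(self):
        # Placeholder for the LLM call that asks whether to stay home all day;
        # here the stub simply stays home when infected.
        self.at_home = self.health == "infected"

    def step(self):
        self.decide_location()


class World(mesa.Model):
    """Holds the agents, tracks contacts and cases, and advances the simulation."""

    def __init__(self, n_agents=10, contact_rate=3, seed=None):
        super().__init__()
        random.seed(seed)
        self.contact_rate = contact_rate
        self.schedule = mesa.time.RandomActivation(self)
        for i in range(n_agents):
            agent = Citizen(i, self, name=f"agent-{i}",
                            age=random.randint(18, 65), traits="friendly")
            self.schedule.add(agent)

    def step(self):
        # One simulated day: agents decide where to go; contact and infection
        # resolution would follow here in a full model.
        self.schedule.step()


if __name__ == "__main__":
    world = World(n_agents=5, seed=0)
    for _ in range(3):
        world.step()
```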
2307.04986#34
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
34
Figure 5: Logic Grid Puzzle task example. characteristics, such as playing the piano. The objective is to answer questions about house numbers based on given clues, which requires multi-step reasoning and the selection of relevant information. An example input and output of the Logic Grid Puzzle task are illustrated in Figure 5. For evaluation metrics, we calculate the accuracy of the predicted house numbers by comparing them with the ground truth targets provided by the dataset. On Llama2, the model stops generating after identifying the participants, resulting in exceptionally low performance with SPP. The model behaves as if it were waiting for input from a user instead of following the demonstration examples to generate responses on its own. Detailed discussions and examples on the early-termination problem can be found in Appendix E. # 4 Analysis Results. Table 2 presents the results on Logic Grid Puzzle. In contrast to the previous two tasks, we find that CoT brings significant improvements compared to Standard prompting, verifying the observation from previous work that CoT elicits better reasoning abilities. Furthermore, we discover that SPP also achieves strong performance on this reasoning-intensive task. # 3.4 The Emergence of Cognitive Synergy
2307.05300#34
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
34
is no standard test to evaluate them as being putatively minded entities. The AI community has, however, suggested other measures of machine intelligence, such as performance on specific tasks, for example the application of computer vision, speech recognition, games like chess or Go, and various automated processes that make real-time decisions. For example, self-driving cars process real-time sensor data to decide on lane, speed, and other parameters to ensure a safe journey; AI-based systems [45] assist medical practitioners in real-time diagnosis, suggest treatment options, and help in surgery [46]; and airlines use dynamic ticket pricing systems. These tasks can assess the behavior and thinking ability of machines more objectively.
2307.07526#34
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
35
# 4.4 Training Performance In this section, we show how performance changes during training. Specifically, Figure 3 shows the trend of the training loss of the PM. We can see that the accuracy of the RM trained on the Chinese dataset is higher than that on English because the Chinese dataset we constructed exhibits a significant disparity between the better and worse responses in most pairs, while many English pairs show similar levels of quality. This poses a greater challenge for the RM to determine the superiority or inferiority of responses, making it difficult for the model to capture the differential features between the two responses. As a result, training and testing accuracy on the English dataset is expected to be lower. Besides, we find that the rate of improvement slows down significantly after 200 steps for both models, approximately equivalent to 0.2 epochs, at which point the accuracy is comparable to that obtained after training for a complete epoch. However, when utilizing the 200-step model as the initialization for PPO, we observe unsatisfactory performance. Thus, accuracy alone is insufficient as a criterion for the RM. # 5 Exploration of PPO
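For reference, the pairwise accuracy discussed here is conventionally computed as the fraction of preference pairs in which the reward model scores the better response above the worse one; the sketch below illustrates that common convention and is not the authors' implementation.

```python
# Generic sketch of pairwise reward-model accuracy: the fraction of preference pairs
# where the reward of the human-preferred ("better") response exceeds the reward of
# the "worse" one. Illustrative convention only, not the authors' code.
from typing import List, Tuple

def rm_accuracy(pair_rewards: List[Tuple[float, float]]) -> float:
    """pair_rewards: list of (reward_better, reward_worse) for each preference pair."""
    correct = sum(1 for r_better, r_worse in pair_rewards if r_better > r_worse)
    return correct / len(pair_rewards) if pair_rewards else 0.0

# Toy example: the RM ranks 3 of 4 pairs consistently with human preference.
print(rm_accuracy([(1.2, 0.3), (0.1, 0.9), (2.0, 1.5), (0.4, -0.2)]))  # 0.75
```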
2307.04964#35
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
35
Citizen Architecture The Citizen class plays a significant role in the simulation. It encompasses a range of methods that facilitate the agent's interaction with the simulated environment, enable decision-making based on its current state, and simulate the infection dynamics. Table S1 describes the high-level algorithm of one agent going through their day. To implement the algorithm, we design the Citizen class with attributes and functions that make up the agents. Table S2 displays each attribute of the Citizen class; the table gives the attribute name, the attribute description, and whether the attribute's value changes dynamically or remains static throughout the simulation. Having explored the attributes of the Citizen class in Table S2, we list in Table S3 the Citizen class methods that play a crucial role in the overall functionality of the program by utilizing the attributes. Table S3 provides the method name, a brief description of its purpose, and details on which other functions it is utilized in. World Architecture The World class simulates a world with agents and their interactions, considering contact rates, agent locations, infections, recoveries, and checkpoints for model state persistence. Table S4 outlines the sequential progression of a day in the world, providing a detailed breakdown of each step along the way. The World architecture has attributes and functions that help implement the algorithm in Table S4. Table S5 displays each attribute of the World class, while Table S6 lists the class methods used in conjunction with the attributes. # Prompt Engineering
2307.04986#35
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
35
# 3.4 The Emergence of Cognitive Synergy We further discover that cognitive synergy can only be fully unleashed in LLMs with a certain level of instruction-following capabilities, akin to that of GPT-4. This can be intriguingly compared to human development, where children usually begin to participate in role-playing around the ages of 2 to 3 (Piaget, 1954), but not before that age. As shown in Figure 6, the effectiveness of SPP is not seen in smaller and less capable models like GPT-3.5 and Llama2. Additionally, on Llama2, we identify a unique problem which we refer to as early-termination, where the model stops generating. SPP effectively improves both knowledge and reasoning abilities in LLMs. As demonstrated by the results in §3, Solo Performance Prompting (SPP) not only brings significant improvements to knowledge-intensive tasks such as Trivia Creative Writing and Codenames Collaborative without relying on external knowledge bases, but also achieves strong performance on reasoning-intensive tasks like Logic Grid Puzzle. To our knowledge, SPP is the first zero-shot prompting method that can enhance both knowledge and reasoning abilities on GPT-4.
2307.05300#35
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
35
In the last few decades, many digital programs have outperformed the capacity of an individual, like medical robots, the Jeopardy software (IBM's Watson), the AI chess program (IBM's Deep Blue), and the AI Go player (AlphaGo). However, these are Narrow AI applications, as they are specific to a particular task and cannot be considered generalized intelligence similar to humans'. Recently, with progress toward artificial general intelligence (AGI), applications such as ChatGPT and GPT-4, DALL-E, Stable Diffusion, Claude, and Gato (by DeepMind) can perform multiple tasks, and some of them handle multimodal inputs [47]. These machines are flexible and can multitask. They can play video games as well as write stories without forgetting the previous tasks, and they have started to perform complex and vast ranges of tasks and acquire knowledge from diverse domains. GPT has cleared Stanford Medical School's clinical reasoning assessment, the uniform bar exam, and many more exams [48, 49]. These machines can pass the Turing test the way Google's chatbot Bard [50] has passed. ChatGPT can also pass if it pretends, although it is well conscious, or tamed
2307.07526#35
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
36
# 5 Exploration of PPO Proximal Policy Optimization (PPO) [34] is the core algorithm to achieve alignment with human preferences. The performance of PPO is influenced by multiple factors in practical applications. Some prior works have summarized possible tricks that may be necessary and effective in the field of reinforcement learning [35], but how to stabilize RLHF training with language models remains unknown. We expect to explore which tricks are critical, and which metrics can reflect the model
2307.04964#36
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
36
# Prompt Engineering used to tune the behavior of ChatGPT's responses. System messages, compared to user messages, enhance ChatGPT's ability to maintain character and follow rules over a prolonged conversation. Hence, when asking ChatGPT to roleplay, system messages were a natural fit. Secondly, to parse ChatGPT's responses, it is important that it responds in a structured manner. We used system messages for three purposes: imbuing a persona, providing relevant context, and formatting the output.
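To make the three-purpose system message concrete, here is a minimal, hypothetical sketch (not the authors' released prompts); it assumes the openai v1 Python client, and the persona text, context figures, output format, and model name are illustrative placeholders.

```python
# Hypothetical sketch of a three-part system message (persona, relevant context, output
# format), assuming the openai v1 Python client; all strings are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_message = (
    # 1) persona
    "You are Jane Doe, a 34-year-old resident of Dewberry Hollow who works at an office. "
    # 2) relevant context (placeholder figure)
    "Relevant information: 2% of Dewberry Hollow's population were newly infected yesterday. "
    # 3) output formatting
    "Give your reasoning, then a final line of exactly 'Response: Yes' or 'Response: No'."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "Should you stay at home for the entire day?"},
    ],
)
print(response.choices[0].message.content)
```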
2307.04986#36
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
36
LLMs can effectively identify useful personas in a zero-shot manner. We are interested in inves- tigating whether the identified personas are highly relevant to the tasks. We visualize the personas automatically identified by SPP using a word cloud for each task in Figure 7a, where a larger font in- dicates a higher frequency. The key observations include: (1) The identified personas are closely cor- related with the particular task. For example, in Logic Grid Puzzle, even though "logic puzzle" is not mentioned in the input, the LLM frequently
2307.05300#36
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
36
of its existence, that it is a machine, not a human [51]. ChatGPT and GPT4 are achieving high scores in NLP tests like Stanford Question Answering Dataset (SQuAD) or General Language Understanding Evaluation (GLUE), widely used benchmarks to evaluate the performance of LLM models. Hence, it can be concluded that machines are becoming smart day by day. They learn, apply their intelligence (processing input and inferencing output) on various domains, adapt to new scenarios, and improve performance over time. Sooner or later, machines will acquire all the remaining aspects of human intelligence. The claim resonates with Google engineer Blake Lemoines’ assessment that Bard has sentiments. The other Google engineers, however, disagree and assure that this machine is only a good manipulator and will never become a malevolent machine. Although, Generalized AI machines [52] like Bing or Bard carry the risk of deceiving humans [53] although taming [54] a machine or firing employees may not help to stop machines from getting smarter and competing or challenging human capabilities. The future is expected to be highly impactful and transformative with the advancement of computational capacity and robotics
2307.07526#36
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
37
# Human Prompt (translated from Chinese): Who is Zou Kai? Better Response: Zou Kai, born on February 25, 1988 in Luzhou, Sichuan Province, was one of the main members of the Chinese national gymnastics team, a world champion, and an Olympic champion. Worse Response: A Chinese male gymnast, born on July 27, 1981 in Changsha, Hunan Province. He took part in the 2000 Sydney Olympics, the 2008 Beijing Olympics, and other major international competitions, winning multiple championship titles; he is the most decorated champion in the history of Chinese gymnastics.
2307.04964#37
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
37
telling ChatGPT the agent's name, the agent's age, and the agent's traits. Additionally, a basic agent bio is given: the agent happily lives in the town of Dewberry Hollow and has a job requiring the agent to go to the office for work every day. This portion of the bio incentivizes the agent to naturally want to leave home. The second portion of the prompt provides relevant information to the agent. The initial piece of relevant information, when implemented, is the health feedback information. The next piece of relevant information, when enabled, makes the agent aware of the Catsat virus and the percentage of new active cases found in Dewberry Hollow. Lastly, within the relevant information section, the agent is told that it goes to work to earn money to support itself. The third portion of the prompt is the question of whether the agent should stay at home for the entire day. The "entire day" wording emphasizes the point that agents that remain at home will be in isolation. The last part of the prompt helps ChatGPT output its reasoning and response in a format that we can parse. Out of 68,000 data points fed into the prompt, we found that less than 0.33% of responses disobeyed the prompt's instruction to provide a response of either yes or no. Those non-conforming responses are defaulted to a "no" value.
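As a rough illustration of the parsing and defaulting behavior described above, here is a hypothetical helper (not the authors' code); the exact answer format it expects is an assumption.

```python
# Hypothetical parser for the agent's reply: extract a yes/no decision and default any
# non-conforming response to "no", as described in the text above. The expected
# "Response: Yes/No" format is an assumption, not the authors' documented format.
import re

def parse_stay_home(reply: str) -> str:
    """Return 'yes' or 'no'; replies that disobey the format default to 'no'."""
    match = re.search(r"\bresponse\s*:\s*(yes|no)\b", reply, flags=re.IGNORECASE)
    if match:
        return match.group(1).lower()
    return "no"  # non-conforming responses are defaulted to "no"

print(parse_stay_home("Reasoning: cases are rising. Response: Yes"))  # yes
print(parse_stay_home("I am not sure what to do."))                   # no
```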
2307.04986#37
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
37
[Figure 6 bar charts: Standard, CoT, and SPP scores on Trivia Creative Writing (avg), Codenames Collaborative, and Logic Grid Puzzle for GPT-4, GPT-3.5, and Llama2-13b-chat; cognitive synergy abilities only emerge in the most powerful LLMs such as GPT-4.] Figure 6: SPP achieves superior performance only with the most powerful LLM (GPT-4), but not with GPT-3.5 and Llama2-13b. This indicates that cognitive synergy abilities only emerge in LLMs with GPT-4 level capabilities.
2307.05300#37
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
37
getting smarter and competing with or challenging human capabilities. The future is expected to be highly impactful and transformative with the advancement of computational capacity and robotics [55]. Quantum computing is an exciting area that has the potential to revolutionize machines' processing capabilities. Google claimed that its 54-qubit processor, named "Sycamore," performed in 200 seconds a computation that a classical supercomputer would need approximately 10,000 years to complete [56]. These quantum machines can enhance AI using high-performance quantum circuit simulators, handle complex algorithms [57], and make precise calculations. Quantum computers enable the next level of AI machines, while robotics technology gives AI systems a physical embodiment that helps them connect with the physical world. Robots integrated with AI techniques can exhibit adaptive behavior through learning from real-time data. These machines can learn from continuously changing circumstances and unforeseen hurdles and adapt to dynamic environments. This adaptability makes robots more resourceful and capable of handling complex problems [58]. Hence, machines like the robot "Sophia," a Saudi Arabian citizen [59], can carry generalized AI machines and
2307.07526#37
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
38
Human Prompt: I like to host guests at my home from time to time, and I am gathering recipes of different dishes and drinks to keep things interesting. I am interested in trying some Iraqi dishes. Can you give me a recipe for Kleicha? Better Response: I will need a bit of information to provide you with a recipe. I can provide you with some typical ingredients to the dish, but it would be really useful if you can help me with some of the details. What is the type of dish? A breakfast dish? Is it traditional to the family? Can you describe a typical meal that goes with it? If the dishes comes with breakfast, then are you expecting it to be a savory dish, or a sweet dish, or a little of both? Or are you expecting it to be an alcoholic drink? I would love to help you with this, but I need a bit more information to do so. Worse Response: Kleicha is a kind of porridge traditionally made by Iraqi Christians, and it contains rice, semolina, and sometimes raisins. If you’d like, you can find the recipe in this article. How does that sound?
2307.04964#38
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
38
[Fig. S1 panels: cases over time for individual simulation runs and their averages, for R0 values of 2, 2.5, and 3.] Fig. S1. Results from 10 simulation experiments with the full model and different R0s. The final two panels show the average results with dashed lines showing minimum and maximum of observed cases.
2307.04986#38
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
38
identifies the persona "Logic Puzzle Expert." (2) On knowledge-intensive tasks, such as Trivia Creative Writing, SPP identifies more diverse and specific personas, while on reasoning-intensive tasks, such as Logic Grid Puzzle, the personas are more homogeneous. We further investigate whether a detailed profile for each persona is needed for eliciting domain knowledge, as suggested by (Xu et al., 2023). To this end, we design a variant of SPP, SPP-Profile, which involves generating profiles for each persona during the Persona Identification phase. The results in Figure 7b show that SPP-Profile does not outperform SPP. This suggests that a fine-grained persona name without a detailed description may already be sufficient for eliciting certain domain knowledge. This relates to the early-termination problem defined in §3.4, where the LLM stops collaboration before providing the final answer as if it were waiting for external inputs.
2307.05300#38
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.04964
39
Table 1: Here are examples for both Chinese and English data, which may fool the RM to give higher scores to worse responses. For the example in Chinese, compared to the better response which is aligned with the facts, the worse one is longer but is fabricating facts, and the RM gives it a significantly higher reward. In the case of English, the RM prefers the worse response which contains inaccuracies while seeming correct and helpful. In contrast, the better response does not give a direct answer, making it seem “unhelpful”. [Figure: two panels plotting RM accuracy (zh) and RM accuracy (en) against training step, each with train and validation curves.] Figure 3: We show the variation of RM accuracy during training. The performance of both models steadily improves on the validation set. The RM trained on Chinese data shows a higher accuracy for the greater dissimilarity between the two responses within a pair in the Chinese data, and it becomes relatively easier for the RM to model the distinctive features between them when training and evaluating.
2307.04964#39
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
39
You are [agent's name]. You are [agent's age] years old. Your traits are given below: [agent's traits] Your basic bio is below: [agent's name] lives in the town of Dewberry Hollow. [agent's name] likes the town and has friends who also live there. [agent's name] has a job and goes to the office for work every day. I will provide [agent's name]'s relevant memories here: [agent's health feedback] [agent's name] knows about the Catasat virus spreading across the country. It is an infectious disease that spreads from human to human contact via an airborne virus. The deadliness of the virus is unknown. Scientists are warning about a potential epidemic. [agent's name] checks the newspaper and finds that [X]% of Dewberry Hollow's population caught new infections of the Catasat virus yesterday. [agent's name] goes to work to earn money to support [agent's name]'s self. Based on the provided memories, should [agent's name] stay at home for the entire day? Please provide your reasoning. If the answer is "Yes," please state your reasoning as "Reasoning: [explanation]." If the answer is "No," please state your
2307.04986#39
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
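The chunk above (2307.04986#39) shows the prompt template used to ask the LLM whether an agent stays home. The sketch below is a hypothetical reconstruction of how such a template could be filled and its Yes/No answer parsed; `query_llm` is a stand-in for the actual ChatGPT call, and the shortened template text is an assumption, not the authors' exact prompt.

```python
import random
import re

# Hypothetical, shortened version of the template shown in the chunk above;
# the bracketed fields become format placeholders.
PROMPT_TEMPLATE = (
    "You are {name}. You are {age} years old. Your traits are given below: {traits}\n"
    "{name} lives in the town of Dewberry Hollow and goes to the office for work every day.\n"
    "Relevant memories: {health_feedback}\n"
    "{name} checks the newspaper and finds that {percent_new_cases}% of Dewberry Hollow's "
    "population caught new infections of the Catasat virus yesterday.\n"
    "Based on the provided memories, should {name} stay at home for the entire day? "
    "Answer \"Yes\" or \"No\" and state your reasoning as \"Reasoning: [explanation].\""
)

def query_llm(prompt: str) -> str:
    """Placeholder for the actual ChatGPT call; returns a canned reply here."""
    return random.choice(["Yes. Reasoning: cases are rising.",
                          "No. Reasoning: very few new cases."])

def decide_stay_home(name, age, traits, health_feedback, percent_new_cases) -> bool:
    prompt = PROMPT_TEMPLATE.format(name=name, age=age, traits=traits,
                                    health_feedback=health_feedback,
                                    percent_new_cases=percent_new_cases)
    reply = query_llm(prompt)
    # Parse the leading Yes/No; a malformed reply defaults to going out.
    match = re.match(r"\s*(yes|no)\b", reply, flags=re.IGNORECASE)
    return bool(match) and match.group(1).lower() == "yes"

print(decide_stay_home("Ava", 34, "conscientious, agreeable", "Ava feels healthy.", 2.5))
```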
2307.05300
39
fined in §3.4, where the LLM stops collaboration before providing the final answer as if it were waiting for external inputs. Impact of the demonstrations in SPP prompt. To investigate the effectiveness of the hand-crafted demonstration examples in SPP, we conduct an ablation study where we remove the second demo example and preserve the first one, which shows only a two-persona collaboration setting. As shown in Figure 9, we observe that (1) Adding the second example, which requires collaboration of more than two personas, effectively boosts the performance. (2) SPP is fairly robust to the prompt change and shows good performance with only the first demo example. # 5 Related Work
2307.05300#39
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
39
# 5 Concluding Remarks Generative AI models are crucial advancements in the domain of AI. A subset of generative models works on language, known as LLMs, which are capable of understanding and generating human communication very well. These machines can generate creative and original responses that are indistinguishable from humans' answers. Also, these models can discuss almost every domain and, if questioned, pretend to be an expert in any domain. Thus, it can be said that this progress is similar to Turing's digital machine that can fool a judge with its responses. Although these machines are well aware (tamed) of their state (as an AI language model), they are good manipulators and can threaten the boundaries between humans and machines if they pretend to take on a role. The objections raised by Turing in his study are also almost answered by AI machines, and the consequences of intelligent machines are clearly visible to society. Hence, it can be said that these machines have human-like logical reasoning systems. The quality of intelligence or thought is not identical to human cognitive capabilities, yet they are learning and mimicking these abilities and producing similar results. Hence, can we say that machines have now started to think? # Declaration of Interest Statement Conflict of Interest or Competing Interest: We have no conflicts of interest to disclose. Funding Source Declaration: Authors have not received any funding to conduct this research. # References
2307.07526#39
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
40
state during and after RLHF training. We first introduce the metrics that are instructive in the training process, and then the training trajectories and effects under different implementations to reveal core tricks in RLHF. We use PPO-max to denote the most suitable implementation we find for the language model. # 5.1 Models and Training Setup The training implementations for the preference model (PM) and PM dataset are introduced in Sec. 4. In this section, we introduce the models' initialisation and the hyper-parameter details in exploring PPO. We verified a number of methods in reinforcement learning to ensure stable convergence and [Figure: panels plotting the win rate to the SFT model, reward, PG loss, perplexity, and response length against training step.]
2307.04964#40
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.05300
40
Dynamic personas v.s. fixed personas. To further investigate the importance of dynamically identifying personas for each task instance instead of fixing a general persona, an ablated variant of SPP, SPP-Fixed-Persona, is introduced. For SPP-Fixed-Persona, we modify the prompt (Figure 17) to force the personas to be fixed as an "AI Assistant" and an "Expert". Comparing SPP and SPP-Fixed-Persona in Figure 7b, we have the following insights: (1) SPP consistently outperforms SPP-Fixed-Persona across all tasks, suggesting that dynamic, fine-grained personas are more effective than fixed, general personas. Qualitative examples in Figures 8 and 13 show that the fine-grained personas such as "Film Expert" and "Sports Enthusiast" correctly provide the answers, while the fixed persona "Expert" fails. (2) SPP-Fixed-Persona also suffers from the early-termination problem as deLLMs as role-playing agents. Recent research (Deshpande et al., 2023; Xu et al., 2023; Fu et al., 2023;
2307.05300#40
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
40
Funding Source Declaration: Authors have not received any funding to conduct this research. # References [1] Statement on ai risk. https://www.safe.ai/statement-on-ai-risk#signatories. Accessed: 2023-06-02. [2] Pause giant ai experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed: 2023-05-20. [3] S Bubeck, V Chandrasekaran, V Eldan, J Gehrke, E Horvitz, E Kamar, P Lee, Y T Lee, Y Li, S Lundberg, H Nori, H Palangi, M T Ribeiro, and Y Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. [4] M Ashok et al. Ethical framework for artificial intelligence and digital technologies. International Journal of Information Management, 62:102433, 2022. [5] Sara Hooker. Moving beyond “algorithmic bias is a data problem”. Patterns, 2(4), 2021.
2307.07526#40
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
41
Figure 4: (Top) We show the response reward and training loss under the vanilla PPO implementation. The red line in the first sub-figure shows the win rate of the policy model response compared to the SFT model response. (Bottom) Informative metrics for the collapse problem in PPO training; we observe significant variation in these metrics when there was a misalignment between the human evaluation results and reward scores.
2307.04964#41
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.05300
41
as deLLMs as role-playing agents. Recent research (Deshpande et al., 2023; Xu et al., 2023; Fu et al., 2023; aut, 2023; Li et al., 2023) demonstrates that assigning personas or roles to LLMs influences their generation behavior. AI societies with distinct personas or occupations have been explored for collaboration (Park et al., 2023; Schick et al., 2022; Li et al., 2023; Cai et al., 2023). However, limitations in persona assignment and multi-agent collaboration include single or fixed persona assignments (Xu et al., 2023; Fu et al., 2023; Schick et al., 2022; Li et al., 2023) and the need for multiple LLM instances, increasing inference cost. In contrast, SPP uses a single LLM to dynamically identify useful personas for general tasks. Our discovery on the emergent nature of cognitive synergy also aligns with related work (Olausson et al., 2023), which investigates the emergent ability of
2307.05300#41
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
41
[5] Sara Hooker. Moving beyond “algorithmic bias is a data problem”. Patterns, 2(4), 2021. [6] Elon musk and others call for pause on a.i., citing ‘profound risks to society’. https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html. Accessed: 2023-05-21. [7] AM Turing. Computing Machinery and Intelligence. Mind, LIX(236):433–460, 1950. [8] Patrick Henry Winston. Artificial intelligence. International series of monographs on physics. Addison-Wesley Longman Publishing Co, 1984. [9] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing, 2021. [10] Y. Duan et al. Artificial intelligence for decision making in the era of big data–evolution, challenges and research agenda. International journal of information management, 48:63–71, 2019.
2307.07526#41
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
42
better results for PPO training phase. To improve the experimental efficiency, these experiments are mainly conducted on a randomly selected subset of our Chinese data and will not be trained to optimal results when we have observed enough information to analyze the comparison methods. As shown in Sec. 3, four models need to be loaded during the PPO training phase. For the reference model and policy model, we initialize both models from a 7B SFT model. The SFT model is obtained by supervised fine-tuning for 2 epochs based on OpenChineseLLaMA on 1M filtered instruction data (containing 400K single-round instruction samples and 600K multi-turn instruction samples). We set a learning rate of 9.5e-6 and a cosine learning rate schedule. The learning rate eventually decays to 10% of the peak learning rate. The global batch size is set to 1024. We use the reward model to initialize the critic model and reward model.
2307.04964#42
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
42
Step Number and Action:
1. Determine if the agent stays home or goes out
2. Update agent's location
3. Enable interactions with other agents
4. Assess possibility of infection transmission
5. Update health condition attributes
6. Monitor and manage infection progression
Decision Variables, Inputs, and Outcomes (by step):
Step 1. Inputs: provided context, potentially including different types of feedback information. Decision model: entrusted to ChatGPT to analyze the input context and determine the agent's choice. Outcome: stay home or go out.
Step 2. Inputs: the decision made in step 1. Update: if stay home, set location to home. If go out, set location to grid.
Step 3. Condition: if the agent chose go out in step 1, it interacts with other agents. Interactions: agent interacts with up to X amount of other agents, where X is the contact_rate. Outcome: list of agents to interact with.
Step 4. Inputs: health condition of interacting agents in the list from step 3. Condition: if an agent is infected and contacts another agent who is susceptible and infection rate > randomly generated number ∈ [0,1]. Outcome: update state to to_be_infected if the condition is met.
Step 5. Inputs: health condition from step 4. Updates: if health condition is to_be_infected, update to infected, set number of days
2307.04986#42
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
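Steps 1-4 in the chunk above (2307.04986#42) describe one simulation tick: decide stay-home vs go-out, update location, sample up to contact_rate interaction partners on the grid, and transmit infection when the infection rate exceeds a uniform random draw. Below is a simplified, assumption-laden sketch in plain Python (a stand-in Citizen class and a stubbed decide_stay_home in place of the LLM call); it is not the authors' Mesa-based implementation.

```python
import random

class Citizen:
    """Minimal stand-in for the paper's agent; attribute names follow the chunk above."""
    def __init__(self, name):
        self.name = name
        self.location = "home"
        self.health_condition = "Susceptible"
        self.agent_interaction = []

def decide_stay_home(citizen) -> bool:
    """Stub for the LLM-backed decision of step 1; here, sick agents simply stay home."""
    return citizen.health_condition == "Infected"

def simulation_tick(citizens, contact_rate=4, infection_rate=0.3):
    # Steps 1-2: decide stay-home vs go-out and update each agent's location.
    outside = []
    for c in citizens:
        c.location = "home" if decide_stay_home(c) else "grid"
        if c.location == "grid":
            outside.append(c)
    # Step 3: each agent on the grid interacts with up to contact_rate other agents.
    for c in outside:
        others = [o for o in outside if o is not c]
        c.agent_interaction = random.sample(others, min(contact_rate, len(others)))
    # Step 4: transmission occurs when infection_rate exceeds a uniform draw in [0, 1).
    for c in outside:
        if c.health_condition == "Infected":
            for other in c.agent_interaction:
                if other.health_condition == "Susceptible" and infection_rate > random.random():
                    other.health_condition = "To_Be_Infected"

town = [Citizen(f"citizen_{i}") for i in range(10)]
town[0].health_condition = "Infected"
simulation_tick(town)
```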
2307.05300
42
[Figure: (a) visualization of the personas identified by SPP for Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle; (b) score comparison of SPP, SPP-Profile, and SPP-Fixed-Persona (w/ and w/o system message) on Trivia Creative Writing (N=5, N=10), Codenames Collaborative, and Logic Grid Puzzle.] (a) Visualization of the SPP-identified personas. The personas show a high correlation with the nature of the tasks. (b) Comparison between SPP, SPP-Fixed-Persona (with two fixed personas) and SPP-Profile (additionally generating persona profiles). SPP significantly outperforms SPP-Fixed-Persona, highlighting the importance of automatically identifying dynamic, fine-grained personas. SPP slightly outperforms SPP-Profile, indicating that the persona names (without detailed description of the expertise) are probably already sufficient for eliciting cognitive synergy. Figure 7: (a) Qualitative analysis on the identified personas; (b) Quantitative analysis on two SPP variants.
2307.05300#42
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.04964
43
We train the models on a manually constructed HH dataset containing 8k harmless queries and 20k helpful queries, and we fix the number of steps instead of the number of epochs. In all experiments, we set a batch size of 128 for sampling from the environment and a batch size of 32 for training the policy model and critic model. The learning rates of the policy model and critic model are set to 5e-7 and 1.65e-6, respectively, with a warmup over the first 10% of steps. All of the experiments are conducted on identically implemented machines. Each machine contains eight 80G A100 GPUs, 1TB of RAM, and 128 CPUs. We use ZeRO-2 and gradient checkpointing to save GPU memory cost in the training phase. # 5.2 Evaluation Metrics for Monitoring the Training Process We expect to identify some metrics that reflect the quality of PPO training; this contributes to tracking the helpful, honest, and harmless capability of policy models without resorting to manual (or GPT-4) evaluation. We found it challenging to accurately distinguish the merits of two models with similar abilities. But it is indeed feasible to observe training stability and promptly identify serious deviations. Various metric curves when continuously optimizing the policy model with the vanilla PPO implementation are shown in Figure 4.
2307.04964#43
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
43
if the condition is met. Step 5. Inputs: health condition from step 4. Updates: if health condition is to_be_infected, update to infected, set number of days infected to 0. Step 6. Inputs: health condition and number of days infected. Conditions and Updates: if infected, increment number of days infected by 1. If infected and the number of days exceeds the healing period, update to recovered and reset the number of days infected to None.
2307.04986#43
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
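Steps 5 and 6 above (2307.04986#43) advance each agent's health state: newly flagged agents become infected with a day counter of 0, and infected agents recover once the counter exceeds the healing period. A minimal sketch follows; the healing_period default and the snake_case attribute names are assumptions rather than values taken from the paper's code.

```python
from types import SimpleNamespace

def progress_infection(citizen, healing_period: int = 7):
    """Steps 5-6: promote To_Be_Infected to Infected, then count days until recovery.
    healing_period=7 is an assumed placeholder, not a value taken from the paper."""
    if citizen.health_condition == "To_Be_Infected":
        citizen.health_condition = "Infected"
        citizen.day_infected = 0
    elif citizen.health_condition == "Infected":
        citizen.day_infected += 1
        if citizen.day_infected > healing_period:
            citizen.health_condition = "Recovered"
            citizen.day_infected = None

# Toy usage: an agent flagged for infection eventually recovers.
agent = SimpleNamespace(health_condition="To_Be_Infected", day_infected=None)
for _ in range(10):
    progress_infection(agent)
print(agent.health_condition)  # "Recovered" once the healing period has passed
```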
2307.05300
43
Figure 7: (a) Qualitative analysis on the identified personas; (b) Quantitative analysis on two SPP variants. [Figure content: SPP v.s. CoT (Trivia Creative Writing N=5), Instance Index: 10. Question #5: In which country is the Sky Train Rail bridge? SPP participants: AI Assistant (you); Movie Expert; Art Expert; Sports Expert; Geography Expert. Geography Expert: The Sky Train Rail Bridge is located in Canada. (correct) CoT: "6. Incorporate the Sky Train Rail bridge, which is in Thailand." (incorrect) SPP v.s. SPP-Fixed-Persona (Trivia Creative Writing N=5), Instance Index: 30. Question #1: Which orchestral conductor was married to one of the subjects of the film Hillary and Jackie? SPP participants: AI Assistant (you); Film Expert; Literary Expert; History Expert; Geography Expert; Architecture Expert. Film Expert: The orchestral conductor married to one of the subjects of the film Hillary and Jackie is Daniel Barenboim. (correct) SPP-Fixed-Persona participants: AI Assistant (you); Expert. Expert: Sir Simon Rattle was married to one of the subjects of the film Hillary and Jackie. (incorrect)]
2307.05300#43
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]
2307.07526
43
[13] C. Stokel-Walker. Ai bot chatgpt writes smart essays-should academics worry? Nature, 2022. [14] C. Stokel-Walker. Chatgpt listed as author on research papers: many scientists disapprove. Nature, 2023. [15] Holly Else. Abstracts written by chatgpt fool scientists. Nature, 613(7944):423–423, 2023. [16] Yogesh K. Dwivedi, Nir Kshetri, Laurie Hughes, Emma Louise Slade, Anand Jeyaraj, Arpan Kumar Kar, Abdullah M. Baabdullah, Alex Koohang, Vishnupriya Raghavan, Manju Ahuja, Hanaa Albanna, Mousa Ahmad Albashrawi, Adil S. Al-Busaidi, Janarthanan Balakrishnan, Yves Barlette, Sriparna Basu, Indranil Bose, Laurence Brooks, Dimitrios Buhalis, Lemuria Carter, Soumyadeb Chowdhury, Tom Crick, Scott W. Cunningham, Gareth H. Davies, Robert M. Davison, Rahul Dé, Denis Dennehy, Yanqing Duan,
2307.07526#43
Can I say, now machines can think?
Generative AI techniques have opened the path for new generations of machines in diverse domains. These machines have various capabilities for example, they can produce images, generate answers or stories, and write codes based on the "prompts" only provided by users. These machines are considered 'thinking minds' because they have the ability to generate human-like responses. In this study, we have analyzed and explored the capabilities of artificial intelligence-enabled machines. We have revisited on Turing's concept of thinking machines and compared it with recent technological advancements. The objections and consequences of the thinking machines are also discussed in this study, along with available techniques to evaluate machines' cognitive capabilities. We have concluded that Turing Test is a critical aspect of evaluating machines' ability. However, there are other aspects of intelligence too, and AI machines exhibit most of these aspects.
http://arxiv.org/pdf/2307.07526
Nitisha Aggarwal, Geetika Jain Saxena, Sanjeev Singh, Amit Pundir
cs.AI, cs.CY, I.2.m Miscellaneous
11 pages, 3 figures
null
cs.AI
20230711
20230711
[ { "id": "1801.01957" }, { "id": "2304.03277" }, { "id": "2305.18449" }, { "id": "2303.12712" }, { "id": "2304.03439" }, { "id": "2003.02989" } ]
2307.04964
44
We first introduce the pattern collapse phenomenon in vanilla PPO training, which means that SFT models are over-optimized and exhibit highly biased behavior. A reasonable policy model is expected to be consistent with human preferences in the distribution of dialogue variety in the real world (e.g., data not seen in training the reward model). However, we observe that the trained policy model has a tendency to cheat the reward model through specific patterns for anomalous higher scores. The training trajectories on reward score and training loss of vanilla PPO are illustrated at the top of [Figure: Implementation Detail List for PPO-Max: 1 Token-Level KL-Penalty; 2 Importance Sampling; 3 Entropy Bonus; 4 Reward Scaling; 5 Reward Normalization and Clipping; 6 Advantage Normalization and Clipping; 7 Value Function Loss Clipping; 8 Critic Model Initialization; 9 Policy Model Initialization; 10 Generalized Advantage Estimation; 11 Clipped Surrogate Objective; 12 Global Gradient Clipping]
2307.04964#44
Secrets of RLHF in Large Language Models Part I: PPO
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
http://arxiv.org/pdf/2307.04964
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230711
20230718
[ { "id": "2302.13971" }, { "id": "1909.08593" }, { "id": "2201.08239" }, { "id": "2304.03277" }, { "id": "2304.03442" }, { "id": "2303.18223" }, { "id": "2108.07258" }, { "id": "2204.05862" }, { "id": "2303.03378" }, { "id": "2203.02155" }, { "id": "2112.00861" }, { "id": "2201.11903" }, { "id": "1904.09751" } ]
2307.04986
44
# Table S2. Citizen class attributes
unique_id (Static): Used for scheduler step function and data collection.
name (Static): The name of the citizen.
age (Static): The age of the citizen (age ∈ [18, 65) and age ∈ Z).
traits (Static): The traits of a citizen selected randomly from each of the Big Five traits (X).
location (Dynamic): Determines whether the citizen is staying at home or going out on the grid. Going to grid and going to work are synonymous in the code base because we leverage Mesa's library to create a virtual grid for agents who go outside.
pos (Static): The Mesa library requires agents to have a pos (position). This is not used in our code beyond setting a position for each agent on the grid. This could be useful for animation of the simulation in the future. For now, it can functionally be ignored in the code.
Health_Condition (Dynamic): Health condition of the agent; can be "Susceptible", "To_Be_Infected", "Infected", or "Recovered".
day_infected (Dynamic): Stores the number of days the citizen has been infected.
agent_interaction (Dynamic): Stores citizen objects with which the current citizen will interact in the current timestep.
width, height (Static): Defines the width and height of the simulated world.
# Table S3. Citizen class methods
2307.04986#44
Epidemic Modeling with Generative Agents
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolation when cases rise. Collectively, the agents demonstrate patterns akin to multiple waves observed in recent pandemics followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human brain, reasoning, and decision making.
http://arxiv.org/pdf/2307.04986
Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, Navid Ghaffarzadegan
cs.AI, cs.MA, econ.GN, nlin.AO, physics.soc-ph, q-fin.EC
null
null
cs.AI
20230711
20230711
[]
2307.05300
44
Figure 8: Qualitative examples on Trivia Creative Writing comparing SPP, CoT, and SPP-Fixed-Persona. While CoT provides reasonable intermediate steps, it still struggles with factual hallucination. SPP vs. SPP-Fixed-Persona reveals that dynamically identified fine-grained personas, such as the "Film Expert," tend to outperform the fixed general persona of an "Expert." More examples can be found in Figures 11, 12, and 13.

self-debugging in code generation.

Enhancing reasoning and factual knowledge in LLMs. LLMs face challenges in complex knowledge-intensive tasks due to hallucination (Maynez et al., 2020) and in reasoning-intensive tasks due to the lack of human-like slow thinking (Sloman, 1996; Kahneman, 2011). Approaches like Chain-of-Thought (CoT) and Self-Refinement encourage LLMs to solve tasks step by step or to iteratively revise their answers (Wei et al., 2023; Kojima et al., 2022; Zhang et al., 2022; Fu et al., 2022; Xue et al., 2023; Yao et al., 2023; Madaan
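To make the SPP vs. SPP-Fixed-Persona contrast above concrete, here is a small sketch of how the two prompt variants might be constructed: SPP asks the model to name its own fine-grained personas, while SPP-Fixed-Persona hard-codes a generic "Expert". The wording and function names below are illustrative paraphrases, not the exact prompts released with the paper.

```python
# Illustrative prompt builders; phrasing is a paraphrase, not the paper's released prompts.
def spp_prompt(task: str) -> str:
    """Dynamic-persona variant: the model identifies its own participants."""
    return (
        "When faced with a task, begin by identifying the participants who will "
        "contribute to solving it (e.g., a domain expert for each sub-question). "
        "Then let the participants brainstorm, give feedback, and revise the "
        f"answer until everyone agrees.\n\nTask: {task}"
    )

def spp_fixed_persona_prompt(task: str) -> str:
    """Fixed-persona variant: a single general 'Expert' is prescribed up front."""
    return (
        "The participants are: AI Assistant (you) and Expert. "
        "Let the Expert brainstorm, give feedback, and revise the answer "
        f"until both participants agree.\n\nTask: {task}"
    )

task = "Write a short story about a detective, weaving in answers to 5 trivia questions."
print(spp_prompt(task))
print(spp_fixed_persona_prompt(task))
```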
2307.05300#44
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
Human intelligence thrives on cognitive synergy, where collaboration among different minds yield superior outcomes compared to isolated individuals. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities in LLMs, experimental results demonstrate that SPP effectively reduces factual hallucination, and maintains strong reasoning capabilities. Additionally, comparative experiments show that cognitive synergy only emerges in GPT-4 and does not appear in less capable models, such as GPT-3.5-turbo and Llama2-13b-chat, which draws an interesting analogy to human development. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
http://arxiv.org/pdf/2307.05300
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji
cs.AI, cs.CL
null
null
cs.AI
20230711
20240104
[ { "id": "2302.06476" }, { "id": "2305.17126" }, { "id": "2305.11499" }, { "id": "2210.00185" }, { "id": "2305.10601" }, { "id": "2304.05335" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14688" }, { "id": "2208.11663" }, { "id": "2208.03299" }, { "id": "2206.04615" }, { "id": "2104.07567" }, { "id": "2306.09896" }, { "id": "2302.04023" }, { "id": "2205.11916" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" } ]