doi: string (length 10–10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31–31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8–8)
updated: string (length 8–8)
references: list
2308.12519
19
• BFS (Yao et al., 2023) constructs a decision tree in a top-down manner to search for a feasible solution. Different from the original version, we do not introduce any task-specific knowledge into the tree search process. Since the number of API calls increases exponentially with the depth of the decision tree, we limit the search breadth of each state to 2, and each level keeps only the 3 decision states with the highest performance based on ToolEval comparison (see § 5.1). Finally, BFS provides 3 decision sequences for an instruction. • DFS (Yao et al., 2023) constructs a decision tree by going as deep as possible along each branch and exploring the most recently visited states. As with BFS, no task-specific knowledge is introduced into the tree search process. The search process terminates after deriving 3 decision sequences. • DFSDT (Qin et al., 2023c) is an improved version of DFS, which allows LLMs to dynamically assess different decision states and choose either to proceed along a promising path or to abandon an existing state and expand another one. As with DFS, the decision search process of DFSDT ends after generating 3 decision sequences. Evaluation Metrics To ensure a rigorous and accurate evaluation of the performance of our proposed decision-making approach, we adopt two evaluation metrics prescribed by ToolBench:
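A minimal sketch of the breadth-limited BFS described above, under assumed interfaces: `expand` and `score` stand in for the LLM expansion step and the ToolEval comparison, and the `is_finish` attribute is a hypothetical flag for a "Finish" decision; this is not ToolBench's actual code.

```python
def bfs_decisions(root, expand, score, branch=2, beam=3, max_depth=12):
    """Breadth-limited BFS over decision states: each state expands at most
    `branch` children, and each level keeps only the `beam` best states."""
    frontier, finished = [root], []
    for _ in range(max_depth):
        children = []
        for state in frontier:
            for child in expand(state)[:branch]:          # limit search breadth per state
                (finished if child.is_finish else children).append(child)
        if not children:
            break
        frontier = sorted(children, key=score, reverse=True)[:beam]  # keep top-3 states per level
    # return up to `beam` decision sequences for the instruction
    return sorted(finished or frontier, key=score, reverse=True)[:beam]
```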
2308.12519#19
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
19
$= P(a_{1:t-1}, a_t, a_{t+1:T} \mid h_0, g) = p(a_{1:t-1} \mid h_0, g)\, p(a_t \mid h_0, a_{1:t-1}, g)\, p(a_{t+1:T} \mid h_0, a_{1:t}, g) = p(a_{1:t-1} \mid h_0, g) \cdot p(a_t \mid h_{t-1}, g) \cdot p(a_{t+1:T} \mid h_t, g)$. To align with Eq. 1 of the planning problem, we take the log on both sides and maximize rather than minimize. We get the accumulated value $f_{\mathrm{acc}}(h_{t-1}) = \log p(a_{1:t-1} \mid h_0, g)$, the heuristic payoff $f_{\mathrm{heur}}(h_t, g) = p(a_{t+1:T} \mid h_t, g)$, and $f(h_t) = \log P(a_{1:T} \mid h_0, g)$. Rewriting the above equation: $f(h_t) = f_{\mathrm{acc}}(h_{t-1}) + \log\big(p(a_t \mid h_{t-1}, g) \cdot f_{\mathrm{heur}}(h_t, g)\big)$ (2). The additional $p(a_t \mid h_{t-1}, g)$ reflects that, unlike classical planning, which evaluates only feasible actions based on preconditions, LMs assign probabilities to each action. Here, the next action is $a_t = \arg\max_{h_t} f(h_t)$.
2308.12682#19
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
19
Table 1: Training dataset of Code Llama and Code Llama - Python. We train Code Llama on 500B additional tokens and Code Llama - Python further on 100B tokens. Proprietary dataset. We use the instruction tuning dataset collected for Llama 2 and described in detail by Touvron et al. (2023b). Specifically, we use the version referred to in their paper as “RLHF V5”, collected through several stages of reinforcement learning from human feedback and human feedback annotation (see their Section 3 for more details). It combines thousands of Supervised Fine-Tuning and millions of Rejection Sampling examples. Each example consists of a multi-turn dialogue between a user and an assistant. For Rejection Sampling, the output was selected among several generations using a reward model. The final dataset contains both Helpfulness and Safety data. This enables Code Llama to inherit Llama 2’s instruction following and safety properties. Self-instruct. Our proprietary dataset contains few examples of code-related tasks. Collecting supervised data from human annotators or training from human feedback (Ouyang et al., 2022) is expensive for coding tasks as it requires input from professional developers. Instead of human feedback, we use execution feedback to select data to train our instruct model. We construct the self-instruction dataset following the recipe below, resulting in ∼14,000 question-tests-solution triplets:
2308.12950#19
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
19
We increase the input resolution of the visual encoder from 224 × 224 to 448 × 448, reducing the information loss caused by image down-sampling. In addition, we ablate window attention and global attention at higher resolutions of the vision transformer in Appendix E.3. We unlock the large language model and train the whole model. The training objective is the same as in the pre-training stage. # 3.3 Supervised Fine-tuning
2308.12966#19
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
20
Using classroom teaching as an example, based on cognitive structure and persona models, the intelligent agent A = {T, B} can play different roles in specific scenarios. The state of the classroom at time t is represented as $STA(t) = I(A_{tea}, A_{stu}, t)$ (6), where I represents the interaction process between the teacher and students, $A_{tea}$ represents the teacher, and $A_{stu}$ represents a set of students, denoted as $\{A_{stu_1}, A_{stu_2}, \ldots, A_{stu_n}\}$. When the lesson begins, the supervisory Agent $A_{sup}$ receives the teaching plan TP and the multi-stage teaching process TS decomposed by the teacher. $A_{sup}$ monitors the classroom, obtains the phase-transition signal, and decides whether to proceed to the next teaching phase or end the lesson. This can be represented as $SIG(t) = A_{sup}(TP + TS + STA(t))$ (7). With the help of $A_{sup}$, teachers can teach more effectively, and the interaction between teachers and students is more targeted, without deviating from the topic. During the questioning session, the supervisory Agent selects the most suitable student to ask, based on a cognitive analysis of each student's willingness to speak. The supervisory Agent also monitors the persona status of the intelligent
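A minimal sketch of the supervisory-agent signal $SIG(t) = A_{sup}(TP + TS + STA(t))$ described above. All names (`ClassroomState`, `supervisor_llm`, the CONTINUE/NEXT/END signal values) are hypothetical stand-ins, not the CGMI implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ClassroomState:
    transcript: List[str]  # utterances exchanged so far, i.e. STA(t)


def supervisor_signal(teaching_plan: str, phases: List[str], phase_idx: int,
                      state: ClassroomState, supervisor_llm) -> str:
    """Emit a phase-transition signal from the teaching plan TP, the
    multi-stage process TS, and the current classroom state STA(t)."""
    prompt = (
        f"Teaching plan: {teaching_plan}\n"
        f"Current phase ({phase_idx + 1}/{len(phases)}): {phases[phase_idx]}\n"
        "Classroom so far:\n" + "\n".join(state.transcript) +
        "\nDecide: CONTINUE the current phase, NEXT phase, or END the lesson."
    )
    # supervisor_llm is assumed to return one of "CONTINUE" | "NEXT" | "END"
    return supervisor_llm(prompt)
```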
2308.12503#20
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
20
Evaluation Metrics To ensure a rigorous and accurate evaluation of the performance of our proposed decision-making approach, we adopt two evaluation metrics prescribed by ToolBench: • Pass Rate (Qin et al., 2023c) assesses the ability of LLMs to successfully accomplish complex real-world tasks. It calculates the proportion of instructions that an LLM can complete within a pre-defined number of decision steps. • Preference Rank measures the quality of the decision sequences generated by the LLMs. This evaluation involves comparing the decision sequences produced by different methods for a given instruction, based on the ToolEval tool (Qin et al., 2023c) to enable a fair comparison. Subsequently, we utilize PRP (Qin et al., 2023d) to rank all decision sequences. To ensure robustness, we perform the ranking process 10 times with different random seeds and report the average rank for each method. As CoT@3, Reflexion, BFS, DFS, and DFSDT each provide three decision sequences in the end, we consider a user instruction accomplished successfully if any of the three decision sequences leads to the "Finish" call with a final answer. For the Preference Rank metric, we report the average rank of the best decision sequences generated by these methods.
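A small sketch of how the two metrics could be computed, under assumptions; this is not ToolBench's implementation, and the input formats are illustrative only.

```python
def pass_rate(results):
    """results: one list per instruction, each holding booleans for its decision
    sequences (True = the sequence ended in a 'Finish' call with a final answer)."""
    solved = sum(any(seqs) for seqs in results)
    return solved / len(results)


def average_preference_rank(ranks_per_seed):
    """ranks_per_seed: the rank assigned to one method, once per random seed."""
    return sum(ranks_per_seed) / len(ranks_per_seed)


# Example: 3 instructions, 3 decision sequences each -> Pass Rate 2/3
print(pass_rate([[False, True, False], [False, False, False], [True, True, False]]))
```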
2308.12519#20
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
20
Technically, the LM generates actions wherein each action is a sequence of tokens up to the end-of-sequence (EOS) token. For each action step $a = (w_1, \ldots, w_n)$ composed of tokens $w_i$, the LM computes the action probability as $p(a) = p(w_1) \prod_{i=2}^{n} p(w_i \mid w_{1:i-1})$. Planning LM proposed a greedy decoding strategy wherein the LM greedily picks the next token, henceforth referred to as the Greedy-Token baseline (Figure 2, Left). The generated action is then appended to the history $h_t = (h_{t-1}, a_t)$, and the generation process repeats until a "done task" action is generated. Subsequent works (Lin et al.) have investigated beam search over tokens. However, we are mainly interested in searching on the level of actions and not tokens.
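A short sketch of scoring a candidate action from its per-token log-probabilities and picking the highest-scoring one; the candidate list and its log-probs are made-up values for illustration, not output of any particular model.

```python
def action_log_prob(token_logprobs):
    """log p(a) = sum_i log p(w_i | w_1:i-1) over the tokens of one action."""
    return sum(token_logprobs)


def greedy_next_action(candidates):
    """candidates: list of (action_text, token_logprobs) pairs from the LM."""
    return max(candidates, key=lambda c: action_log_prob(c[1]))[0]


cands = [("pick up the key", [-0.2, -0.1, -0.3, -0.4]),   # total log-prob -1.0
         ("open the door", [-0.5, -0.2, -0.6])]            # total log-prob -1.3
print(greedy_next_action(cands))  # -> "pick up the key"
```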
2308.12682#20
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
20
1. Generate 62,000 interview-style programming questions by prompting (Figure 10) Llama 2 70B. 2. De-duplicate the set of questions by removing exact duplicates, resulting in ∼52,000 questions. 3. For each of these questions: (a) Generate unit tests by prompting Code Llama 7B (Figure 11) (b) Generate ten Python solutions by prompting Code Llama 7B (Figure 12) (c) Run the unit tests on the ten solutions. Add the first solution that passes the tests (along with its corresponding question and tests) to the self-instruct dataset. We use Code Llama 7B to generate the tests and Python solutions, as we found it more efficient than generating fewer solutions per question with the 34B model for the same compute budget. Rehearsal. In order to prevent the model from regressing on general coding and language understanding capabilities, Code Llama - Instruct is also trained with a small proportion of data from the code dataset (6%) and our natural language dataset (2%). # 2.6 Training details
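An illustrative sketch of the execution-feedback filtering step in the recipe above: keep the first generated solution that passes its unit tests. This is not Meta's released pipeline; `question`, `tests`, and `solutions` are assumed to come from the prompting steps described, and tests are assumed to fail via a non-zero exit code.

```python
import os
import subprocess
import tempfile


def first_passing_solution(question, tests, solutions, timeout=10):
    """Return the first (question, tests, solution) triplet whose tests pass."""
    for solution in solutions:
        program = solution + "\n\n" + tests  # tests call into the solution's functions
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(program)
            path = f.name
        try:
            result = subprocess.run(["python", path], capture_output=True, timeout=timeout)
            if result.returncode == 0:
                return {"question": question, "tests": tests, "solution": solution}
        except subprocess.TimeoutExpired:
            pass  # treat a hang as a failed solution
        finally:
            os.remove(path)
    return None  # no solution passed; the question is dropped from the dataset
```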
2308.12950#20
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
20
# 3.3 Supervised Fine-tuning During this stage, we finetuned the Qwen-VL pre-trained model through instruction fine-tuning to enhance its instruction following and dialogue capabilities, resulting in the interactive Qwen-VL-Chat model. The multi-modal instruction tuning data primarily comes from caption data or dialogue data generated through LLM self-instruction, which often only addresses single-image dialogue and reasoning and is limited to image content comprehension. We construct an additional set of dialogue data through manual annotation, model generation, and strategy concatenation to incorporate localization and multi-image comprehension abilities into the Qwen-VL model. We confirm that the model effectively transfers these capabilities to a wider range of languages and question types. Additionally, we mix multi-modal and pure text dialogue data during training to ensure the model’s universality in dialogue capabilities. The instruction tuning data amounts to 350k. In this stage, we freeze the visual encoder and optimize the language model and adapter module. We demonstrate the data format of this stage in Appendix B.2. # 4 Evaluation In this section, we conduct an overall evaluation on various multi-modal tasks to comprehensively assess our models’ visual understanding ability. In the following, Qwen-VL denotes the model after the multi-task training, and Qwen-VL-Chat denotes the model after supervised fine-tuning (SFT) stage.
2308.12966#20
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
21
Class process: Mrs. Smith: Quadratic equations can be found in various fields, from ... Emily: I'm really nervous about this lesson on quadratic equations. Mrs. Smith: Emily, but please know that I am here to... Course-ONE Reflection: ... Student interests. I need more encouragement for my students; Emily gets nervous when facing math. Mrs. Smith utilized ... Plan: - Using interesting forms and gamified teaching to stimulate students' interest in learning and reduce resistance.... Course-TWO Class process: Mrs. Smith: ... Can anyone explain how the coefficients 'b' and 'c' influence the quadratic function's graph?... Emily: The coefficient 'b' in the quadratic function affects ... Mrs. Smith: Excellent explanation, Emily. I'm glad to see that you're no longer afraid of mathematics! You... Reflection: Mrs. Smith effectively engages and motivates students in learning about quadratic functions... Plan: - ...involve changing different parameters of the quadratic function (such as coefficients and constants)... Course-THREE Class process: Mrs. Smith: ... Remember, learning is a journey that is best enjoyed together. Let's embark on this
2308.12503#21
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
21
Implementation Details We build upon ChatGPT (gpt-3.5-turbo-0613-16k), a prominent large language model, to implement our approach. Our approach involves conducting a decision-exploration process 20 times and finally selecting the decision sequence with the highest Elo score as the final decision. For Elo-based Utility Construction, the initial Elo score of a decision step is set to 0.0 and the Elo coefficient r is set to 173.72 according to the vanilla Elo rating system (Elo, 1967). The Elo score of d̂ in Equation 5 is set to 0.0. K in Equation 3 is set to 50. To manage the computational cost of ChatGPT API calls, we set a limit of 100 ChatGPT API calls for a decision-searching process. Furthermore, we impose a maximum limit of 12 steps for each decision sequence due to the cost of ChatGPT API calls.
Table (Pass Rate, %): CoT 16.60, CoT@3 31.20, Reflexion 26.60, BFS 38.00, DFS 45.58, DFSDT 50.20, RADAGENT 61.92.
Table (Pref. Rank): CoT@3 3.45, Reflexion 3.48, BFS 3.25, DFSDT 2.91, RADAGENT (Rand. Select) 3.24, RADAGENT (Elo Select) 2.19.
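A minimal sketch of a pairwise Elo update using the constants stated above (initial score 0.0, r = 173.72, K = 50). The exact update rule in the paper's Equation 3 is not reproduced here; this is the standard Elo form, given as an assumption, where r = 173.72 makes the logistic expectation equivalent to the usual 1/(1 + 10^((b-a)/400)).

```python
import math

R = 173.72  # Elo coefficient (vanilla Elo rating system)
K = 50      # update step size


def expected_win(elo_a: float, elo_b: float, r: float = R) -> float:
    """Expected probability that decision A beats decision B in a pairwise comparison."""
    return 1.0 / (1.0 + math.exp((elo_b - elo_a) / r))


def update(elo_a: float, elo_b: float, a_won: bool):
    """Update both Elo scores after one LLM-judged pairwise comparison."""
    ea = expected_win(elo_a, elo_b)
    sa = 1.0 if a_won else 0.0
    return elo_a + K * (sa - ea), elo_b + K * ((1 - sa) - (1 - ea))


# One comparison where decision A is preferred: both start at 0.0 -> (25.0, -25.0)
print(update(0.0, 0.0, a_won=True))
```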
2308.12519#21
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
21
5 SayCanPay Inference The core concept of SayCanPay is to guide LMs in generating feasible and cost-effective plans. The process unfolds in three key steps: (1) Say: At each step t, the LM generates the top-m candidate actions with associated probabilities $\{p(a^i_t \mid h_{t-1}, g)\}_{i=1}^{m}$. This generation employs a beam search over tokens. (2) Can: Next, a trained domain-specific model weighs these candidate actions on their feasibility, mirroring precondition evaluation. (3) Pay: Finally, a trained domain-specific estimator weighs the candidate actions according to their estimated payoff. The probabilities from these three components are then combined to select the next action. An overview of SayCanPay is provided in Figure 1. In what follows, we instantiate the LM planning problem with two decoding strategies (or search algorithms that select the next action(s)): Greedy Action (§ 5.1) and Beam Action (§ 5.2). Each strategy is explored using three distinct decoding scores (i.e., the score used by the search algorithm to select the next action): Say, SayCan, SayCanPay. We then elaborate on the training of the Can and Pay models (§ 6). # 5.1 Greedy-Action
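A sketch of one greedy-action step combining the Say, Can, and Pay components described above (and formalized in the following equations). The model interfaces `lm_topm`, `can_model`, and `pay_model` are assumptions, not the authors' implementation.

```python
import math


def next_action(history, goal, lm_topm, can_model, pay_model, mode="saycanpay"):
    """lm_topm(history, goal) -> list of (action, p_say); can_model / pay_model
    return probabilities in [0, 1] for feasibility and estimated payoff."""
    best_action, best_score = None, -math.inf
    for action, p_say in lm_topm(history, goal):
        p_can = can_model(history, action, goal) if mode in ("saycan", "saycanpay") else 1.0
        p_pay = pay_model(history, action, goal) if mode == "saycanpay" else 1.0
        score = math.log(max(p_say * p_can * p_pay, 1e-12))  # combined decoding score
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```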
2308.12682#21
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
21
# 2.6 Training details Optimization. Our optimizer is AdamW (Loshchilov & Hutter, 2019) with β1 and β2 values of 0.9 and 0.95. We use a cosine schedule with 1000 warm-up steps, and set the final learning rate to be 1/30th of the peak learning rate. We use a batch size of 4M tokens, presented as sequences of 4,096 tokens each. Despite the standard practice of using lower learning rates in fine-tuning stages than in pre-training stages,
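A hedged sketch of an AdamW setup with a warm-up plus cosine schedule matching the stated hyper-parameters (betas 0.9/0.95, 1000 warm-up steps, final learning rate at 1/30 of the peak). The peak learning rate, total step count, and the toy model are placeholders, not values from the paper.

```python
import math
import torch

peak_lr, total_steps, warmup_steps, min_ratio = 3e-4, 10_000, 1_000, 1 / 30

model = torch.nn.Linear(16, 16)  # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr, betas=(0.9, 0.95))


def lr_lambda(step: int) -> float:
    if step < warmup_steps:                      # linear warm-up to the peak LR
        return step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))
    return min_ratio + (1 - min_ratio) * cosine  # decay from peak down to peak/30


scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```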
2308.12950#21
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
21
Table 9 provides a detailed summary of the evaluation benchmarks used and the corresponding metrics. # Image Caption and General Visual Question Answering Image captioning and general visual question answering (VQA) are two conventional tasks for vision-language models. Specifically, image captioning requires the model to generate a description for a given image, and general VQA requires the model to generate an answer for a given image-question pair. Footnotes: (1) https://digitalcorpora.org/corpora/file-corpora/cc-main-2021-31-pdf-untruncated (2) This task is to generate noun/phrase grounded captions (Peng et al., 2023). # Table 4: Results on Image Captioning and General VQA.
2308.12966#21
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
22
as coefficients and constants)... Course-THREE Class process: Mrs. Smith: ... Remember, learning is a journey that is best enjoyed together. Let's embark on this exciting... John: ...Could you provide an example for us ... Reflection: ...Sometimes students may not understand and they may need more examples... Plan: - ... their understanding and application of quadratic functions ... using the example of buying apples...
2308.12503#22
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12682
22
# 5.1 Greedy-Action In this decoding strategy, we maintain a single action sequence and, at each step, greedily choose the next best action based on a specific decoding score. This is akin to performing Greedy Best-First Search with z1 = 0 and z2 = 1. The decoding score for each candidate action $a^i_t$ is given as $f(h^i_t) = \log\big(p(a^i_t \mid h_{t-1}, g) \cdot f_{\mathrm{heur}}(h^i_t, g)\big)$, where $h^i_t = (h_{t-1}, a^i_t)$ denotes the current history with the ith candidate action. As shown in Figure 2, this approach can be viewed as being "greedy" with respect to actions while using "beams" over the tokens. Now, we explore three variations of the strategy based on how the decoding score is computed. • Say: In this decoding score, we set the estimated payoff $f_{\mathrm{heur}}(h^i_t, g) = 1\ \forall i \in \{1, \ldots, m\}$. Hence, the action is selected solely based on the LM generation probability, without considering feasibility or payoff: $f(h^i_t) = \log\big(p(a^i_t \mid h_{t-1}, g)\big)$ (3)
2308.12682#22
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
22
Model (Size): HumanEval pass@1 / pass@10 / pass@100; MBPP pass@1 / pass@10 / pass@100
code-cushman-001 (12B): 33.5% / - / -; 45.9% / - / -
GPT-3.5 (ChatGPT): 48.1% / - / -; 52.2% / - / -
GPT-4: 67.0% / - / -; - / - / -
PaLM (540B): 26.2% / - / -; 36.8% / - / -
PaLM-Coder (540B): 35.9% / - / 88.4%; 47.0% / - / -
PaLM 2-S: 37.6% / - / 88.4%; 50.0% / - / -
StarCoder Base (15.5B): 30.4% / - / -; 49.0% / - / -
StarCoder Python (15.5B): 33.6% / - / -; 52.7% / - / -
StarCoder Prompted (15.5B): 40.8% / - / -; 49.5% / - / -
Llama 2 (7B): 12.2% / 25.2% / 44.4%; 20.8% / 41.8% / 65.5%
Llama 2 (13B): 20.1% / 34.8% / 61.2%; 27.6% / 48.1% / 69.5%
Llama 2 (34B): 22.6% / 47.0% / 79.5%; 33.8% / 56.9% / 77.6%
Llama 2 (70B): 30.5% / 59.4% / 87.0%; 45.4% / 66.2% / 83.1%
2308.12950#22
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
22
Model Type: Generalist Models. Columns: Image Caption (Nocaps 0-shot, Flickr30K 0-shot); General VQA (VQAv2, OKVQA, GQA, SciQA-Img 0-shot, VizWiz 0-shot)
Flamingo-9B: - / 61.5 / 51.8 / 44.7 / - / - / 28.8
Flamingo-80B: - / 67.2 / 56.3 / 50.6 / - / - / 31.6
Unified-IO-XL: 100.0 / - / 77.9 / 54.0 / - / - / -
Kosmos-1: - / 67.1 / 51.0 / - / - / - / 29.2
Kosmos-2: - / 80.5 / 51.1 / - / - / - / -
BLIP-2 (Vicuna-13B): 103.9 / 71.6 / 65.0 / 45.9 / 32.3 / 61.0 / 19.6
InstructBLIP (Vicuna-13B): 121.9 / 82.8 / - / - / 49.5 / 63.1 / 33.4
Shikra (Vicuna-13B): - / 73.9 / 77.36 / 47.16 / - / - / -
Qwen-VL (Qwen-7B): 121.4 / 85.8 / 79.5 / 58.6 / 59.3 / 67.1 / 35.2
Qwen-VL-Chat: 120.2 / 81.0 / 78.2 / 56.6 / 57.5 / 68.2 / …
2308.12966#22
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
23
Figure 4: Teacher Mrs. Smith's classroom experience and her reflection and planning in the virtual classroom. The red, green, and blue characters in the picture represent the events discovered by the teacher in three different classes. The teacher reflects and plans on these events, which serve as a focus in the subsequent teaching process. agents in real time and maintains it if there is any deviation. Users can also operate the supervisory Agent to adjust the classroom process according to their needs. # Experiments In this section, we first present the "classroom teaching scenario" reconstructed using the CGMI framework and analyze the teaching behaviors during the class. Subsequently, through comparative experiments, we showcase the behavioral advantages of agents equipped with human intrinsic traits (such as personality and cognitive structures). Lastly, we analyze the significance of generic intelligent agents in enhancing the interaction logic of role-specific agents. In our experiment, we adopted OpenAI's gpt-3.5-turbo-16k model (OpenAI 2022), instantiating one teacher, five students, and four generic intelligent agents. Each agent was given a unique role setting and task objective (see appendix).
2308.12503#23
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
23
To validate the effectiveness of our proposed RADAGENT approach, we first study whether our approach can accomplish more complex tasks. The results are shown in Table 1, from which we can observe that: (1) CoT solves only 16.60% of instructions when facing complex tasks. That is because CoT explores only one decision sequence, leading to inadequate exploration of the whole solution space. In particular, a failed API call may impact the following decisions, causing the model to be trapped in a faulty loop. CoT@3 exhibits a 14.6% gain over CoT, indicating that an increasing number of decision explorations is more likely to reach a feasible solution. (2) Compared with CoT@3, Reflexion, despite introducing self-reflection on decision making, does not yield any improvement and even results in inferior performance. This outcome suggests that, when faced with complex instructions, mere self-reflection may not be sufficient to provide informative guidance for LLMs to search for a feasible solution. (3) All tree-based methods (BFS, DFS and DFSDT) yield a lower Pass Rate than RADAGENT, which indicates that without task-specific
2308.12519#23
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
23
$f(h_t^i) = \log p(a_t^i \mid h_{t-1}, g)$   (3)

• SayCan: Here, the action feasibility is also considered. Let $\sigma_t = (a_t, \mathrm{pre}(a_t))$, where $\mathrm{pre}(a_t)$ denotes the preconditions of $a_t$. The "can" probability2 is denoted by $p(\mathrm{pre}(a_t) \mid h_{t-1}, g)$. Again, $f_{heur}(h_t^i, g) = 1 \;\forall i$.

$f(h_t^i) = \log p(\sigma_t^i \mid h_{t-1}, g) = \log\big( p(a_t^i \mid h_{t-1}, g) \cdot p(\mathrm{pre}(a_t^i) \mid h_{t-1}, g) \big)$   (4)

• SayCanPay: This decoding score accounts for the estimated payoff in addition to the abovementioned scores. Hence, the best action is selected based on a combined score of Say, Can, and Pay scores.

$f(h_t^i) = \log\big( p(a_t^i \mid h_{t-1}, g) \cdot p(\mathrm{pre}(a_t^i) \mid h_{t-1}, g) \cdot f_{heur}(h_t^i, g) \big)$   (5)
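To make the three scores concrete, below is a minimal sketch of greedy-action selection under the Say, SayCan, and SayCanPay criteria. The `say_prob`, `can_prob`, and `pay_score` callables are hypothetical stand-ins for the paper's LM, Can, and Pay models, not their actual interfaces.

```python
import math
from typing import Callable, List

def greedy_action(
    candidates: List[str],                                # m candidate actions proposed by the LM
    history: List[str],                                   # decision history h_{t-1}
    goal: str,
    say_prob: Callable[[str, List[str], str], float],     # p(a_t | h_{t-1}, g)
    can_prob: Callable[[str, List[str], str], float],     # p(pre(a_t) | h_{t-1}, g)
    pay_score: Callable[[str, List[str], str], float],    # f_heur(h_t, g), estimated payoff
    mode: str = "saycanpay",
) -> str:
    """Pick the next action under the Say, SayCan, or SayCanPay decoding score."""
    def score(action: str) -> float:
        s = say_prob(action, history, goal)
        if mode == "say":
            return math.log(s)                            # Eq. (3)
        c = can_prob(action, history, goal)
        if mode == "saycan":
            return math.log(s * c)                        # Eq. (4); Pay term fixed to 1
        return math.log(s * c * pay_score(action, history, goal))   # Eq. (5)
    return max(candidates, key=score)
```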
2308.12682#23
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
23
41.8% 61.2% 27.6% 48.1% 79.5% 33.8% 56.9% 87.0% 45.4% 66.2% 65.5% 69.5% 77.6% 83.1% Code Llama 7B 33.5% 59.6% 13B 36.0% 69.4% 34B 48.8% 76.8% 70B 53.0% 84.6% 85.9% 41.4% 66.7% 89.8% 47.0% 71.7% 93.0% 55.0% 76.2% 96.2% 62.4% 81.1% 82.5% 87.1% 86.6% 91.9% Code Llama - Instruct Unnatural Code Llama 7B 34.8% 64.3% 13B 42.7% 71.6% 34B 41.5% 77.2% 70B 67.8% 90.3% 34B 62.2% 85.2% 88.1% 44.4% 65.4% 91.6% 49.4% 71.2% 93.5% 57.0% 74.6% 97.3% 62.2% 79.6% 95.4% 61.2% 76.6% 76.8% 84.1% 85.4%
2308.12950#23
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12503
24
Categories B1.Accept feeling B2.Praises or encourages B3.Accept ideas B4.Asks questions B5.Lecturing B6.Gives directions B7.Criticising B8.Pupil talk response B9.Pupil talk Initiation C1 0.35% 19.08% 12.99% 11.98% 6.39% 3.89% 1.77% 1.03% 22.97% 33.61% 35.61% 6.36% 7.01% 1.24% 5.65% 28.62% 20.41% 21.56% 11.31% 17.32% 17.07% C2 0% C3 0.30% 5.69% 1.50% 5.09% 1.20% Table 1: Analysis results based on FIAS These sessions focused on the following topics: C1: Con- cept of the Quadratic Equation, C2: Methods for Solving the Quadratic Equation, and C3: Applications of the Quadratic Equation. # Analysis of Teaching Behavior
2308.12503#24
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
24
(3) All tree-based methods (BFS, DFS and DFSDT) yield a lower Pass Rate than RADAGENT, which indicates that without task-specific expert knowledge, the tree-based methods cannot work effectively to accomplish diverse tasks. (4) RADAGENT achieves superior performance over all baselines. Compared with the best baseline method, DFSDT, RADAGENT exhibits a substantial 10% improvement in Pass Rate. Such a significant improvement is attributed to the capability of RADAGENT to autonomously make decisions to accomplish complex instructions via self-judgment.
2308.12519#24
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
24
5.2 Beam-Action In heuristic planning, multiple potential plans (i.e. action sequences) are simultaneously maintained and iteratively expanded until the goal is achieved. To simulate this behavior, we propose to manage k action sequences. It works as follows – each sequence is expanded with m candidate actions (where m ≥ k) from the LM, resulting in a total of k×m sequences. Then, top-k sequences are retained using a specific decoding score accumulated over the sequence, as shown below. Once all k-beams have terminated, we select the sequence with the highest (length-normalized)3 accumulated score. To avoid repetition, we only show the SayCanPay version. The rest can be similarly formulated.

$f_{acc}(h_t^{ij}) = \operatorname*{top\text{-}k}_{i,j} \Big[ f_{acc}(h_{t-1}^{i}) + \log\big( p(\sigma_t^{j} \mid h_{t-1}^{i}, g) \cdot f_{heur}(h_t^{ij}, g) \big) \Big]$

Here, $i \in \{1,\dots,k\}$, $j \in \{1,\dots,m\}$, $k \le m$. The updated history $h_t^{ij} = (h_{t-1}^{i}, a_t^{j})$ is obtained by adding the action $a_t^{j}$ to the $i$-th beam history $h_{t-1}^{i}$. The outcome becomes the value for $f_{acc}(h_t)$ for the next iteration. Note that setting $k = 1$ results in Greedy-Action decoding.
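The loop below is a small illustrative sketch of this Beam-Action procedure, not the authors' implementation; `propose_actions`, `step_score` (the log of the combined Say·Can·Pay term), and `is_terminal` are assumed helper callables.

```python
import heapq
from typing import Callable, List, Tuple

History = Tuple[str, ...]

def beam_action_decode(
    init_history: History,
    goal: str,
    propose_actions: Callable[[History, str], List[str]],   # m candidate actions per beam
    step_score: Callable[[str, History, str], float],       # log(Say * Can * Pay) for one step
    is_terminal: Callable[[History, str], bool],
    k: int = 3,
    max_steps: int = 20,
) -> History:
    # Each beam is (accumulated score f_acc, history h); k = 1 reduces to Greedy-Action decoding.
    beams: List[Tuple[float, History]] = [(0.0, init_history)]
    finished: List[Tuple[float, History]] = []
    for _ in range(max_steps):
        expansions: List[Tuple[float, History]] = []
        for f_acc, hist in beams:
            if is_terminal(hist, goal):
                finished.append((f_acc / max(len(hist), 1), hist))      # length-normalised score
            else:
                for action in propose_actions(hist, goal):              # expand each beam with m actions
                    expansions.append((f_acc + step_score(action, hist, goal), hist + (action,)))
        if not expansions:
            break
        beams = heapq.nlargest(k, expansions, key=lambda e: e[0])       # keep top-k of the k*m sequences
    else:
        # ran out of steps: treat surviving beams as finished
        finished.extend((f / max(len(h), 1), h) for f, h in beams)
    return max(finished, key=lambda e: e[0])[1] if finished else init_history
```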
2308.12682#24
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12966
24
For the image caption task, we choose Nocaps (Agrawal et al., 2019) and Flickr30K (Young et al., 2014) as benchmarks and report the CIDEr score (Vedantam et al., 2015) as the metric. We utilize greedy search for caption generation with a prompt of "Descripe the image in English:". For general VQA, we utilize five benchmarks including VQAv2 (Goyal et al., 2017), OKVQA (Marino et al., 2019), GQA (Hudson and Manning, 2019), ScienceQA (Image Set) (Lu et al., 2022b) and VizWiz VQA (Gurari et al., 2018). For VQAv2, OKVQA, GQA and VizWiz VQA, we employ open-ended answer generation with a greedy decoding strategy and a prompt of "{question} Answer:", without any constraint on the model's output space. However, for ScienceQA, we constrain the model's output to the possible options (instead of open-ended generation), choose the option with the highest confidence as the model's prediction, and report the Top-1 accuracy.
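The constrained-option protocol described for ScienceQA amounts to scoring each candidate option and taking the most confident one. The `option_log_likelihood` callable below is a hypothetical stand-in for the model's scoring interface, not part of the Qwen-VL code.

```python
from typing import Callable, Dict, List

def pick_constrained_answer(
    question: str,
    options: List[str],
    option_log_likelihood: Callable[[str, str], float],   # log p(option | prompt), model-specific
) -> str:
    """Restrict the output space to the given options and return the most confident one."""
    prompt = f"{question} Answer:"
    scores: Dict[str, float] = {opt: option_log_likelihood(prompt, opt) for opt in options}
    return max(scores, key=scores.get)
```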
2308.12966#24
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
25
# Analysis of Teaching Behavior We employed the Flanders Interaction Analysis System (FIAS) to examine interactive behaviors between teachers and students across three virtual classroom sessions. We hired two trained experts to encode the teaching behaviors. The two encoders worked independently, encoding each sentence once and sequentially constructing a behavior sequence, ultimately reaching consistent evaluation results. Table 1 shows the proportion of each interaction behavior in the course. Overall, the variety of interactions in the virtual classroom is rich and consistent with actual teaching, validating the effectiveness of CGMI by demonstrating its ability to effectively organize interactions and collaboration among multiple agents.
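FIAS analysis of this kind reduces to counting how often each coded category appears in the behavior sequence. A small, generic sketch follows; the category codes B1-B9 mirror the paper's Table 1, and the toy sequence is invented purely for illustration.

```python
from collections import Counter
from typing import Dict, List

TEACHER_INDIRECT = {"B1", "B2", "B3", "B4"}   # accept feeling, praise, accept ideas, ask questions
TEACHER_DIRECT = {"B5", "B6", "B7"}           # lecturing, gives directions, criticising

def fias_proportions(coded_sequence: List[str]) -> Dict[str, float]:
    """Proportion of each FIAS category (B1-B9) in a coded utterance sequence."""
    counts = Counter(coded_sequence)
    total = sum(counts.values())
    return {cat: counts[cat] / total for cat in sorted(counts)}

# Invented toy sequence for illustration only.
sequence = ["B5", "B4", "B8", "B5", "B2", "B9", "B5", "B4", "B8"]
props = fias_proportions(sequence)
indirect = sum(v for c, v in props.items() if c in TEACHER_INDIRECT)
direct = sum(v for c, v in props.items() if c in TEACHER_DIRECT)
print(props, "indirect/direct ratio:", indirect / direct)
```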
2308.12503#25
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
25
5.3 SOLUTION RANKING (RQ2) In addition to validating the effectiveness of our approach in reaching feasible solutions, we investigate whether RADAGENT can further provide solutions of higher quality. We first develop a variant of our model, RADAGENT-Rand. Select, which selects the final decision sequence randomly, while RADAGENT-Elo Select selects it based on the highest Elo score. We then select representative baselines (CoT@3, Reflexion, BFS, DFS, DFSDT) and conduct a comprehensive comparison of the decision sequences produced by each method. To assess the quality of the decisions, we employ the Preference Rank metric based on the ToolEval algorithm (Qin et al., 2023c), which offers a reliable measure of the superiority of decision sequences. The experimental results are summarized in Table 2 and reveal that RADAGENT consistently achieves the top average rank among all comparable baselines. In particular, RADAGENT-Elo Select clearly outperforms RADAGENT-Rand. Select, confirming the capability of our Elo-based Utility Construction to assess each decision sequence and select superior solutions, resulting in high-quality decision making. 5.4 EFFICIENCY ANALYSIS (RQ3)
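Ranking decision sequences by Elo, as RADAGENT-Elo Select does, relies on the standard Elo update from pairwise comparison outcomes. The sketch below shows the textbook update rule; the K-factor, scale, and initial rating are assumptions, since the paper's exact settings are not given here.

```python
def elo_update(r_a: float, r_b: float, outcome_a: float,
               k: float = 32.0, scale: float = 400.0) -> tuple:
    """Update the Elo scores of two decision sequences after one pairwise comparison.

    outcome_a is 1.0 if A is judged better, 0.0 if B is judged better, 0.5 for a tie.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / scale))   # expected win probability of A
    new_a = r_a + k * (outcome_a - expected_a)
    new_b = r_b + k * ((1.0 - outcome_a) - (1.0 - expected_a))
    return new_a, new_b

# Example: two candidate solutions start at 1000; A wins the LLM pairwise judgment.
print(elo_update(1000.0, 1000.0, outcome_a=1.0))   # -> (1016.0, 984.0)
```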
2308.12519#25
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
25
$f_{acc}(h_t^{ij}) = \operatorname*{top\text{-}k}_{i,j} \Big[ f_{acc}(h_{t-1}^{i}) + \log\big( p(\sigma_t^{j} \mid h_{t-1}^{i}, g) \cdot f_{heur}(h_t^{ij}, g) \big) \Big]$

The updated history $h_t^{ij} = (h_{t-1}^{i}, a_t^{j})$ is obtained by adding the action $a_t^{j}$ to the $i$-th beam history $h_{t-1}^{i}$. The outcome becomes the value for $f_{acc}(h_t)$ for the next iteration. Note that setting $k = 1$ results in Greedy-Action decoding.

Our proposed decoding has similarities with Tree-of-Thoughts inference (Yao et al. 2023), which also maintains multiple reasoning paths to decide the next step. However, our method is specifically tailored for planning problems. It uses search and evaluation techniques akin to planning methods, making it more suited for such challenges. Now, we discuss the training details of the Can and Pay models.

# 6 Learning the Can and Pay Models

To train our domain-specific Can and Pay models, we collect N expert trajectories $E = \{\tau_i\}_{i=1}^{N}$ from each environment using an oracle planner, where $\tau_i = (o_0, g, a_1, a_2, \dots, a_T, r)$. Note, $r = 1$ for all expert trajectories.
2308.12682#25
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
25
Table 2: Code Llama pass@ scores on HumanEval and MBPP. The pass@1 scores of our models are computed with greedy decoding. The pass@10 and pass@100 scores are computed with nucleus sampling with p=0.95 and temperature 0.8 following our findings from Figure 6. Models are evaluated in zero-shot on HumanEval and 3-shot on MBPP. The instruct models are trained to be safe and aligned from the base Code Llama models. Results for other models as provided by Li et al. (2023) (code-cushman-001, StarCoder), OpenAI (2023) (GPT-3.5, GPT-4), and Chowdhery et al. (2022); Anil et al. (2023) (PaLM).

we obtained best results when retaining the original learning rate of the Llama 2 base model. We carry these findings to the 13B, 34B and 70B models, and set their learning rates to 3e−4, 1.5e−4, and 1.5e−4 respectively. For Python fine-tuning, we set the initial learning rate to 1e−4 instead. For Code Llama - Instruct, we train with a batch size of 524,288 tokens and on approx. 5B tokens in total.
2308.12950#25
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
25
The overall performance on image caption and general VQA tasks is reported in Table 4. As the results show, our Qwen-VL and Qwen-VL-Chat both achieve clearly better results than previous generalist models on both tasks. Specifically, on the zero-shot image caption task, Qwen-VL achieves state-of-the-art performance (i.e., 85.8 CIDEr score) on the Flickr30K karpathy-test split, even outperforming previous generalist models with far more parameters (e.g., Flamingo-80B with 80B parameters). On general VQA benchmarks, our models also exhibit distinct advantages compared to others. On the VQAv2, OKVQA and GQA benchmarks, Qwen-VL achieves 79.5, 58.6 and 59.3 accuracy respectively, surpassing recently proposed LVLMs by a large margin. It is worth noting that Qwen-VL also shows strong zero-shot performance on the ScienceQA and VizWiz datasets. # 4.2 Text-oriented Visual Question Answering
2308.12966#25
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
26
According to the results in Table 1, teacher behaviors (B1, B2, B3, B4, B5, B6, B7) made up an average of 61.23% of the discourse in these mathematics sessions. In contrast, stu-

[Figure 5 residue: example student utterances (Emily, Ryan, Samantha) in the first half of C1, comparing agents with and without an assigned personality.]
2308.12503#26
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
26
5.4 EFFICIENCY ANALYSIS (RQ3) We further conducted analyses to evaluate the efficiency of our proposed RADAGENT. Since all methods rely on ChatGPT API calls, an inefficient decision-making method involves more API calls and thus higher cost. We therefore conducted experiments with varying ChatGPT API call limits, ranging from 30 to 300, and measured the Pass Rate of each method under these limits. The experimental results are shown in Figure 2. These results show that the tree-based baselines (BFS, DFS, DFSDT) heavily rely on a large number of ChatGPT API calls to achieve a high Pass Rate. Once the number of API calls is limited, their performance even cannot

[Figure 2: Efficiency experimental results on various API call limitations (Pass Rate vs. API call limit). Figure 3: Performance on different data splits with varied Elo scores (vs. normalized Elo score).]
2308.12519#26
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
26
6.1 Can Model We model it as a classification problem, where the positive action (i.e., the action whose preconditions are satisfied) is assigned the highest probability from a set of one positive and a few negative actions. Specifically, we sample a batch of actions $[h_{t-1}, g, a_t, a_{\bar{t} \neq t}, \tilde{a}]_{1:B}$ from expert trajectories E. We then train a model $M^{can}$ with the aim of minimizing the InfoNCE loss (van den Oord, Li, and Vinyals 2019):

$\mathcal{L} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{M^{can}(h_{t-1}^{i}, g^{i}, a_{t}^{i})}{\sum_{a \in \{a_{t}^{i},\, a_{\bar{t}}^{i},\, \tilde{a}\}} M^{can}(h_{t-1}^{i}, g^{i}, a)}$

Here, B is the batch size, $a_t$ is the positive action from trajectory $\tau_i$ executed in the context of history $h_{t-1}$ with goal g, $a_{\bar{t} \neq t}$ is a negative action sampled from the same trajectory $\tau_i$, but at a different time-step $\bar{t}$, and $\tilde{a}$ is a negative

2The goal g is used to evaluate the preconditions of "done task". 3Since different beams can have different sequence lengths.
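A minimal PyTorch-style sketch of this InfoNCE objective follows; it assumes `m_can` maps batched (history, goal, action) triples to unnormalised scores and treats those scores as logits, so the ratio in the loss becomes a softmax. Encoding and batching details are assumptions, not the authors' code.

```python
import torch

def infonce_can_loss(m_can, histories, goals, pos_actions, neg_same_traj, neg_other):
    """InfoNCE loss: one positive action vs. two negatives per batch element.

    m_can(histories, goals, actions) -> tensor of shape (B,) with unnormalised scores.
    """
    pos = m_can(histories, goals, pos_actions)        # score for the positive action a_t
    neg1 = m_can(histories, goals, neg_same_traj)     # negative action from the same trajectory
    neg2 = m_can(histories, goals, neg_other)         # additional negative action
    logits = torch.stack([pos, neg1, neg2], dim=1)    # (B, 3); positive sits at index 0
    targets = torch.zeros(logits.size(0), dtype=torch.long)
    return torch.nn.functional.cross_entropy(logits, targets)
```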
2308.12682#26
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
26
Long context fine-tuning. For long context fine-tuning (LCFT), we use a learning rate of 2e−5, a sequence length of 16,384, and reset RoPE frequencies with a base value of θ = 10^6. The batch size is set to 2M tokens for model sizes 7B and 13B and to 1M tokens for model size 34B, respectively. Training lasts for 10,000 gradient steps by default. We observed instabilities in downstream performance for certain configurations, and hence set the number of gradient steps to 11,000 for the 34B models and to 3,000 for Code Llama 7B. # 3 Results We report results on a variety of benchmarks. First, we evaluate our models on popular description-to-code generation benchmarks for Python: HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and APPS (programming interviews and competitions, Hendrycks et al., 2021). Second, we evaluate our models on further programming languages using MultiPL-E (Cassano et al., 2023), namely on C++, Java, PHP, C#, TypeScript (TS), and Bash. We additionally report results on the GSM8K benchmark (Cobbe et al., 2021), which measures mathematical reasoning capabilities (Appendix D).
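Resetting the RoPE base is the core change in LCFT; the snippet below is a generic illustration of how the rotary inverse frequencies shift when the base is raised from the usual 10,000 to 10^6 (standard RoPE math, not Code Llama's training code; the head dimension of 128 is an assumption).

```python
import numpy as np

def rope_inv_freqs(head_dim: int, base: float) -> np.ndarray:
    """Inverse frequencies used by rotary position embeddings: base^(-2i/d) for i = 0..d/2-1."""
    return 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))

head_dim = 128                                   # assumed head dimension
default = rope_inv_freqs(head_dim, base=1e4)     # standard Llama 2 setting
lcft = rope_inv_freqs(head_dim, base=1e6)        # base reset used for long context fine-tuning

# A larger base slows the lowest frequencies, so positional phases wrap much later
# and positions remain distinguishable over 16k+ token contexts.
print(default.min(), lcft.min())                 # smallest rotation rates before/after the reset
```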
2308.12950#26
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
26
# 4.2 Text-oriented Visual Question Answering Text-oriented visual understanding has broad application prospects in real-world scenarios. We assess our models' ability on text-oriented visual question answering on several benchmarks including TextVQA (Sidorov et al., 2020), DocVQA (Mathew et al., 2021), ChartQA (Masry et al., 2022), AI2Diagram (Kembhavi et al., 2016), and OCR-VQA (Mishra et al., 2019). Similarly, the results are shown in Table 5. Compared to previous generalist models and recent LVLMs, our models show better performance on most benchmarks, frequently by a large margin. # 4.3 Refer Expression Comprehension We show our models' fine-grained image understanding and localization ability by evaluating on a set of refer expression comprehension benchmarks such as RefCOCO (Kazemzadeh et al., 2014), RefCOCOg (Mao et al., 2016), RefCOCO+ (Mao et al., 2016) and GRIT (Gupta et al., 2022). Specifically, the refer expression comprehension task requires the model to localize the target object under the guidance of a description. The

# Table 5: Results on Text-oriented VQA.
2308.12966#26
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
27
[Figure 5 residue: second half of C1; example utterances from Samantha and Emily contrasting agents with and without an assigned personality.]
2308.12503#27
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
27
Figure 2: Efficiency experimental results on various API call limitations. Figure 3: Performance on different data splits with varied Elo scores. surpass CoT. In contrast, our approach achieves the highest Pass Rate under all limitation settings, especially in low-resource settings. We attribute this to our method's use of Elo scores to dynamically select promising decision steps to explore while avoiding unpromising ones. Thus, our method demonstrates superior efficiency over the baselines, showing the practical advantages of our approach in real-world scenarios. 5.5 RELIABLE UTILITY ASSESSMENT OF ELO SCORE (RQ4)
2308.12519#27
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
27
2The goal g is used to evaluate the preconditions of "done task". 3Since different beams can have different sequence lengths.

Environment | Example Goal | Example Initial Observation | Plan Length | |A|
Ravens (Tower of Hanoi seq) | Move the gray disk in rod 2 | Blue disk on top of gray disk. Gray disk on top of green disk. Green disk in rod 1. The disks can be moved in rod 1, rod 2, rod 3. | 3.3 | 7.5
Ravens (Put Blocks in Bowls) | Put the yellow blocks in gray bowls | There is a gray bowl 1, gray bowl 2, gray bowl 3, yellow block 1, yellow block 2, yellow block 3, blue bowl 1, red block 1, green bowl 1, orange block 1. | 6.1 | 25
BabyAI (Pickup) | Pick up the ball | Room 1 has purple ball. Room 2 has yellow key, agent. Room 3 has red key. The door connecting Room 1 and Room 2 is locked. The door connecting Room 2 and Room 3 is locked. | 6.7 | 7.7
VirtualHome | Read book | (not specified) | 5.9 | 150

Table 2: Table displays tasks from each environment, average plan length, and average action space size |A|. For VirtualHome, we do not specify an initial observation since it is hard to describe a room environment. Here, the action space varies with episodes, depending for instance on the number of objects.
2308.12682#27
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
27
Next, we perform an extensive ablation study: (i) we study the impact of training from scratch or from a pretrained Llama 2 model in Section 3.4.1; (ii) we perform ablations for infilling and additional infilling specific benchmarks in Section 3.2; (iii) we study the effect of long context fine-tuning on perplexity, a synthetic retrieval task, and code completion with long source code files (Section 3.3); and (iv) we evaluate our instruction fine-tuning procedure, which includes self-instruct training by leveraging self-generated unit tests in Section 3.4.2. # 3.1 Code generation # 3.1.1 Python code generation We start by reporting results for Python code generation using the HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021) and APPS (Hendrycks et al., 2021) benchmarks. Results are summarized in Tables 2 and 3. The full list of results on HumanEval and MBPP, including models with and without infilling and long context fine-tuning, can be found in Table 10 in Appendix C. We provide zero-shot results of our instruction fine-tuned models on APPS in Table 15 with evaluation details in Appendix F. Our main findings are as follows.
2308.12950#27
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
27
# Table 5: Results on Text-oriented VQA.

| Model | TextVQA | DocVQA | ChartQA | AI2D | OCR-VQA |
| --- | --- | --- | --- | --- | --- |
| BLIP-2 (Vicuna-13B) | 42.4 | - | - | - | - |
| InstructBLIP (Vicuna-13B) | 50.7 | - | - | - | - |
| mPLUG-DocOwl (LLaMA-7B) | 52.6 | 62.2 | 57.4 | - | - |
| Pix2Struct-Large (1.3B) | - | 76.6 | 58.6 | 42.1 | 71.3 |
| Qwen-VL (Qwen-7B) | 63.8 | 65.1 | 65.7 | 62.3 | 75.7 |
| Qwen-VL-Chat | 61.5 | 62.6 | 66.3 | 57.7 | 70.5 |
| PALI-X-55B (Single-task fine-tuning, without OCR Pipeline) | 71.44 | 80.0 | 70.0 | 81.2 | 75.0 |

# Table 6: Results on Referring Expression Comprehension task.
2308.12966#27
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
28
Figure 5: The influence of personal traits on agent expression. Students' behavior (B8, B9) facilitated by teacher prompts represented an average of 23.53%. Notably, the ratio of indirect influence behaviors (B1, B2, B3, B4) to direct influence behaviors (B5, B6, B7) remained below 1. This suggests that the virtual classroom is dominated by teachers who have direct control over the overall classroom. Furthermore, student-initiated interactions constituted about 15.23%, suggesting that students remain engaged, deliberating, and responding to queries under the teacher’s guidance. # Intrinsic Characteristics of Intelligent Agents To assess the efficacy of the proposed cognitive architecture, we examined it through the lens of a teacher, Mrs. Smith, analyzing her classroom practices and her subsequent reflections and plans. As illustrated in Figure 4, we display part of her reflective and planning processes within a single lesson and across two different lessons. Our analysis sought to elucidate the influence of the cognitive structure on agents, emphasizing the model’s capacity for both reflection and planning. We analyzed the effectiveness of the algorithm from within and between classes.
2308.12503#28
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
28
To verify the effectiveness of our Elo-based Utility Construction in providing reliable utility assessments, we conducted a comprehensive analysis using the ToolBench dataset. As the Elo score serves as a metric to represent the utility of each decision, we seek to determine whether it is a reliable indicator of decision quality. To this end, we partitioned the ToolBench dataset into several subsets based on the Elo scores assigned to the decision sequences generated by RADAGENT. We first collected the Elo scores for all ToolBench data and then normalized them to scale within the range of 0 to 1. Next, we sorted the normalized Elo scores and divided them into 10 intervals, obtaining 10 subsets of ToolBench data accordingly. Subsequently, we calculated the Pass Rate for each method on these 10 subsets. Figure 3 illustrates the experimental results. A discernible trend is observed across all methods: the Pass Rate consistently increases with higher Elo scores. This clear positive correlation between the Elo score and the Pass Rate demonstrates the efficacy of the Elo-based Utility Construction in providing reliable assessments of decision quality. A higher Elo score indicates that the decision sequence is more likely to represent an accomplished solution to the instruction, whereas a lower Elo score suggests that the instruction may be more challenging and the corresponding decision sequence may not effectively solve it. 5.6 ERROR ANALYSIS (RQ5)
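The binning analysis described above is easy to reproduce given per-instruction Elo scores and pass/fail outcomes; the sketch below assumes equal-width intervals over the normalized scores and hypothetical input lists, so it illustrates the procedure rather than reproducing the paper's exact tooling.

```python
import numpy as np

def pass_rate_by_elo_bin(elo_scores, passed, n_bins=10):
    """Min-max normalize Elo scores to [0, 1], split them into n_bins intervals,
    and report the Pass Rate (fraction of solved instructions) per interval.

    elo_scores: per-instruction Elo score of the selected decision sequence
    passed:     per-instruction 0/1 outcome from the ToolEval judgment
    """
    elo = np.asarray(elo_scores, dtype=float)
    ok = np.asarray(passed, dtype=float)
    norm = (elo - elo.min()) / (elo.max() - elo.min() + 1e-12)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # digitize returns 1..n_bins+1; shift and clip so that norm == 1.0 lands in the last bin
    bins = np.clip(np.digitize(norm, edges) - 1, 0, n_bins - 1)
    return {b: float(ok[bins == b].mean()) for b in range(n_bins) if np.any(bins == b)}

# Toy usage with made-up scores and outcomes
print(pass_rate_by_elo_bin([1510, 1480, 1620, 1390, 1705, 1555], [1, 0, 1, 0, 1, 1]))
```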
2308.12519#28
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
28
| Decoding strategy | Decoding score | Ravens tower-of-hanoi (Vicuna) | Ravens tower-of-hanoi (Flan-T5) | Ravens put-blocks-in-bowls (Vicuna) | Ravens put-blocks-in-bowls (Flan-T5) | BabyAI pickup (Vicuna) | BabyAI pickup (Flan-T5) | VirtualHome (Vicuna) | VirtualHome (Flan-T5) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Greedy-Token | - | 45 | 30 | 30 | 96 | 59 | 0 | 0 | 0 |
| Greedy-Action | Say | 48 | 30 | 51 | 96 | 62 | 0 | 32 | 0 |
| Greedy-Action | SayCan | 48 | 39 | 52 | 96 | 81 | 30 | 49 | 30 |
| Greedy-Action | SayCanPay | 50 | 42 | 54 | 96 | 88 | 36 | 52 | 48 |
| Beam-Action | Say | 54 | 38 | 52 | 98 | 72 | 1 | 48 | 30 |
| Beam-Action | SayCan | 68 | 50 | 52 | 98 | 94 | 36 | 52 | 41 |
| Beam-Action | SayCanPay | 70 | 50 | 56 | 98 | 94 | 30 | 53 | 50 |

Table 3: Table shows the planning success (i.e. # plans out of 100 that reached the goal within limited steps) on the test split across different environments using Vicuna, Flan-T5 models. It can be observed that the best decoding strategy is Beam-Action and the best decoding score is SayCanPay.
2308.12682#28
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
28
The value of model specialization. We observe that model specialization yields a boost in code generation capabilities when comparing Llama 2 to Code Llama and Code Llama to Code Llama - Python. Llama 2 was trained on 2T tokens, and training on only 500B of extra tokens from a code-heavy dataset results in massive performance gains on both HumanEval and MBPP, to the point that Llama 2 70B is roughly equivalent to Code Llama 7B on Python coding benchmarks. Although Code Llama was trained on more than two epochs of our code dataset, which contains our entire Python dataset, training on 100B extra tokens of a Python-heavy data mix leads to significant gains on Python code generation benchmarks, between 4.3 and 8.3 percentage points in HumanEval pass@1 and between 1.2 and 6.4 percentage points in MBPP pass@1. These gains are smaller than for the first code training step, but still allow Code Llama - Python 7B to outperform even Code Llama 13B on MBPP and HumanEval. For the APPS benchmark, the prompts are much less direct and more complex compared to MBPP and HumanEval. Our
2308.12950#28
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
28
Model type Model val RefCOCO test-A test-B val RefCOCO+ test-A test-B val Generalist Models Specialist SOTAs GPV-2 OFA-L* Unified-IO VisionLLM-H Shikra-7B Shikra-13B - - 79.96 83.67 - 86.70 87.01 90.61 87.83 91.11 89.36 92.26 Qwen-VL-7B Qwen-VL-7B-Chat 88.55 92.27 90.56 93.19 G-DINO-L 92.64 94.33 UNINEXT-H 92.58 94.18 ONE-PEACE - - 76.39 68.29 76.00 - - 80.24 81.60 87.36 81.81 82.89 87.79 85.34 83.12 88.25 84.51 82.82 88.59 88.24 82.75 88.95 91.46 85.24 89.63 89.26 88.77 92.21 - - - - - - - 61.75 67.57 67.58 - - 72.12 82.27 82.19 74.41 82.64 83.16 77.21 85.58 85.48 76.79 85.96 86.32 75.92 86.13 87.02 79.79 88.73 89.37
2308.12966#28
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
29
(1) Within the lesson: In Course-ONE, student Emily conveyed her anxiety, stating, ”I’m really nervous about this lesson.” Mrs. Smith, attuned to this feedback, incorporated it into her reflective process and instructional planning. Drawing from a library of teaching techniques, she employed strategies such as heightened encouragement and gamified instructional methods. A parallel observation was made in Course-TWO and Course-THREE. Mrs. Smith prompted students to consider, “How do coefficients ’b’ and ’c’ affect the graph of a quadratic function?”, and reiterated the topic in her subsequent planning. Following the actions of encouragement, Mrs. Smith’s reflective records recognized her efforts in affirming and uplifting students.
2308.12503#29
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
29
5.6 ERROR ANALYSIS (RQ5) In this section, we present a comprehensive case analysis to elucidate the specific tasks that RADAGENT effectively addresses. By dissecting the nature of RADAGENT’s successes and failures, we shed light on its autonomous decision-making capabilities and limitations. Through this analysis, we provide deeper insights into the distinctive attributes of our proposed approach. We commence our analysis by categorizing the common reasons for failure encountered by various methods, employing an autonomous filtering technique. These reasons encompass: (1) Unavailable Tool: occurrences where a subset of the designated tools is inaccessible, e.g., an HTTP 404 or 500 error. (2) Tool Call Error: instances of tool call errors, including parameter format mismatches and missing mandatory parameter fields. (3) Hallucinated Tool: instances where the model employs tools not provided, i.e., invoking a non-existent tool. (4) Decision Failure: instances where the model fails to accomplish the instruction although none of the aforementioned problems occur. We report the incidence ratio of the aforementioned categories together with the fix ratio, i.e., the rate at which models successfully fix the errors that occur and still accomplish the instructions. Note that these failure categories may coexist in an instruction.
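Such an autonomous filter can be approximated with simple rules over recorded tool-call traces; the sketch below is illustrative only, and the log field names and error-string patterns are assumptions rather than the paper's actual implementation.

```python
def categorize_failures(steps, provided_tools):
    """Coarsely categorize why a failed decision sequence went wrong.

    steps: list of dicts such as {"tool": str, "response": str, "error": str}
           recorded for each tool call in the sequence (field names assumed).
    provided_tools: set of tool names actually exposed for this instruction.
    """
    categories = set()
    for step in steps:
        response = (step.get("response") or "").lower()
        error = (step.get("error") or "").lower()
        if step.get("tool") not in provided_tools:
            categories.add("hallucinated_tool")      # invoked a non-existent tool
        elif "404" in response or "500" in response:
            categories.add("unavailable_tool")       # designated API is inaccessible
        elif "missing" in error or "format" in error:
            categories.add("tool_call_error")        # malformed or incomplete arguments
    if not categories:
        categories.add("decision_failure")           # failed with no API-level issue
    return categories


# Toy usage on a made-up trace where one call hit a 404 and one tool was invented
trace = [
    {"tool": "get_weather", "response": "HTTP 404 Not Found", "error": ""},
    {"tool": "teleport_user", "response": "", "error": ""},
]
print(categorize_failures(trace, provided_tools={"get_weather", "get_forecast"}))
```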
2308.12519#29
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
29
action sampled from a different trajectory τ_{j≠i} with a different initial observation o_0 and goal g. M_can consists of an uncased BERT model (Devlin et al. 2019) with a probe layer and is trained end-to-end to correctly identify the positive action. The input to M_can is of the format ‘⟨Goal⟩{g} ⟨History⟩{h_{t−1}} ⟨NXT⟩{a_t}’, where ‘⟨∗⟩’ serves as special tokens. The output is the Can probability p^{can}_{a_t} := M_can(h_{t−1}, g, a_t). The model is trained across multiple batches until F1-score convergence on the validation set. Our approach differs from SayCan (Ahn et al. 2022), which trains multiple affordance functions (corresponding to different skills) through temporal-difference-based reinforcement learning to predict the likelihood of a particular skill succeeding (i.e., executing) in the current state. Here, we show two training I/O examples, one with a positive action and another with a negative action.
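A minimal sketch of such a Can scorer is given below, assuming a Hugging Face bert-base-uncased backbone with a single linear probe; the marker strings, class name, and hyperparameters are illustrative assumptions, and in practice the ⟨Goal⟩/⟨History⟩/⟨NXT⟩ markers would be registered as additional special tokens. Training would then minimize a binary cross-entropy loss over the positive action and the sampled negatives.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class CanModel(nn.Module):
    """Scores the feasibility of a candidate action given the goal and history."""
    def __init__(self, backbone: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(backbone)
        self.probe = nn.Linear(self.encoder.config.hidden_size, 1)  # single probe layer

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = hidden.last_hidden_state[:, 0]                 # [CLS] representation
        return torch.sigmoid(self.probe(cls)).squeeze(-1)    # Can probability in [0, 1]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = CanModel()

prompt = "<Goal> pick up the purple box. <History> pick up yellow key. <NXT> toggle yellow door."
batch = tokenizer(prompt, return_tensors="pt", truncation=True)
p_can = model(batch["input_ids"], batch["attention_mask"])   # feasibility score for this action
```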
2308.12682#29
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
29
13B on MBPP and HumanEval. For the APPS benchmark, the prompts are much less direct and more complex compared to MBPP and HumanEval. Our Code Llama - Python models show slightly decreased performance on the introductory and interview level problems, where understanding the prompt is often more challenging for a language model than implementing a solution. However, Code Llama - Python shows clear gains on the competition-level problems where solutions are more complex. While large language models have enough capacity to learn to generate text on various topics, we observe that model specialization is beneficial for models between 7B and 70B parameters and after two full epochs on the training data.
2308.12950#29
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12503
30
through reflection on Course-ONE, Mrs. Smith found that Emily exhibited anxiety when faced with mathematical challenges. This insight directly influenced Mrs. Smith’s reassuring statement to Emily in Course-TWO: ”I’m pleased to see you’ve overcome your apprehension towards mathematics.” The effect of the tree-structured persona model. To discern whether agents with varied personality traits exhibit distinguishable behaviors during interactions, we executed a comparative study depicted in Figure 5. One lesson involved personality allocation, detection, and maintenance, whereas the other lacked any defined agent personalities. In the absence of assigned traits, there was a notable uniformity in the expressions of five students, often resorting to statements like, ”I’m excited...”. In contrast, once unique personality traits were allocated, their expressions became more nuanced and aligned with their respective personas. For instance, the outgoing Ryan would suggest a “discussion with classmates”, while the industrious Ying Zheng would exude a “passion for learning”.
2308.12503#30
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
30
From Table 3, several noteworthy observations arise: (1) RADAGENT boasts the lowest incidence ratio of decision failure, highlighting its adeptness in decision making. (2) DFSDT and RADAGENT exhibit relatively higher incidence ratios of hallucinated tools, while RADAGENT surpasses

| Method | Hallucinated Tool (Ratio) | Hallucinated Tool (Fix Ratio) | Tool Call Error (Ratio) | Tool Call Error (Fix Ratio) | Unavailable Tool | Decision Failure |
| --- | --- | --- | --- | --- | --- | --- |
| CoT@3 | 14.2 | 25.4 | 41.2 | 14.8 | 2.0 | 52.5 |
| BFS | 18.8 | 25.5 | 50.8 | 31.1 | 2.6 | 48.6 |
| DFSDT | 31.5 | 38.9 | 62.5 | 41.0 | 3.0 | 26.4 |
| RADAGENT | 42.1 | 53.3 | 62.3 | 54.0 | 3.0 | 14.8 |

Table 3: Incidence ratio and Fix ratio of Common Failure reasons in decision-making process.
2308.12519#30
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
30
Example with a positive action:
Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨NXT⟩ toggle yellow door.
Output: 1.0

Example with a negative action:
Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨NXT⟩ pick up purple box.
Output: 0.0
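Pairs like the two above can be generated mechanically from the expert trajectories; the sketch below illustrates that pairing step under assumed field names (goal, init, actions) and plain-text markers, so it is a reading of the procedure rather than the authors' code.

```python
import random

def make_can_examples(trajectories, seed=0):
    """Build (prompt, label) pairs for the Can model from expert trajectories.

    trajectories: list of dicts like {"goal": str, "init": str, "actions": [str, ...]}
    For every step, the expert's next action is a positive example (label 1.0) and an
    action borrowed from a different trajectory is a negative example (label 0.0).
    """
    rng = random.Random(seed)
    examples = []
    for i, traj in enumerate(trajectories):
        others = [t for j, t in enumerate(trajectories) if j != i]
        for t, action in enumerate(traj["actions"]):
            history = " ".join(f"<Step {k + 1}> {a}." for k, a in enumerate(traj["actions"][:t]))
            prefix = f"<Goal> {traj['goal']} <Initial State> {traj['init']} {history}".strip()
            examples.append((f"{prefix} <NXT> {action}", 1.0))
            if others:  # negative: an action taken from some other trajectory
                neg = rng.choice(rng.choice(others)["actions"])
                examples.append((f"{prefix} <NXT> {neg}", 0.0))
    return examples
```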
2308.12682#30
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
30
Scaling of specialized models. We observe that scaling the number of parameters matters for models specialized for coding. With the same training process, our larger models outperform their smaller counterparts on almost every metric from HumanEval, MBPP and APPS (Table 2, 3). For instance, we gain 5.6 percentage points on MBPP pass@1 scaling Code Llama from 7B to 13B parameters, 8 more points when scaling to 34B and 7 when scaling to 70B. We can hypothesize that specializing larger models to code would lead to significant further gains on coding tasks. Moreover, the Chinchilla scaling laws (Hoffmann et al., 2022) indicate that larger models would benefit more from training on more tokens. # 3.1.2 Multilingual evaluation Next, we evaluate our models on a more diverse set of programming languages. For that, we use the MultiPL-E benchmark (Cassano et al., 2023). We report results for Python, C++, Java, PHP, TypeScript, C#, and Bash in Table 4.
2308.12950#30
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
30
# 4.4 Few-shot Learning on Vision-Language Tasks Our model also exhibits satisfactory in-context learning (a.k.a., few-shot learning) ability. As shown in Figure 4, Qwen-VL achieves better performance through in-context few-shot learning on OKVQA (Marino et al., 2019), Vizwiz (Gurari et al., 2018), TextVQA (Sidorov et al., 2020), and Flickr30k (Young et al., 2014) when compared with models with a similar number of parameters (Flamingo-9B (Alayrac et al., 2022), OpenFlamingo-9B, and IDEFICS-9B). Qwen-VL’s performance is even comparable with much larger models (Flamingo-80B and IDEFICS-80B). Note that we adopt naïve random sampling to construct the few-shot exemplars; sophisticated few-shot exemplar construction methods such as RICES (Yang et al., 2022b) are not used, although they would likely yield better results. Figure 4: Few-shot learning results of Qwen-VL in comparison with other models.
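The naïve random-sampling protocol mentioned above amounts to drawing k exemplars uniformly from a support set and prepending them to the query; the sketch below is illustrative, and the prompt template and field layout are assumptions rather than Qwen-VL's actual input format.

```python
import random

def build_fewshot_prompt(support_set, query, k=4, seed=0):
    """Naive random in-context exemplar construction for a VQA-style query.

    support_set: list of (image_ref, question, answer) triples
    query:       (image_ref, question) pair to be answered
    """
    rng = random.Random(seed)
    shots = rng.sample(support_set, k)   # plain random sampling, no RICES-style retrieval
    lines = [f"<img>{img}</img> Question: {q} Answer: {a}" for img, q, a in shots]
    q_img, q_question = query
    lines.append(f"<img>{q_img}</img> Question: {q_question} Answer:")
    return "\n".join(lines)

# Toy usage with placeholder image references
support = [(f"img_{i}.jpg", f"What is shown in image {i}?", f"object {i}") for i in range(10)]
print(build_fewshot_prompt(support, ("query.jpg", "What color is the car?")))
```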
2308.12966#30
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
31
Furthermore, on the right side of Figure 5, the statements made by the student Emily throughout the class are displayed. Judging from the records of her remarks, the Emily Agent has demonstrated a consistent persona, interacting with teachers and classmates based on the previously established persona. In detail, she remarked, “I’m considerably anxious about this quadratic equations segment.” at the start of the class. In the middle part of the course, she still showed her unfamiliarity and lack of confidence in the current knowledge in the interaction, expressing things like, ”I’m not well-versed with quadratic equations, yet I’m keen on learning and exploring various aspects...”, and “Being an average student, I might require a while to fully comprehend quadratic equations”. (2) Between lessons: Across different courses, the proposed cognitive structure is still valid. It plays a crucial role in refining Mrs. Smith’s teaching focus, deepening understanding and adapting teaching methods. For example, by imbuing agents with human-like qualities, they can adeptly distill insights from evolving scenarios and exhibit individualized responses. In addition, it can also make agents recalibrate actions based on accumulated knowledge and abilities. This significantly augments agents’ adaptive
2308.12503#31
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
31
Table 3: Incidence ratio and Fix ratio of Common Failure reasons in decision-making process. others in terms of the fix ratio, indicating its proficiency in rectifying this failure. (3) RADAGENT outperforms other methods significantly in fixing tool call errors, demonstrating the robustness of its self-judgment ability. (4) All methods have a similar incidence ratio of Tool Call Error, which shows that there still exist some inoperative APIs in ToolBench, influencing the decision-making process. (5) Lastly, we examine cases where all methods fail. While certain cases remain unsolvable due to the ambiguity of user-provided values (e.g., user ID, email address) or restrictions imposed by limited tool chain lengths, a subset of challenges underscores the necessity for advanced decision-making proficiencies. Taking a step further, we synthesize the case analysis results to elucidate the multifaceted competencies that a decision-making method necessitates.
2308.12519#31
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
31
6.2 Pay Model We model it as a regression problem to estimate action payoffs. Using expert trajectories E, we create a dataset with each batch as [g, h_{t−1}, a_t, r]_{1:B}. Given sparse rewards (i.e., r_T = 1), we use temporal discounting δ ∈ (0, 1) to assign discounted rewards to previous actions in the trajectory (the δ used for Pay model training is unrelated to the POMDP). This ensures that actions closer to the end receive higher rewards and vice versa. Specifically, r_{T−1} = δ, r_{T−2} = δ^2, and so on. We also sample negative actions from other paths (akin to the Can model) with a reward of 0. The model is trained to align the discounted reward of the action with the reward predicted by M_pay by minimizing the mean squared error (MSE) loss (1/B) Σ_t (r_t − M_pay(h_{t−1}, g, a_t))^2. The model uses an uncased BERT plus a regression layer whose output is bounded in [0, 1] via a sigmoid activation.
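A minimal sketch of such a Pay regressor and its discounted-reward targets is shown below, reusing the same BERT-plus-head pattern as the Can model; the class name, the choice of δ = 0.6, and the training-loop details are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class PayModel(nn.Module):
    """Regresses the long-term payoff of a candidate action, bounded in [0, 1]."""
    def __init__(self, backbone: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(backbone)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)  # regression layer

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return torch.sigmoid(self.head(cls)).squeeze(-1)           # sigmoid keeps output in [0, 1]

def discounted_targets(plan_length, delta=0.6):
    """Sparse terminal reward discounted backwards: r_T = 1, r_{T-1} = delta, r_{T-2} = delta^2, ..."""
    return [delta ** (plan_length - 1 - t) for t in range(plan_length)]

# Training would minimize nn.MSELoss() between PayModel outputs and these targets,
# together with 0-reward negative actions sampled from other trajectories.
print(discounted_targets(4))  # approximately [0.216, 0.36, 0.6, 1.0] for delta = 0.6
```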
2308.12682#31
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
31
| Model | Size | Pass@ | Introductory | Interview | Competition |
| --- | --- | --- | --- | --- | --- |
| GPT-Neo | 2.7B | 1 | 3.9% | 0.6% | 0.0% |
| GPT-Neo | 2.7B | 5 | 5.5% | 0.8% | 0.0% |
| Codex | 12B | 1 | 4.1% | 0.1% | 0.0% |
| Codex | 12B | 5 | 9.7% | 0.5% | 0.1% |
| Codex | 12B | 1000 | 25.0% | 3.7% | 3.2% |
| AlphaCode | 1B | 1000 | 17.7% | 5.2% | 7.1% |
| AlphaCode (Filtered 1000) | 1B | 5 | 14.4% | 5.6% | 4.6% |
| AlphaCode (Filtered 10000) | 1B | 5 | 18.2% | 8.2% | 6.7% |
| AlphaCode (Filtered 50000) | 1B | 5 | 20.4% | 9.7% | 7.8% |
| Code Llama | 7B | 5 | 10.8% | 2.0% | 0.8% |
| Code Llama | 7B | 10 | 15.6% | 3.1% | 1.4% |
| Code Llama | 7B | 100 | 33.5% | 9.4% | 7.1% |
| Code Llama | 13B | 5 | 23.7% | 5.6% | 2.1% |
| Code Llama | 13B | 10 | 30.2% | 8.1% | 3.4% |
| Code Llama | 13B | 100 | 49.0% | 18.4% | 12.0% |
| Code Llama | 34B | 5 | 32.8% | 8.8% | 2.9% |
| Code Llama | 34B | 10 | 39.0% | 12.2% | 4.7% |
| Code Llama | 34B | 100 | 56.3% | 24.3% | 15.4% |

7B 5 10 100
2308.12950#31
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
31
Figure 4: Few-shot learning results of Qwen-VL in comparison with other models.

# Table 7: Results on Instruction-following benchmarks.

| Model | TouchStone En | TouchStone Cn | SEED-Bench All | SEED-Bench Img | SEED-Bench Video | MME Perception | MME Cognition |
| --- | --- | --- | --- | --- | --- | --- | --- |
| VisualGLM | - | 247.1 | - | - | - | 705.31 | 181.79 |
| PandaGPT | 488.5 | - | - | - | - | 642.59 | 228.57 |
| MiniGPT4 | 531.7 | - | 42.8 | 47.4 | 29.9 | 581.67 | 144.29 |
| InstructBLIP | 552.4 | - | 53.4 | 58.8 | 38.1 | 1212.82 | 291.79 |
| LLaMA-AdapterV2 | 590.1 | - | 32.7 | 35.2 | 25.8 | 972.67 | 248.93 |
| LLaVA | 602.7 | - | 33.5 | 37.0 | 23.8 | 502.82 | 214.64 |
| mPLUG-Owl | 605.4 | - | 34.0 | 37.9 | 23.0 | 967.34 | 276.07 |
| Qwen-VL | - | - | 56.3 | 62.3 | 39.1 | - | - |
| Qwen-VL-Chat | 645.2 | 401.2 | 58.2 | 65.4 | 37.8 | 1487.58 | 360.71 |

# Instruction Following in Real-world User Behavior
2308.12966#31
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
32
Figure 6: The influence of personal traits on agent expression. (Figure content: the teacher's question "Can anyone tell me the general form of a quadratic function?"; bar charts of the number of hands raised under the Willingness and Random selection schemes for John, Emily, Ryan, Samantha, and Ying Zheng; and the Role-Set: John (Athletic Star): Extroverted, Sociable, Poor concentration; Emily (Art Prodigy): Artistic, Expressive, Occasionally motivated; Ryan (Social Butterfly): Outgoing, Charismatic, Occasionally motivated; Samantha (Contemplator): Introverted, Independent, Quick learner; Ying Zheng (Academic Enthusiast): Diligent, Focused, Quick learner.)

capabilities in multifaceted environments. Concurrently, the tree-structured character model introduced in this study effectively and efficiently captures and retains the personalized data of agents.

Quantitative Analysis of Interaction Logic

Based on the "classroom teaching" scenario restored by CGMI, this paper compares the rationality of different interaction logics under the same question.
2308.12503#32
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
32
Taking a step further, we synthesize the case analysis results to elucidate the multifaceted competencies that a decision-making method requires.

• Exception Handling. During the decision-making process, exceptions may occur (e.g., a tool is unavailable or a tool call fails), so that a decision step does not meet expectations. Under these circumstances, decision-making methods should be able to handle the exception and navigate to a new decision. CoT is susceptible to these scenarios, which can trap the model in a loop of repeated erroneous decisions. In contrast, tree-based methods excel in mitigating such occurrences because they can explore alternative decisions to avoid the exception.

• Diversity Exploration. A task can usually be approached from different exploration directions. For example, in tool-use scenarios, several tools may have analogous functionalities while only one of them is best suited to the task. DFS and DFSDT, constrained by their relatively narrow search width, might miss the optimal solution. Although BFS can make several decisions in one step, it fails to explore promising decisions because it cannot judge the value of each decision well. In contrast, RADAGENT assigns lower scores to less promising decision steps and thus tends to explore novel avenues. This exemplifies a scenario demanding diversity in exploration.
2308.12519#32
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
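The RADAGENT chunk above attributes its diversity exploration to Elo-style utility scores obtained from pairwise comparisons of decision steps. The excerpt does not include the update rule, so the following is a minimal, generic Elo sketch: the function names, the 400-point scale, and the K-factor of 32 are illustrative assumptions rather than the authors' implementation.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A over B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def elo_update(r_a: float, r_b: float, outcome_a: float, k: float = 32.0) -> tuple:
    """Update two ratings after one pairwise comparison.

    outcome_a is 1.0 if decision A is judged better, 0.0 if worse, 0.5 for a tie.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (outcome_a - e_a)
    new_b = r_b + k * ((1.0 - outcome_a) - (1.0 - e_a))
    return new_a, new_b


# Toy usage: two candidate decision steps start equal; a judge (e.g. an LLM
# comparing the two partial solutions, not shown here) prefers candidate A.
a, b = elo_update(1000.0, 1000.0, outcome_a=1.0)
print(a, b)  # a rises above 1000, b falls symmetrically
```

Under this sketch, decision steps with higher accumulated ratings would be preferred when choosing which branch of the search to expand.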
2308.12682
32
δ for the Pay model training is unrelated to the POMDP.

Environments: Ravens (tower of hanoi), Ravens (put blocks in bowls), BabyAI (pickup), VirtualHome; Say model: V = Vicuna, F = Flan-T5.

| Setup | Hanoi (V) | Hanoi (F) | Bowls (V) | Bowls (F) | Pickup (V) | Pickup (F) | VirtualHome (V) | VirtualHome (F) |
|---|---|---|---|---|---|---|---|---|
| Greedy-Token | 12 | 34 | 16 | 63 | 48 | 0 | 0 | 0 |
| Greedy-Action: Say | 24 | 34 | 36 | 65 | 50 | 0 | 14 | 0 |
| Greedy-Action: SayCan | 55 | 46 | 40 | 71 | 53 | 26 | 23 | 6 |
| Greedy-Action: SayCanPay | 58 | 47 | 48 | 74 | 54 | 28 | 29 | 15 |
| Beam-Action: Say | 20 | 38 | 38 | 67 | 56 | 1 | 20 | 4 |
| Beam-Action: SayCan | 47 | 54 | 42 | 74 | 56 | 30 | 26 | 19 |
| Beam-Action: SayCanPay | 52 | 56 | 56 | 74 | 62 | 34 | 30 | 26 |

Table 4: Table shows the cost-effectiveness (i.e. #plans out of 100 that reached the goal within limited steps and also had the same plan length as the expert plan) on the test split across different environments using Vicuna, Flan-T5 models. It can be observed that the best decoding strategy is Beam-Action and the best decoding score is SayCanPay.
2308.12682#32
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
32
39.0% 56.3% 8.8% 12.2% 24.3% 2.9% 4.7% 15.4%

Columns give pass@5 / pass@10 / pass@100 on the three APPS difficulty levels (introductory, interview, competition).

| Model | Size | Intro p@5 | Intro p@10 | Intro p@100 | Interview p@5 | Interview p@10 | Interview p@100 | Comp. p@5 | Comp. p@10 | Comp. p@100 |
|---|---|---|---|---|---|---|---|---|---|---|
| | 7B | 12.7% | 18.5% | 38.3% | 4.2% | 6.3% | 14.9% | 1.3% | 2.2% | 9.1% |
| Code Llama - Python | 13B | 26.3% | 32.8% | 51.6% | 7.1% | 10.0% | 21.5% | 2.8% | 4.3% | 14.6% |
| | 34B | 28.9% | 35.9% | 54.9% | 7.8% | 11.1% | 23.9% | 3.5% | 5.5% | 16.8% |
| | 7B | 12.9% | 17.9% | 35.4% | 2.1% | 3.1% | 9.4% | 1.1% | 2.0% | 8.5% |
| Code Llama - Instruct | 13B | 24.0% | 30.3% | 48.7% | 6.9% | 9.6% | 19.6% | 2.4% | 3.8% | 13.1% |
| | 34B | 31.6% | 37.8% | 55.7% | 7.9% | 11.1% | 22.8% | 3.2% | 5.1% | 16.4% |
2308.12950#32
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
32
# Instruction Following in Real-world User Behavior

In addition to previous conventional vision-language evaluations, to evaluate our Qwen-VL-Chat model's capacity under real-world user behavior, we further conduct evaluations on TouchStone (Bai et al., 2023), SEED-Bench (Li et al., 2023b), and MME (Fu et al., 2023). TouchStone is an open-ended vision-language instruction-following benchmark. We compare the instruction-following ability of Qwen-VL-Chat with other instruction-tuned LVLMs in both English and Chinese on the TouchStone benchmark. SEED-Bench consists of 19K multiple-choice questions with accurate human annotations for evaluating Multimodal LLMs, covering 12 evaluation dimensions including both spatial and temporal understanding. MME measures both perception and cognition abilities on a total of 14 subtasks.
2308.12966#32
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
33
Analysis of willingness to speak. As shown in Figure 6, when the teacher posed the question to all students: "Can anyone tell me the general form of a quadratic function?", the outcomes differed between the answer-willingness judgment agent and the random selection method. The former showed the intensity of the students' willingness to answer: John: 3, Emily: 5, Ryan: 4, Samantha: 2, Ying Zheng: 4. Notably, the students' willingness strength is highly consistent with their character traits. For instance, the expressive Emily exhibited high willingness to answer, while the introverted Samantha showed less. The random selection method, however, produced different results.
2308.12503#33
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
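The CGMI chunk above contrasts an answer-willingness judgment agent with uniformly random selection of which student answers. The excerpt only reports the willingness scores, not how a speaker is chosen from them, so the sketch below assumes simple willingness-proportional sampling; the scores are the ones quoted in the chunk, and the function names are hypothetical.

```python
import random

# Willingness-to-answer scores quoted in the chunk above (higher = more eager).
willingness = {"John": 3, "Emily": 5, "Ryan": 4, "Samantha": 2, "Ying Zheng": 4}


def pick_by_willingness(scores: dict) -> str:
    """Sample a student with probability proportional to willingness."""
    names, weights = zip(*scores.items())
    return random.choices(names, weights=weights, k=1)[0]


def pick_at_random(scores: dict) -> str:
    """Baseline: ignore willingness entirely, as in the random-selection method."""
    return random.choice(list(scores))


print(pick_by_willingness(willingness))  # Emily is most likely, Samantha least
print(pick_at_random(willingness))       # every student equally likely
```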
2308.12519
33
• Decision Reflection. Complex tasks should be divided into sequential decisions that the model accomplishes progressively to finish the task. This requires the model to verify the completeness of each decision step and to reflect on it so that subsequent decisions move in a successful direction. DFSDT cannot evaluate intermediate decisions, so it cannot reflect on previous decisions to select an effective next one. RADAGENT, benefiting from its self-judgment mechanism, assigns higher scores to decision steps aligned with comprehensive solution strategies. This innate ability to recognize the completeness of previous decisions and guide the next decision accordingly is a hallmark of an effective decision-making method.

# 6 RELATED WORK
2308.12519#33
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
33
Environments: Ravens (tower of hanoi), Ravens (put blocks in bowls), BabyAI (pickup), VirtualHome; Say model: V = Vicuna, F = Flan-T5.

| Setup | Hanoi (V) | Hanoi (F) | Bowls (V) | Bowls (F) | Pickup (V) | Pickup (F) | VirtualHome (V) | VirtualHome (F) |
|---|---|---|---|---|---|---|---|---|
| Greedy-Token | 32 | 24 | 8 | 94 | 0 | 0 | 0/20 | 0/20 |
| Greedy-Action: Say | 30 | 22 | 30 | 94 | 1 | 1 | 2/20 | 0/20 |
| Greedy-Action: SayCan | 18 | 18 | 10 | 26 | 4 | 28 | 3/20 | 0/20 |
| Greedy-Action: SayCanPay | 18 | 16 | 6 | 18 | 12 | 28 | 3/20 | 3/20 |
| Beam-Action: Say | 27 | 26 | 30 | 96 | 9 | 1 | 5/20 | 1/20 |
| Beam-Action: SayCan | 34 | 26 | 10 | 22 | 12 | 15 | 5/20 | 3/20 |
| Beam-Action: SayCanPay | 34 | 26 | 6 | 24 | 10 | 28 | 5/20 | 5/20 |

Table 5: Table shows the generalization results (i.e. the number of plans out of 100 that reached the goal) on the test-generalize split across different environments using Vicuna and Flan-T5 models. It can be observed that Beam-Action outperforms other decoding strategies.

input format is the same as the Can model. The output is the estimated payoff, $f_{\mathrm{heur}}(h_t, g) = M_{\mathrm{pay}}(g, h_{t-1}, a_t)$.
2308.12682#33
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
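The SayCanPay chunk above references the Pay heuristic $f_{\mathrm{heur}}(h_t, g) = M_{\mathrm{pay}}(g, h_{t-1}, a_t)$ and compares Greedy-Action and Beam-Action decoding with Say, SayCan, and SayCanPay scores. As a rough illustration only, the sketch below scores candidate actions with a multiplicative Say * Can * Pay combination; the combination rule, the function interfaces, and the toy numbers are assumptions, not the paper's implementation.

```python
from typing import Callable, Sequence


def greedy_action_step(
    candidates: Sequence[str],
    say: Callable[[str], float],   # LM likelihood of the action given goal/history
    can: Callable[[str], float],   # estimated feasibility (affordance) of the action
    pay: Callable[[str], float],   # estimated long-term payoff of the action
) -> str:
    """Pick the candidate whose combined Say * Can * Pay score is highest."""
    return max(candidates, key=lambda a: say(a) * can(a) * pay(a))


# Toy usage with hard-coded scores standing in for the three models.
scores = {
    "pick up yellow key": (0.6, 0.9, 0.8),
    "toggle yellow door": (0.3, 0.1, 0.9),  # low feasibility before holding the key
}
best = greedy_action_step(
    list(scores),
    say=lambda a: scores[a][0],
    can=lambda a: scores[a][1],
    pay=lambda a: scores[a][2],
)
print(best)  # "pick up yellow key"
```

Beam-Action decoding, as compared in the table above, would instead keep the top-k partial plans at each step rather than committing to a single argmax action.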
2308.12950
33
Table 3: Code Llama pass@ scores on APPS. We list the two-shot pass@5, pass@10, and pass@100 scores of Code Llama on APPS. For our models, we use nucleus sampling with p=0.95 and a temperature of 0.6. Code Llama is not fine-tuned on the training set of APPS and all results are calculated with raw predictions without filtering by the test cases from the prompt. Fine-tuned GPT-Neo numbers are reported by Hendrycks et al. (2021), one-shot Codex results by Chen et al. (2021), and fine-tuned AlphaCode numbers by Li et al. (2022). We observe a similar improvement from Llama 2 to Code Llama in the multilingual setting as in the evaluation on Python (Section 3.1.1). The Code Llama models clearly outperform Llama 2 models of the same size on code generation in any language, and Code Llama 7B even outperforms Llama 2 70B. Compared
2308.12950#33
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
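The Code Llama chunk above reports pass@5/10/100 on APPS computed from raw samples. The paper's evaluation harness is not reproduced in this excerpt, but the unbiased pass@k estimator from Chen et al. (2021), which the chunk cites for the Codex baseline, can be sketched as follows; n is the number of generations sampled per problem, c the number that pass the tests, and the function name and toy numbers are illustrative.

```python
import math


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        # Fewer incorrect samples than the budget k, so any k-sample draw
        # must contain at least one correct solution.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)


# Toy usage: 200 samples for one problem, 30 of which pass the unit tests.
for k in (5, 10, 100):
    print(f"pass@{k} = {pass_at_k(200, 30, k):.3f}")
```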
2308.12966
33
The results on the three benchmarks are shown in Table 7. Qwen-VL-Chat achieves clear advantages over other LVLMs on all three datasets, indicating that our model performs better in understanding and answering diverse user instructions. In SEED-Bench, we have found that our model's visual capabilities can be effectively transferred to video tasks by simply sampling four frames. In terms of the overall scores presented in TouchStone, our model demonstrates a clear advantage compared to other LVLMs, especially in terms of its Chinese capabilities. In terms of the broad categories of abilities, our model exhibits a more pronounced advantage in understanding and recognition, particularly in areas such as text recognition and chart analysis. For more detailed information, please refer to the TouchStone dataset.

# 5 Related Work
2308.12966#33
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
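The Qwen-VL chunk above notes that SEED-Bench video questions are handled by simply sampling four frames. The excerpt does not say how the frames are picked, so the sketch below assumes evenly spaced sampling of frame indices; the function name and the segment-midpoint strategy are assumptions.

```python
def sample_frame_indices(num_frames: int, num_samples: int = 4) -> list:
    """Pick `num_samples` evenly spaced frame indices from a `num_frames`-frame video."""
    if num_frames <= num_samples:
        return list(range(num_frames))
    segment = num_frames / num_samples
    # Take the middle frame of each of the `num_samples` equal segments.
    return [int(segment * (i + 0.5)) for i in range(num_samples)]


print(sample_frame_indices(120))  # [15, 45, 75, 105]
```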
2308.12503
34
The discrepancy between the two methods is not coincidental. We recorded the number of students recommended by the two different methods to answer when the teacher posed questions to the entire class during a complete lesson. From Figure 6, it can be seen that the answer willingness judgment agent, considering factors like students' personalities, classroom dynamics, and their grasp of the subject, recommended John 4 times, Emily 9 times, Ryan 6 times, Samantha 1 time, and Ying Zheng 8 times. However, with random selection, the results were John 7 times, Emily 3 times, Ryan 4 times, Samantha 6 times, and Ying Zheng 8 times. The expressive Emily only volunteered to answer 3 times, significantly undermining the rationality of the interaction process between the teacher and students in the virtual scenario.

The effectiveness of questioning. In addition to posing questions to all students, teachers also selectively direct questions to specific students. This selection is influenced by

Figure 7: The influence of personal traits on agent expression. (Figure content: Teaching Plan: Based on the students' personalities: Ying Zheng (Academic Enthusiast): Challenge Ying Zheng with advanced problem-solving tasks and encourage him to explore additional methods for solving. Class process: Mrs. Smith: Next, we will learn about the different methods of solving quadratic equations... Ying Zheng! Exploring different methods of solving... Ying Zheng: By trying out various approaches, we can...)
2308.12503#34
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
34
# 6 RELATED WORK

Decision Making Methods for LLM-based Agents Efficient and effective decision-making ability is fundamental for LLM-based agents to attain specific objectives (Yao et al., 2022; 2023; Hao et al., 2023a; Besta et al., 2023; Sel et al., 2023). Although LLMs are pre-trained on a large-scale corpus that equips them with substantial common sense and knowledge, the complexity and diversity of realistic tasks mean that LLM-based agents still struggle to make the multi-step decisions these tasks require. Recently, as Chain-of-Thought (Wei et al., 2023) demonstrated its capability to decompose complex questions into sequential intermediate steps, several LLM-based decision-making methods have been proposed to enhance the decision-making ability of agents. ReACT (Yao et al., 2022) develops a variant of CoT to leverage the reasoning ability of LLMs in decision-making scenarios. Reflexion (Shinn et al., 2023) further offers a remedial approach that makes LLMs reflect on their failures and summarize the reasons in the decision process, and
2308.12519#34
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
34
Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨Step 2⟩ toggle yellow door. ⟨Step 3⟩ drop key in void. ⟨Step 4⟩ pick up blue box. ⟨NXT⟩ done picking up.
Output: 1.0

Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨Step 2⟩ toggle yellow door. ⟨Step 3⟩ drop key in void. ⟨NXT⟩ pick up blue box.
Output: 0.6

Input: ⟨Goal⟩ pick up the purple box. ⟨Initial State⟩ Room 1 has yellow key, agent. Room 2 has purple box. The door connecting Room 1 and Room 2 is locked. ⟨Step 1⟩ pick up yellow key. ⟨Step 2⟩ toggle yellow door.
2308.12682#34
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
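The SayCanPay chunk above shows training examples for the Pay model written in a tagged ⟨Goal⟩ / ⟨Initial State⟩ / ⟨Step i⟩ / ⟨NXT⟩ format with a scalar payoff as the target. A small helper that assembles an input in that format might look like the sketch below; the function name and the single-space joining are assumptions, and only the tag layout is taken from the examples.

```python
def build_pay_prompt(goal: str, initial_state: str, history: list, next_action: str) -> str:
    """Assemble a Pay-model input in the tagged format shown in the chunk above."""
    parts = [f"⟨Goal⟩ {goal}", f"⟨Initial State⟩ {initial_state}"]
    parts += [f"⟨Step {i}⟩ {step}" for i, step in enumerate(history, start=1)]
    parts.append(f"⟨NXT⟩ {next_action}")
    return " ".join(parts)


# Rebuilds the second example from the chunk (target payoff 0.6).
prompt = build_pay_prompt(
    goal="pick up the purple box.",
    initial_state=("Room 1 has yellow key, agent. Room 2 has purple box. "
                   "The door connecting Room 1 and Room 2 is locked."),
    history=["pick up yellow key.", "toggle yellow door.", "drop key in void."],
    next_action="pick up blue box.",
)
print(prompt)
```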
2308.12950
34
Multi-lingual Human-Eval

| Model | Size | C++ | Java | PHP | TS | C# | Bash | Average |
|---|---|---|---|---|---|---|---|---|
| CodeGen-Multi | 16B | 21.0% | 22.2% | 8.4% | 20.1% | 8.2% | 0.6% | 13.4% |
| CodeGeeX | 13B | 16.9% | 19.1% | 13.5% | 10.1% | 8.5% | 2.8% | 11.8% |
| code-cushman-001 | 12B | 30.6% | 31.9% | 28.9% | 31.3% | 22.1% | 11.7% | 26.1% |
| StarCoder Base | 15.5B | 30.6% | 28.5% | 26.8% | 32.2% | 20.6% | 11.0% | 25.0% |
| StarCoder Python | 15.5B | 31.6% | 30.2% | 26.1% | 32.3% | 21.0% | 10.5% | 25.3% |
| Llama-v2 | 7B | | | | | | | 8.3% |
| Llama-v2 | 13B | 13.7% | 15.8% | 13.1% | 13.2% | 9.5% | 3.2% | 11.4% |
| Llama-v2 | 34B | 23.6% | 22.2% | 19.9% | 21.4% | 17.1% | 3.8% | 18.0% |
| Llama-v2 | 70B | 30.4% | 31.7% | 34.2% | 15.1% | 25.9% | 8.9% | 24.4% |
2308.12950#34
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
34
In recent years, researchers have shown considerable interest in vision-language learning (Su et al., 2019; Chen et al., 2020; Li et al., 2020; Zhang et al., 2021; Li et al., 2021b; Lin et al., 2021; Kim et al., 2021; Dou et al., 2022; Zeng et al., 2021; Li et al., 2021a, 2022), especially in the development of multi-task generalist models (Hu and Singh, 2021; Singh et al., 2022; Zhu et al., 2022; Yu et al., 2022; Wang et al., 2022a; Lu et al., 2022a; Bai et al., 2022). CoCa (Yu et al., 2022) proposes an encoder-decoder structure to address image-text retrieval and vision-language generation tasks simultaneously. OFA (Wang et al., 2022a) transforms specific vision-language tasks into sequence-to-sequence tasks using customized task instructions. Unified I/O (Lu et al., 2022a) further introduces more tasks like segmentation and depth estimation into a unified framework. Another category of research focuses on building vision-language representation models (Radford et al., 2021; Jia et
2308.12966#34
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
35
Figure 7: The influence of personal traits on agent expression.

two aspects: (1) some teaching plans target particular students, and (2) it is influenced by the teacher's analysis of the student's status and classroom dynamics during the teaching process. As shown in Figure 7, the teaching plan specifies that the teacher can encourage Ying Zheng to explore different solutions. As observed in the subsequent teaching process, the teacher aptly integrated this instructional arrangement during the lecture and specifically asked Ying Zheng to explore, leading to the next phase of instruction.

In summary, the flexible interaction logic setting ensures that the interaction process among multiple agents is no longer a random choice that ignores the actual situation and role settings, nor a process in which every role needs to be expressed. This introduces more possibilities for virtual scenarios.

# Conclusion
2308.12503#35
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
35
then correct their mistake in the second attempt. Based on these methods, some tree-based decision-making methods are proposed to adapt the decision-making ability of LLMs to specific tasks. Tree-of-Thought (Yao et al., 2023) proposes BFS and DFS decision-making algorithms in the Game of 24, Creative Writing and Mini Crosswords tasks. RAP (Hao et al., 2023a) applies the Monte Carlo Tree Search algorithm to find a good solution in Blocksworld, Math Reasoning, and Logical Reasoning tasks. DFSDT (Qin et al., 2023c), following a similar tree search algorithm, proposes an efficient version of DFS to make decisions. However, the aforementioned methods need a task-specialized external performance measure to guide the decision-making process, which limits their scope of application. In this paper, we propose RADAGENT, which internalizes the utility judgment ability with the Elo rating system to achieve rationality for agents to provide optimal solutions.
2308.12519#35
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12950
35
3.8% 18.0%

| Model | Size | C++ | Java | PHP | TS | C# | Bash | Average |
|---|---|---|---|---|---|---|---|---|
| Llama-v2 | 70B | 30.4% | 31.7% | 34.2% | 15.1% | 25.9% | 8.9% | 24.4% |
| Llama-v2 | 7B | 6.8% | 10.8% | 9.9% | 12.6% | 6.3% | 3.2% | |
| Code Llama | 7B | 28.6% | 34.2% | 24.2% | 33.3% | 25.3% | 12.0% | 26.3% |
| Code Llama | 13B | 39.1% | 38.0% | 34.2% | 29.6% | 27.3% | 15.2% | 30.6% |
| Code Llama | 34B | 47.8% | 45.6% | 44.1% | 33.3% | 30.4% | 17.1% | 36.4% |
| Code Llama | 70B | 52.8% | 51.9% | 50.9% | 49.1% | 38.0% | 29.1% | 45.3% |
| Code Llama - Instruct | 7B | 31.1% | 30.4% | 28.6% | 32.7% | 21.6% | 10.1% | 25.8% |
| Code Llama - Instruct | 13B | 42.2% | 40.5% | 32.3% | 39.0% | 24.0% | 13.9% | 32.0% |
| Code Llama - Instruct | 34B | 45.3% | 43.7% | 36.6% | 40.3% | 31.0% | 19.6% | 36.1% |
| Code Llama - Instruct | 70B | 53.4% | 58.2% | 58.4% | | | | |
2308.12950#35
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
35
segmentation and depth estimation into a unified framework. Another category of research focuses on building vision-language representation models (Radford et al., 2021; Jia et al., 2021; Zhai et al., 2022; Yuan et al., 2021; Yang et al., 2022a). CLIP (Radford et al., 2021) leverages contrastive learning and large amounts of data to align images and language in a semantic space, resulting in strong generalization capabilities across a wide range of downstream tasks. BEIT-3 (Wang et al., 2022b) employs a mixture-of-experts (MOE) structure and unified masked token prediction objective, achieving state-of-the-art results on various visual-language tasks. In addition to vision-language learning, ImageBind (Girdhar et al., 2023) and ONE-PEACE (Wang et al., 2023) align more modalities such as speech into a unified semantic space, thus creating more general representation models.
2308.12966#35
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
36
# Conclusion

This paper introduces a multi-agent interaction framework (CGMI) that supports personalized configurations, enabling multiple agents to engage in anthropomorphic interactions and collaborations. It can also simulate domain-specific social phenomena. We designed a cognitive architecture equipped with a domain skill library. It allows agents to combine domain knowledge for reflection and planning, and to condense the working memory into declarative and procedural memories. With the assistance of general agents, the authenticity of scenarios can be further enhanced. Moreover, we employed a virtual "classroom teaching" scenario to simulate the teaching process between teachers and students, and conducted a comparative analysis of their interaction content and logic, verifying the effectiveness of CGMI. In the future, we hope that the social scenarios simulated by multiple agents will not only provide users with valuable social experimental data, aiding the development of large models, but also support industrial applications, such as assisting teaching and gamified teaching.
2308.12503#36
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
36
Tool Learning Recent investigations have highlighted the growing proficiency of LLM-based agents in mastering tools and carrying out decision-making processes in complex contexts (Qin et al., 2023b; Vemprala et al., 2023; Nakano et al., 2021; Qin et al., 2023a; Shen et al., 2023; Wu et al., 2023; Schick et al., 2023; Hao et al., 2023b; Qian et al., 2023; Song et al., 2023; Qin et al., 2023c). Incorporating external tools into the operational framework of LLM-based agents gives them immediate access to up-to-date factual knowledge (Yang et al., 2023), imbues them with versatile multimodal capabilities (Gupta & Kembhavi, 2023), and equips them with specialized proficiencies tailored to vertical domains (Jin et al., 2023). However, real-world tasks often require the use of multiple tools, so LLM-based agents must engage in multi-step decision-making to select tools and determine their sequencing. Consequently, decision-making ability in tool learning scenarios is essential for effectively tackling practical applications. # 7 CONCLUSION
2308.12519#36
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12503
37
References Aher, G. V.; Arriaga, R. I.; and Kalai, A. T. 2023. Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies. In Krause, A.; Brunskill, E.; Cho, K.; Engelhardt, B.; Sabato, S.; and Scarlett, J., eds., Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, 337–371. PMLR. Alexandru, A.; Tirziu, E.; Tudora, E.; and Bica, O. 2015. Enhanced education by using intelligent agents in multi-agent adaptive e-learning systems. Studies in Informatics and Control, 24(1): 13–22. Anderson, J. R. 1983. A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22(3): 261–295. Argyle, L. P.; Busby, E. C.; Fulda, N.; Gubler, J. R.; Rytting, C.; and Wingate, D. 2023. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3): 337–351. Bran, A. M.; Cox, S.;
2308.12503#37
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
37
# 7 CONCLUSION In this work, we have introduced a novel approach, RADAGENT, which internalizes the utility judgment ability of agents to achieve rationality across a diverse range of real-world tasks. The introduced Elo-based Utility Construction enables agents to learn a numeric utility for each decision step and to guide the decision-making process. Extensive experiments on the ToolBench dataset have confirmed the effectiveness of RADAGENT, which outperforms baseline methods by achieving notable Pass Rate improvements and producing higher-quality solutions. Moreover, the reduction in LLM API calls showcases the efficiency gains of our approach. By empowering agents with rationality, our work paves the way for their broader utilization in real-world scenarios, alleviating the reliance on external performance measures. # REFERENCES AgentGPT. Python. https://github.com/reworkd/AgentGPT, 2023. K. Arrow. Rational choice functions and orderings. Economica, 26:121, 1959. Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023.
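The Elo-based Utility Construction referenced above scores decision steps through pairwise comparisons. Below is a minimal sketch of a standard Elo update (Elo, 1967) applied in that spirit; the K-factor, the initial rating of 1000, and the `compare` judge callback are illustrative assumptions rather than the paper's exact procedure.

```python
import itertools
import random


def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def elo_update(r_a: float, r_b: float, outcome_a: float, k: float = 32.0):
    """Update both ratings after one pairwise comparison.
    outcome_a is 1.0 if A is judged better, 0.0 if worse, 0.5 for a tie."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (outcome_a - e_a), r_b + k * ((1.0 - outcome_a) - (1.0 - e_a))


def rate_candidates(candidates, compare, rounds: int = 3, init: float = 1000.0):
    """Assign Elo scores to candidate decision steps via repeated pairwise comparisons.
    `compare(a, b)` is a hypothetical judge (e.g., an LLM call) returning 1.0 / 0.5 / 0.0 for a."""
    ratings = {c: init for c in candidates}
    for _ in range(rounds):
        pairs = list(itertools.combinations(candidates, 2))
        random.shuffle(pairs)
        for a, b in pairs:
            ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], compare(a, b))
    return ratings
```

Once such scores exist, selecting the candidate with the highest rating is one natural way to steer the search toward higher-utility decisions.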
2308.12519#37
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
37
7.1 Say Model The Say model does not undergo any fine-tuning and is used only for inference. We experimented with two types of transformer architectures. (i) Decoder type: the 13b-parameter Vicuna model (Chiang et al. 2023) trained by fine-tuning LLaMA (Touvron et al. 2023). (ii) Encoder-decoder type: Flan-T5-11b (Chung et al. 2022), the instruction fine-tuned version of the T5 transformer (Raffel et al. 2020). Existing works have demonstrated the planning abilities of both the decoder type (Pallagani et al. 2022) and the encoder-decoder type architectures (Valmeekam et al. 2023, 2022). Since the generated plan is in free-form language and may contain words the environment does not recognize or incorrect syntax, it cannot be directly translated into actionable steps in the environment. Following Huang et al. (2022a), we use an exhaustive list of admissible actions (feasible and otherwise) and, at the end of each action step, map the generated action to the closest admissible action using minimum edit distance. Interleaving action generation and mapping
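To illustrate the mapping step described above, here is a minimal sketch that maps a free-form generated action to its closest admissible action by Levenshtein (minimum edit) distance; the admissible-action list and the generated string are made-up examples rather than ones from the paper.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def map_to_admissible(generated: str, admissible: list[str]) -> str:
    """Return the admissible action closest to the free-form generated action."""
    return min(admissible, key=lambda act: edit_distance(generated.lower(), act.lower()))


# Hypothetical usage with a toy action list.
admissible_actions = ["pick up the red key", "drop the red key", "open the door", "go forward"]
print(map_to_admissible("pickup red key", admissible_actions))  # -> "pick up the red key"
```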
2308.12682#37
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
37
Table 4: Multi-Lingual HE Pass@1 scores. Pass@1 scores for different programming languages using greedy decoding. These scores are computed zero-shot. Results for other models are from Li et al. (2023). Compared to other publicly available models, ours are especially strong in the multilingual setting. Code Llama 7B outperforms larger models such as CodeGen-Multi or StarCoder, and is on par with Codex (code-cushman-001, Chen et al., 2021). The performance of Code Llama - Python is comparable to that of Code Llama. Code Llama - Python 34B performs slightly worse than Code Llama, but Code Llama - Python 7B and 13B perform slightly better than their counterparts without Python fine-tuning. More detailed results can be found in Table 11, Appendix C. To better understand the influence of multilingual pre-training, we measure the correlations between each of the evaluated languages and report the results separately for different model sizes in Figure 3. We observe high correlation between model performance on C++, C#, Java, and PHP. Interestingly, we also notice strong correlation between model performance on Python and Bash. Lastly, as expected, the bigger and more expressive the models, the higher the correlation between the performance across all different languages. # 3.2 Infilling evaluations
2308.12950#37
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
37
as poor robustness in instruction following, limited generalization capabilities in unseen tasks, and a lack of in-context abilities. With the rapid development of large language models (LLMs) (Brown et al., 2020; OpenAI, 2023; Anil et al., 2023; Gao et al., 2023; Qwen, 2023), researchers have started building more powerful large vision-language models (LVLMs) based on LLMs (Alayrac et al., 2022; Chen et al., 2022; Li et al., 2023c; Dai et al., 2023; Huang et al., 2023; Peng et al., 2023; Zhu et al., 2023; Liu et al., 2023; Ye et al., 2023b,a; Chen et al., 2023a; Li et al., 2023a; Zhang et al., 2023; Sun et al., 2023). BLIP-2 (Li et al., 2023c) proposes Q-Former to align frozen vision foundation models and LLMs. Meanwhile, LLaVA (Liu et al., 2023) and MiniGPT-4 (Zhu et al., 2023) introduce visual instruction tuning to enhance instruction-following capabilities in LVLMs. Additionally,
2308.12966#37
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
38
many: Using language models to simulate human samples. Political Analysis, 31(3): 337–351. Bran, A. M.; Cox, S.; White, A. D.; and Schwaller, P. 2023. ChemCrow: Augmenting large-language models with chemistry tools. arXiv:2304.05376. Davidsson, P. 2002. Agent based social simulation: A computer science view. Journal of Artificial Societies and Social Simulation, 5(1). Grigorenko, E.; and Sternberg, R. 1993. Thinking styles in teaching inventory. Unpublished test, Yale University. Jiang, H.; Zhang, X.; Cao, X.; and Kabbara, J. 2023. PersonaLLM: Investigating the Ability of GPT-3.5 to Express Personality Traits and Gender Differences. arXiv:2305.02547. John, O. P.; Srivastava, S.; et al. 1999. The Big-Five trait taxonomy: History, measurement, and theoretical perspectives. Krishna, R.; Lee, D.; Fei-Fei, L.; and Bernstein, M. S. 2022. Socially situated artificial intelligence enables learning from human
2308.12503#38
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
38
Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023. A. E. Elo. The proposed USCF rating system, its development, theory, and applications. Chess Life, XXII(8): 242–247, 1967. Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14953–14962, 2023. Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023a. Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. ToolkenGPT: Augmenting frozen language models with massive tools via tool embeddings. arXiv preprint arXiv:2305.11554, 2023b. J. Hendler. Is there an intelligent agent in your future? Nature, 1999.
2308.12519#38
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12950
38
# 3.2 Infilling evaluations Performance cost of infilling training. Previous studies on infilling (or fill-in-the-middle, FIM) code models assert that the traditional next-token prediction objective can be replaced by a multitask infilling objective with an infilling rate of up to 90% at no cost for left-to-right autoregressive test losses (Bavarian et al., 2022) and only a small cost for downstream evaluation performance (Allal et al., 2023). In Table 5, we independently validate both findings at the scale of 7B and 13B parameters and 500B training tokens of code. The 7B model loses 0.6 percentage points on average across HumanEval and MBPP pass@1, pass@10 and pass@100 scores if trained with an infilling objective, while the 13B model loses 1.1 percentage points. [Figure 3 panels: per-language performance correlation heatmaps for model sizes 7B, 13B, and 34B over Python, C++, Java, PHP, C#, TypeScript (TS), and Bash.]
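The pass@1, pass@10, and pass@100 metrics mentioned above are commonly computed with the unbiased estimator introduced alongside HumanEval (Chen et al., 2021): from n generated samples per problem, of which c pass the unit tests, pass@k is estimated as 1 − C(n−c, k)/C(n, k). The sketch below implements that estimator; the per-problem sample counts in the usage example are made up.

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples, of which c pass all unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Hypothetical usage: 200 samples per problem, averaged over problems.
results = [(200, 37), (200, 5), (200, 0)]  # (n, c) pairs per problem, made-up numbers
for k in (1, 10, 100):
    score = sum(pass_at_k(n, c, k) for n, c in results) / len(results)
    print(f"pass@{k} = {score:.3f}")
```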
2308.12950#38
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
38
and Mini- GPT4 (Zhu et al., 2023) introduce visual instruction tuning to enhance instruction following capabilities in LVLMs. Additionally, mPLUG-DocOwl (Ye et al., 2023a) incorporates document understanding capabilities into LVLMs by introducing digital documents data. Kosmos2 (Peng et al., 2023), Shikra (Chen et al., 2023a), and BuboGPT (Zhao et al., 2023) further enhance LVLMs with visual grounding abilities, enabling region description and localization. In this work, we integrate image captioning, visual question answering, OCR, document understanding, and visual grounding capabilities into Qwen-VL. The resulting model achieves outstanding performance on these diverse style tasks.
2308.12966#38
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
39
R.; Lee, D.; Fei-Fei, L.; and Bernstein, M. S. 2022. Socially situated artificial intelligence enables learning from human interaction. Proceedings of the National Academy of Sciences, 119(39): e2115730119. Li, G.; Hammoud, H. A. A. K.; Itani, H.; Khizbullin, D.; and Ghanem, B. 2023. CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society. arXiv:2303.17760. Pudane, M.; Lavendelis, E.; and Radin, M. A. 2017. Human Emotional Behavior Simulation in Intelligent Agents: Processes and Architecture. Procedia Computer Science, 104: 517–524. ICTE 2016, Riga Technical University, Latvia. Markel, J. M.; Opferman, S. G.; Landay, J. A.; and Piech, C. 2023. GPTeach: Interactive TA Training with GPT Based Students. Nair, V.; Schumacher, E.; Tso, G.; and Kannan, A. 2023. DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents.
2308.12503#39
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
39
J. Hendler. Is there an intelligent agent in your future? Nature, 1999. Qiao Jin, Yifan Yang, Qingyu Chen, and Zhiyong Lu. GeneGPT: Augmenting large language models with domain tools for improved access to biomedical information. ArXiv, 2023. D. Kahneman and A. Tversky. Choices, values, and frames. 2000. P. Maes. Agents that reduce work and information overload. Commun. ACM, 37:30–40, 1994. Yohei Nakajima. BabyAGI. Python. https://github.com/yoheinakajima/babyagi, 2023. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. ArXiv preprint, abs/2112.09332, 2021. OpenAI. OpenAI: Introducing ChatGPT, 2022. URL https://openai.com/blog/chatgpt. OpenAI. GPT-4 technical report, 2023. C. Plott. Path independence, rationality, and social choice. Econometrica, 41:1075–1091, 1973.
2308.12519#39
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
39
[Figure 3 shows grouped bar charts for beam sizes k = 1, 2, 3 on Ravens-Hanoi, Ravens-Blocks, BabyAI, and VirtualHome.] Figure 3: [Best viewed in color] From left to right: Planning success, cost-effectiveness, generalization for different beam sizes. Note that generalization on the test-generalize split for VirtualHome is reported as a percentage. # 7.2 Environments We tested in three environments, detailed in Table 2. • Ravens (Zeng et al. 2021) is a PyBullet simulated task set focusing on “pick and place”. It includes 10 tabletop tasks, of which we use two: (i) Tower of Hanoi (sequence), a variation of the classic puzzle focusing on specific intermediate goals, like moving a particular disk to a designated rod while keeping the traditional constraints. This creates more goal diversity; (ii) Put blocks in bowls requires placing blocks into bowls based on rules like “put yellow block in green bowls”. We adapt the environment for language tasks, observations, and actions.
2308.12682#39
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
39
Figure 3: Correlations between Languages. Correlation scores between Python, C++, Java, PHP, C#, TypeScript (TS), and Bash, reported for different model sizes. The code for this figure was generated by Code Llama - Instruct; the prompt and code can be seen in Figure 22. Because of this modest decline in performance and the wide applicability of models with infilling capability, we decide to release Code Llama 7B, 13B and 70B in this configuration. Code infilling benchmarks. Our infilling models reach state-of-the-art performance in code infilling benchmarks among models of their size. We evaluate on two related code infilling benchmarks based on the HumanEval benchmark (Chen et al., 2021).
2308.12950#39
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
39
# 6 Conclusion and Future Work We release the Qwen-VL series, a set of large-scale multilingual vision-language models that aims to facilitate multimodal research. Qwen-VL outperforms similar models across various benchmarks, supporting multilingual conversations, multi-image interleaved conversations, grounding in Chinese, and fine-grained recognition. Moving forward, we are dedicated to further enhancing Qwen-VL’s capabilities in several key dimensions: • Integrating Qwen-VL with more modalities, such as speech and video. • Augmenting Qwen-VL by scaling up the model size, training data, and input resolution, enabling it to handle more complex and intricate relationships within multimodal data. • Expanding Qwen-VL’s prowess in multi-modal generation, specifically in generating high-fidelity images and fluent speech. # References Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. nocaps: novel object captioning at scale. In ICCV, 2019.
2308.12966#39
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]
2308.12503
40
E.; Tso, G.; and Kannan, A. 2023. DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents. arXiv:2303.17071. OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt. Accessed: 2023-03-01. OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774. Park, J. S.; O’Brien, J. C.; Cai, C. J.; Morris, M. R.; Liang, P.; and Bernstein, M. S. 2023. Generative Agents: Interactive Simulacra of Human Behavior. arXiv:2304.03442.
2308.12503#40
CGMI: Configurable General Multi-Agent Interaction Framework
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work.
http://arxiv.org/pdf/2308.12503
Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang
cs.AI, cs.HC, cs.MA
11 pages, 15 figures
null
cs.AI
20230824
20230828
[ { "id": "2302.01560" }, { "id": "2307.05300" }, { "id": "2307.07924" }, { "id": "2210.03350" }, { "id": "2304.05376" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.04091" }, { "id": "2305.02547" }, { "id": "2303.17071" }, { "id": "2303.17760" }, { "id": "2303.08774" } ]
2308.12519
40
C. Plott. Path independence, rationality, and social choice. Econometrica, 41:1075–1091, 1973. Cheng Qian, Chi Han, Yi R Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. Creator: Disentangling abstract and concrete reasonings of large language models through tool creation. arXiv preprint arXiv:2305.14318, 2023. Yujia Qin, Zihan Cai, Dian Jin, Lan Yan, Shihao Liang, Kunlun Zhu, Yankai Lin, Xu Han, Ning Ding, Huadong Wang, et al. WebCPM: Interactive web search for Chinese long-form question answering. arXiv preprint arXiv:2305.06849, 2023a. Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023b.
2308.12519#40
Rational Decision-Making Agent with Internalized Utility Judgment
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision-making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge decisions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RadAgent (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual decision steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness and efficiency.
http://arxiv.org/pdf/2308.12519
Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun
cs.CL
Received 8,6,6,6 scores on ICLR 2024
null
cs.CL
20230824
20240117
[ { "id": "2305.14318" }, { "id": "2306.06624" }, { "id": "2305.17926" }, { "id": "2305.10601" }, { "id": "2307.16789" }, { "id": "2305.06849" }, { "id": "2304.08354" }, { "id": "2308.09687" }, { "id": "2306.11489" }, { "id": "2306.17563" }, { "id": "2305.14992" }, { "id": "2305.01937" }, { "id": "2308.10379" }, { "id": "2305.11554" } ]
2308.12682
40
• BabyAI (Chevalier-Boisvert et al. 2019) is a 2D-gridworld environment where a bot is provided a language task sampled from a predefined grammar. We focus on pickup tasks where the agent navigates to collect an object, often unlocking doors or moving obstacles. Task difficulty varies with rooms, obstacles, and distractor objects. The agent’s actions include high-level commands like pickup and drop which are composed of atomic actions: “left”, “right”, “forward”, “pick”, and “drop” (see Figure 1) • VirtualHome (Puig et al. 2018) is an interactive platform to simulate complex household activities via interactions with the environment, such as picking up objects, switching on/off appliances. We utilize the VirtualHome-Env dataset (Liao et al. 2019), comprising daily household activities from 7 scenes gathered via crowdsourcing. We only use the goal as the input (see Table 2).
2308.12682#40
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
http://arxiv.org/pdf/2308.12682
Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
cs.AI
Accepted in AAAI 2024. Website: https://rishihazra.github.io/SayCanPay/
null
cs.AI
20230824
20240101
[ { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2305.14992" }, { "id": "2302.05128" }, { "id": "2212.08681" }, { "id": "1807.03748" }, { "id": "2303.00855" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2210.17323" }, { "id": "2210.11416" }, { "id": "2201.04735" }, { "id": "2202.10936" }, { "id": "2209.07753" }, { "id": "2302.06706" }, { "id": "1909.08593" }, { "id": "2307.15818" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2305.14314" } ]
2308.12950
40
The HumanEval infilling benchmark (Fried et al., 2023) turns the reference solutions of the HumanEval benchmark (Chen et al., 2021) into infilling problems by masking out either individual lines or blocks consisting of multiple consecutive lines. It has been extended in Bavarian et al. (2022) with a random span infilling task in which the masking is applied to a randomly selected substring at the character level. Predictions are scored with a pass@1 score based on the test cases of the original HumanEval problems. According to the results in Table 14, our models outperform all other infilling models of their size. Note, however, that the results in random span infilling are significantly worse in suffix-prefix-middle (SPM) format than in prefix-suffix-middle (PSM) format as it would require token healing (Microsoft, 2023), which we have not implemented for this evaluation (see Appendix E for further discussion).
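To make the random-span construction and the PSM/SPM ordering concrete, the sketch below masks a random character-level substring of a reference solution and formats the prompt with generic <PRE>/<SUF>/<MID> sentinel strings; these sentinels, the toy reference solution, and the helper names are illustrative assumptions and not necessarily Code Llama's exact special tokens or formatting.

```python
import random


def make_random_span_example(solution: str, rng: random.Random):
    """Mask a random character-level substring, yielding (prefix, middle, suffix)."""
    i, j = sorted(rng.sample(range(len(solution) + 1), 2))
    return solution[:i], solution[i:j], solution[j:]


def format_psm(prefix: str, suffix: str) -> str:
    # Prefix-Suffix-Middle: the model sees the prefix, then the suffix, then generates the middle.
    return f"<PRE>{prefix}<SUF>{suffix}<MID>"


def format_spm(prefix: str, suffix: str) -> str:
    # Suffix-Prefix-Middle: the suffix is presented first, then the prefix, so the
    # generated middle directly continues the prefix text.
    return f"<PRE><SUF>{suffix}<MID>{prefix}"


rng = random.Random(0)
solution = "def add(a, b):\n    return a + b\n"  # made-up reference solution
prefix, middle, suffix = make_random_span_example(solution, rng)
prompt = format_psm(prefix, suffix)
# A correct completion reproduces `middle`; it is then scored with pass@1 by running
# the original HumanEval unit tests on prefix + completion + suffix.
```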
2308.12950#40
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
http://arxiv.org/pdf/2308.12950
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
cs.CL
null
null
cs.CL
20230824
20240131
[]
2308.12966
40
Parikh, Stefan Lee, and Peter Anderson. nocaps: novel object captioning at scale. In ICCV, 2019. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In NeurIPS, 2022. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv:2305.10403, 2023. Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, et al. Ofasys: A multi-modal multi-task learning system for building generalist models. arXiv:2212.04408, 2022.
2308.12966#40
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. Code, demo and models are available at https://github.com/QwenLM/Qwen-VL.
http://arxiv.org/pdf/2308.12966
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
cs.CV, cs.CL
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
cs.CV
20230824
20231013
[ { "id": "2211.01335" }, { "id": "2307.02499" }, { "id": "2305.10403" }, { "id": "2308.16890" }, { "id": "2208.10442" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.11172" }, { "id": "2210.08402" }, { "id": "2306.02858" }, { "id": "2209.06794" }, { "id": "1504.00325" }, { "id": "2204.13653" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2212.04408" }, { "id": "2307.05222" }, { "id": "2306.15195" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2306.14824" }, { "id": "2102.05918" }, { "id": "2205.01917" }, { "id": "2111.11432" }, { "id": "2307.16125" }, { "id": "2305.03726" }, { "id": "2203.10244" }, { "id": "2206.08916" }, { "id": "2304.14108" }, { "id": "2307.08581" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2305.18565" } ]