doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.12503 | 41 | Park, J. S.; Popowski, L.; Cai, C.; Morris, M. R.; Liang, P.; and Bernstein, M. S. 2022. Social Simulacra: Creating Populated Prototypes for Social Computing Systems. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST '22. New York, NY, USA: Association for Computing Machinery. ISBN 9781450393201. Press, O.; Zhang, M.; Min, S.; Schmidt, L.; Smith, N. A.; and Lewis, M. 2023. Measuring and Narrowing the Compositionality Gap in Language Models. arXiv:2210.03350. Qian, C.; Cong, X.; Yang, C.; Chen, W.; Su, Y.; Xu, J.; Liu, Z.; and Sun, M. 2023. Communicative Agents for Software Development. arXiv:2307.07924. Qian, Q.; Huang, M.; Zhao, H.; Xu, J.; and Zhu, X. 2018. Assigning Personality/Profile to a Chatting Machine for Coherent Conversation Generation. In Ijcai, 4279–4285. Soloman, B. A.; and Felder, | 2308.12503#41 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 41 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023c.
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, et al. Large language models are effective text rankers with pairwise ranking prompting. arXiv preprint arXiv:2306.17563, 2023d.
Toran Bruce Richards. Auto-gpt: An autonomous gpt-4 experiment, 2023.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. ArXiv preprint, abs/2302.04761, 2023.
J. Searle. Speech acts: An essay in the philosophy of language. 1969. | 2308.12519#41 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 41 | Data Splits and Evaluation. We aim to assess the success, cost-effectiveness, and out-of-distribution (OOD) generalization of the generated plans. We created three data splits for each environment using expert trajectories. (i) train split for Can, Pay model training and few-shot prompting of the Say Model; (ii) test split assesses the LM planners' ability to generate successful plans (i.e. reach the goal within limited steps), and also the planners' ability to generate cost-effective plans (i.e. plans that succeed and also have the same plan length as the expert plan⁵). (iii) test-generalize split focuses on the generalization capabilities like handling novel initial observations (e.g., unseen colors of blocks and bowls, distractors in BabyAI), longer sequence lengths (e.g., more blocks or disks in Ravens, more rooms in BabyAI), and unseen tasks in VirtualHome. All test splits have # total episodes = 100 unless specified otherwise. Moreover, all splits are disjoint (i.e. no overlap). | 2308.12682#41 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 41 | Allal et al. (2023) translates the HumanEval infilling benchmark to other programming languages using MultiPL-E (Cassano et al., 2023). Single lines are masked and predictions are scored with an exact match metric against the ground truth solution. Our models, including Code Llama 7B, outperform all open infilling models across the three programming languages contained in the benchmark (Table 6). We observe a further increase in performance when prompting the models in SPM format, like witnessed in Bavarian et al. (2022).
# 3.3 Long context evaluations
We explore Code Llama's ability to work with long sequences by measuring perplexity, key retrieval accuracy and performance during generation on code completion tasks. These tasks, and our results are detailed below. For full results and comparisons to alternative techniques of increasing the context length of LLMs, we refer to Appendix G. | 2308.12950#41 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
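The Code Llama record above describes infilling evaluation in which single lines are masked and the model completes them from the surrounding prefix and suffix, with prompts arranged in PSM or SPM order and predictions scored by exact match. The following is a minimal illustrative sketch of assembling such fill-in-the-middle prompts; the `<PRE>`/`<SUF>`/`<MID>` sentinel strings and helper names are assumptions for illustration, not necessarily the exact special tokens or code used by the paper.

```python
# Minimal sketch of fill-in-the-middle (FIM) prompt assembly for single-line
# infilling. Sentinel strings are illustrative placeholders; the exact special
# tokens depend on the model's tokenizer.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def make_fim_prompt(prefix: str, suffix: str, mode: str = "PSM") -> str:
    """Arrange prefix/suffix around the masked span in PSM or SPM order."""
    if mode == "PSM":   # prefix, then suffix, then ask for the middle
        return f"{PRE}{prefix}{SUF}{suffix}{MID}"
    elif mode == "SPM": # one common suffix-prefix-middle arrangement
        return f"{PRE}{SUF}{suffix}{MID}{prefix}"
    raise ValueError(f"unknown mode: {mode}")

def exact_match(prediction: str, ground_truth: str) -> bool:
    """Score a single-line infilling prediction as in the benchmark above."""
    return prediction.strip() == ground_truth.strip()

# Toy example: mask the second line of a tiny source file.
source = "def add(a, b):\n    return a + b\n"
lines = source.splitlines(keepends=True)
prefix, middle, suffix = "".join(lines[:1]), lines[1], "".join(lines[2:])
prompt = make_fim_prompt(prefix, suffix, mode="SPM")
assert exact_match("    return a + b", middle)
```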
2308.12966 | 41 | Shuai Bai, Shusheng Yang, Jinze Bai, Peng Wang, Xingxuan Zhang, Junyang Lin, Xinggang Wang, Chang Zhou, and Jingren Zhou. Touchstone: Evaluating vision-language models by language models. arXiv:2308.16890, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020.
Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset, 2022. URL https://github.com/kakaobrain/coyo-dataset.
Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In CVPR, 2021. | 2308.12966#41 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 42 | to a Chatting Machine for Coherent Conversation Generation. In Ijcai, 4279–4285. Soloman, B. A.; and Felder, R. M. 2005. Index of learning styles questionnaire. NC State University. Available online at: http://www.engr.ncsu.edu/learningstyles/ilsweb.html (last visited on 14.05.2010), 70. Wang, L.; Xu, W.; Lan, Y.; Hu, Z.; Lan, Y.; Lee, R. K.-W.; and Lim, E.-P. 2023a. Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models. arXiv:2305.04091. Wang, Z.; Cai, S.; Liu, A.; Ma, X.; and Liang, Y. 2023b. Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents. arXiv:2302.01560. Wang, Z.; Mao, S.; Wu, W.; Ge, T.; Wei, F.; and Ji, H. 2023c. Unleashing Cognitive Synergy in Large Language Models: A | 2308.12503#42 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 42 | J. Searle. Speech acts: An essay in the philosophy of language. 1969.
Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, and Ming Jin. Algorithm of thoughts: Enhancing exploration of ideas in large language models. arXiv preprint arXiv:2308.10379, 2023.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface, 2023.
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023.
Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li. Restgpt: Connecting large language models with real-world applications via restful apis. arXiv preprint arXiv:2306.06624, 2023.
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 42 | Baselines. At the action level, we evaluate our decoding scores (Say, SayCan, SayCanPay) using various decoding strategies (Greedy and Beam-Action). Therefore, our baselines employ a mix of these strategies and scores. For tokens, we use the Greedy-Token decoding strategy as a reference. Notably, Greedy-Action SayCan is the offline planning version of the original SayCan paper (Ahn et al. 2022).
Training and Inference Details. We use 800 expert train trajectories for each Ravens task and 400 for BabyAI. For VirtualHome, we retained ≈ 800 compatible trajectories for the current simulator. An additional 100 expert trajectories were collected for each test split (20 for VirtualHome test-generalize). The Can and Pay models were trained on 7 NVIDIA-DGX V-100 GPUs using the Distributed Data-Parallel framework across 20 epochs. Training parameters included a 1e-4 learning rate, AdamW optimizer with 1e-5 weight decay, a batch size of 50, a train-validation split of 80-20. For inference, the Say model was loaded using Model Parallel on the same GPUs. Inference hyperparameters are listed in Table 6. Parameters like beam groups and diversity penalty encourage diversity among the beams, thus avoiding multiple similar sequences. We used 8-bit precision for GPU-efficient model loading (Dettmers et al. 2022). | 2308.12682#42 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
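The SayCanPay records above describe decoding over whole actions: a Say model proposes candidate actions, learned Can (feasibility) and Pay (long-term payoff) models re-score them, and Greedy-Action or Beam-Action search keeps the best partial plans. The sketch below illustrates one beam-expansion step under that idea; the function signatures, the product-style combined score, and the default beam width are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Beam-Action step with a combined Say*Can*Pay score.
from typing import Callable, List, Tuple

def combined_score(say_p: float, can_p: float, pay_p: float) -> float:
    # A product of the three scores is one natural combination choice.
    return say_p * can_p * pay_p

def beam_action_step(
    beams: List[Tuple[List[str], float]],                      # (partial plan, cumulative score)
    propose: Callable[[List[str]], List[Tuple[str, float]]],   # Say: candidate actions + probabilities
    can: Callable[[List[str], str], float],                    # feasibility estimate for an action
    pay: Callable[[List[str], str], float],                    # long-term payoff estimate for an action
    beam_width: int = 3,
) -> List[Tuple[List[str], float]]:
    """Expand every beam by its candidate actions and keep the top-k plans."""
    expanded = []
    for plan, cum in beams:
        for action, p_say in propose(plan):
            score = combined_score(p_say, can(plan, action), pay(plan, action))
            expanded.append((plan + [action], cum * score))
    return sorted(expanded, key=lambda x: x[1], reverse=True)[:beam_width]

# Toy usage with hand-written stand-ins for the three models.
beams = [([], 1.0)]
propose = lambda plan: [("pick red block", 0.6), ("open drawer", 0.4)]
can = lambda plan, a: 0.9 if "block" in a else 0.2
pay = lambda plan, a: 0.8
beams = beam_action_step(beams, propose, can, pay, beam_width=2)
```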
2308.12950 | 42 | Perplexity during extrapolation. In Figure 4a, perplexity is computed over 4M tokens from the code dataset, using a subset of our validation data consisting of large source files (≥50kB). For all model sizes, we observe a steady decrease in perplexity well beyond 16384 tokens, which is the sequence length we use for long-context fine-tuning. After 100K tokens, the perplexity increases only slightly, in contrast to the well-known instability phenomenon when testing transformer models on sequences larger than those seen during training (Press et al., 2022). | 2308.12950#42 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
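The record above reports perplexity measured over very long token streams (millions of tokens, windows far beyond the 16k training length). As a rough illustration of how such a number can be computed, here is a minimal sketch that scores fixed-size windows and aggregates their negative log-likelihoods; the non-overlapping windowing and the scorer interface are simplifying assumptions, not the paper's exact evaluation code.

```python
# Minimal sketch of perplexity over a long token sequence via fixed windows.
import math
from typing import Callable, List

def perplexity(token_ids: List[int],
               nll_of_window: Callable[[List[int]], float],
               window: int = 16384) -> float:
    """nll_of_window returns the summed negative log-likelihood of one window."""
    total_nll, total_tokens = 0.0, 0
    for start in range(0, len(token_ids), window):
        chunk = token_ids[start:start + window]
        total_nll += nll_of_window(chunk)
        total_tokens += len(chunk)
    return math.exp(total_nll / max(total_tokens, 1))

# Toy usage with a fake scorer that assigns a constant NLL per token.
fake_nll = lambda ids: 0.69 * len(ids)
print(perplexity(list(range(40000)), fake_nll))  # ~exp(0.69) ~= 2.0
```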
2308.12966 | 42 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm's referential dialogue magic. arXiv:2306.15195, 2023a.
Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv:2209.06794, 2022.
Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, et al. Pali-x: On scaling up a multilingual vision and language model. arXiv preprint arXiv:2305.18565, 2023b.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv:1504.00325, 2015. | 2308.12966#42 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 43 | S.; Wu, W.; Ge, T.; Wei, F.; and Ji, H. 2023c. Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. arXiv:2307.05300. Weng, L. 2023. LLM-powered Autonomous Agents. https://lilianweng.github.io/posts/2023-06-23-agent/. Accessed: 2023-06-23. Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. arXiv:2210.03629. | 2308.12503#43 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 43 | Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor. Chatgpt for robotics: Design principles and model abilities. Technical Report MSR-TR-2023-8, Microsoft, February 2023.
Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926, 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.
M. Wooldridge and N. Jennings. Intelligent agents: theory and practice. The Knowledge Engineering Review, 10:115–152, 1995.
Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. ArXiv preprint, abs/2303.04671, 2023.
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 43 | ⁵We split test into two parts of 100 samples to evaluate success and cost-effectiveness. For VirtualHome, we use the annotated plans from its dataset.
[Figure 4 legend: Greedy-Token; Greedy-Action Say, SayCan, SayCanPay; Beam-Action Say, SayCan, SayCanPay. Panels: Ravens (tower of hanoi), Ravens (put blocks in bowls), BabyAI, VirtualHome. Y-axis: Relative Length.]
Figure 4: [Best viewed in color] The error plot represents the variance in relative length over models Vicuna and Flan- T5. Due to the open-ended nature of VirtualHome, the crowdsourced trajectories are not optimal, which explains why certain cases have a relative length > 1.0. Note that Greedy-Token decoding in VirtualHome has a relative length = 0 since no generated plans were executed successfully for both Vicuna and Flan-T5. | 2308.12682#43 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 43 |
Model, FIM, Size: HumanEval pass@1 / pass@10 / pass@100; MBPP pass@1 / pass@10 / pass@100; Test loss
Code Llama (w/o LCFT), without FIM, 7B: 33.2% / 43.3% / 49.9%; 44.8% / 52.5% / 57.1%; 0.408
Code Llama (w/o LCFT), without FIM, 13B: 36.8% / 49.2% / 57.9%; 48.2% / 57.4% / 61.6%; 0.372
Code Llama (w/o LCFT), with FIM, 7B: 33.6% / 44.0% / 48.8%; 44.2% / 51.4% / 55.5%; 0.407
Code Llama (w/o LCFT), with FIM, 13B: 36.2% / 48.3% / 54.6%; 48.0% / 56.8% / 60.8%; 0.373
Absolute gap (without FIM − with FIM), 7B: −0.4% / −0.7% / 1.1%; 0.6% / 1.1% / 1.6%; 0.001
Absolute gap (without FIM − with FIM), 13B: 0.7% / 0.9% / 3.3%; 0.2% / 0.6% / 0.8%; −0.001 | 2308.12950#43 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 43 | Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In ECCV, 2020.
Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv:2305.06500, 2023.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Un- terthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. | 2308.12966#43 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 44 | Career: Student. Name: John. Description: John is an Athletic Star student whose physical fitness and team spirit allow him to excel in various sports activities. The resilience and determination he demonstrates when faced with challenges are also significant strengths. He might neglect academic learning and artistic development because he devotes most of his time and energy to sports activities. He might also rely too heavily on sports, overlooking the need for a balanced physical and mental well-being. Personality (Big Five Personality): [[21, 5, 5, 3, 5, 3], [18, 4, 3, 3, 4, 4], [18, 4, 4, 3, 4, 3], [16, 4, 3, 3, 3, 3], [16, 3, 3, 3, 4, 3]] Learning Style (Solomon's Learning Styles): [[Active-3, a, a, a, a, a, b, b, a, a, b, b], [Sensory-10, a, a, a, a, b, a, a, a, a, a, a], [Visual-11, a, a, a, a, a, a, a, a, a, a, a], [Sequential-10, a, a, a, a, a, a, b, a, a, a, a]] | 2308.12503#44 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 44 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, and Xindong Wu. Chatgpt is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling. arXiv preprint arXiv:2306.11489, 2023.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. ArXiv preprint, abs/2210.03629, 2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
# A SELF-JUDGMENT PROMPT
Our self-judgment prompt is designed as follows: | 2308.12519#44 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
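The RadAgent record above describes Elo-based Utility Construction: decision steps receive Elo scores that are refined through pairwise comparisons (for example, produced by the self-judgment prompt shown in the chunk). As an illustrative aside, here is a minimal sketch of an Elo-style update driven by such pairwise outcomes; the K-factor, initial ratings, and key names are assumed values for illustration, not figures from the paper.

```python
# Minimal sketch of Elo-style utility updates driven by pairwise judgments.
def expected(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, outcome_a: float, k: float = 32.0):
    """outcome_a is 1.0 if A is judged better, 0.0 if B is, 0.5 for a tie."""
    e_a = expected(r_a, r_b)
    new_a = r_a + k * (outcome_a - e_a)
    new_b = r_b + k * ((1.0 - outcome_a) - (1.0 - e_a))
    return new_a, new_b

# Example: two decision steps start at a common baseline rating and are
# re-rated after an LLM judge prefers the first one.
ratings = {"step_1": 1000.0, "step_2": 1000.0}
ratings["step_1"], ratings["step_2"] = elo_update(ratings["step_1"], ratings["step_2"], outcome_a=1.0)
```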
2308.12682 | 44 | 7.3 Results We analyze the results along the following axes: decoding strategies, decoding scores, and transformer architectures. We assessed planning success and generalization by executing the generated plans in simulators such as Ravens and BabyAI, which have built-in validation checks to determine goal achievement. For the more open-ended VirtualHome environment, we manually reviewed fully executed plans to ensure they met the intended task objectives. For cost-effectiveness, we acquired expert trajectories for each test sample using an oracle planner. Comparing decoding scores. From Tables 3, 4, the performance across various decoding scores can be summarized as Say < SayCan ≤ SayCanPay. (i) planning success: The SayCanPay and SayCan scores lead to comparable performances, often outperforming Say. The Pay model's minor performance edge could be due to its focus on selecting actions based on long-term relevance, potentially avoiding irreversible (breaking an egg) or even absorbing states (discharged cellphone) from where it is impossible to reach the goal (i.e. planning is non-ergodic). (ii) cost-effectiveness: SayCanPay leads to a significant improvement over | 2308.12682#44 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 44 | Table 5: Comparison of models with and without FIM training. pass@1, pass@10 and pass@100 scores on HumanEval and MBPP evaluated at temperature 0.1 for models trained with and without infilling (FIM) objective. Infilling training incurs no cost on autoregressive test set loss, but a small cost on HumanEval and MBPP pass@k metrics that is aggravated at higher sample counts k. The models are compared prior to long context fine-tuning (LCFT).
Model, Size: Python, Java, JavaScript (PSM / SPM where both settings are reported)
InCoder, 6B: 31.0%, 49.0%, 51.0%
SantaCoder, 1.1B: 44.0%, 62.0%, 60.0%
StarCoder, 15.5B: 62.0%, 73.0%, 74.0%
Code Llama, 7B: 67.6% / 72.7%, 74.3% / 77.6%, 80.2% / 82.6%
Code Llama, 13B: 68.3% / 74.5%, 77.6% / 80.0%, 80.7% / 85.0% | 2308.12950#44 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 44 | Zi-Yi* Dou, Aishwarya* Kamath, Zhe* Gan, Pengchuan Zhang, Jianfeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann LeCun, Nanyun Peng, Jianfeng Gao, and Lijuan Wang. Coarse-to-fine vision-language pre-training with fusion in the backbone. In NeurIPS, 2022.
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv:2306.13394, 2023.
Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. arXiv:2304.14108, 2023. | 2308.12966#44 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 45 | Appendix The appendix presents the character settings for each char- acter, a tree-structured learning style scale, and a teaching style scale. Role Set In this work, the initialization of role agents is mainly carried out from the perspectives of the career, name, basic informa- tion, personalities, and teaching or learning styles. Figure 8 shows Teacher Mrs Smithâs character settings. Figures 9, 10, 11, 12, and 13 show the character settings of students Ryan, John, Emily, Samantha, and Ying Zheng, respectively. Sternberg Thinking Styles in Teaching Mrs. Smithâs teaching style can be described by Sternberg Thinking Styles in Teaching Inventory with a tree-structured format (Figure 14). Each Level-2 node has its score, rep- resenting the degree of match between the description pro- vided and the actual teaching style, with a maximum of 7 and a minimum of 1. Each Level-1 node also has its cor- responding score, which is the sum of the scores of all its child nodes. The higher the value, the higher the degree of matching. Solomonâs Learning Styles Students learning styles can be described by Solomonâs Learning Styles Inventory with a tree-structured format (Fig- ure 15). Each | 2308.12503#45 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 45 | # A SELF-JUDGMENT PROMPT
Our self-judgment prompt is designed as follows:
You are value-GPT, an expert in defining which trail is better and closer to solving the task. Here is the task description: ******************************* {{BEGIN_DESCRIPTION}} your_task: {task_description} your_query: {input_description} {{END_DESCRIPTION}} ******************************* Here are two candidates A and B. They both try to handle the task with some function calls. Their trails are as follows. ******************************* {{CANDIDATE_A_START}} {candidate_A} {{CANDIDATE_A_END}} ******************************* {{CANDIDATE_B_START}} {candidate_B} {{CANDIDATE_B_END}} *******************************
Then, ChatGPT should call the following function[2] to give the judgment result.

{
    "name": "choose_preference",
    "description": "Choose the preferred answer for the query within all given answers.",
    "parameters": {
        "type": "object",
        "properties": {
            "preference": {
                "type": "number",
                "description": "The index of the preferred answer in all given answers."
            }
        }
    }
}

[2] https://openai.com/blog/function-calling-and-other-api-updates
| 2308.12519#45 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
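The self-judgment prompt and the choose_preference schema in the chunk above can be exercised with a small driver. The sketch below is illustrative only: the call_chat_model callable, the prompt-building helper, and the 0/1 reading of the returned index are assumptions of this sketch, not part of the RadAgent release.

import json
from typing import Callable

CHOOSE_PREFERENCE = {
    "name": "choose_preference",
    "description": "Choose the preferred answer for the query within all given answers.",
    "parameters": {
        "type": "object",
        "properties": {
            "preference": {
                "type": "number",
                "description": "The index of the preferred answer in all given answers.",
            }
        },
    },
}

def build_judgment_prompt(task: str, query: str, candidate_a: str, candidate_b: str) -> str:
    # Mirrors the structure of the self-judgment prompt shown in the chunk above.
    return "\n".join([
        "You are value-GPT, an expert in defining which trail is better and closer to solving the task.",
        "{{BEGIN_DESCRIPTION}}",
        f"your_task: {task}",
        f"your_query: {query}",
        "{{END_DESCRIPTION}}",
        "{{CANDIDATE_A_START}}", candidate_a, "{{CANDIDATE_A_END}}",
        "{{CANDIDATE_B_START}}", candidate_b, "{{CANDIDATE_B_END}}",
    ])

def judge_pair(call_chat_model: Callable[..., str], task: str, query: str,
               candidate_a: str, candidate_b: str) -> int:
    # `call_chat_model` is a placeholder for any chat backend that supports
    # function calling and returns the JSON arguments of the forced call.
    arguments = call_chat_model(
        prompt=build_judgment_prompt(task, query, candidate_a, candidate_b),
        functions=[CHOOSE_PREFERENCE],
        function_call={"name": "choose_preference"},
    )
    # Reading the returned index as 0 = candidate A, 1 = candidate B is an
    # assumption made for this sketch.
    return int(json.loads(arguments)["preference"])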
2308.12682 | 45 | it is impossible to reach the goal (i.e. planning is non-ergodic). (ii) cost-effectiveness: SayCanPay leads to a significant improvement over both Say (≈ 11–97% for Beam-Action) and SayCan (≈ 0–33% for Beam-Action and ≈ 1–150% for Greedy-Action). (iii) generalization: From Table 5, while the overall performance for SayCan and SayCanPay improves over Say, a noticeable drop in performance was observed for Ravens. This led to the hypothesis that the learned domain models (Can, Pay) are not generalizing to OOD data in certain environments (see § 7.5 for potential solutions). | 2308.12682#45 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12966 | 45 | Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv:2304.15010, 2023.
Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. Imagebind: One embedding space to bind them all. In CVPR, 2023.
Google. Puppeteer, 2023. URL https://github.com/puppeteer/puppeteer.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017.
Tanmay Gupta, Ryan Marten, Aniruddha Kembhavi, and Derek Hoiem. Grit: General robust image task benchmark. arXiv:2204.13653, 2022. | 2308.12966#45 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 46 | Solomon's Learning Styles Students learning styles can be described by Solomon's Learning Styles Inventory with a tree-structured format (Figure 15). Each Level-1 node has its type to represent your type in four different dimensions. Among the 11 sub-nodes, if a is selected more times than b, the represented category is the former in the description; otherwise, it is the latter. Each Level-2 node has its description and choice to indicate your selection for the current evaluation question. | 2308.12503#46 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
Comparing decoding strategies. From Tables 3, 4, 5, the overall performance across decoding strategies follows the pattern: Greedy-Token < Greedy-Action < Beam-Action across all splits. The Beam-Action Say, SayCan, and SayCanPay versions show improvement over their corresponding Greedy-Action counterparts. (i) planning success: Beam-Action SayCanPay beats Greedy-Action SayCanPay by ≈ 1–40%. Similar gains are also observed with other decoding scores. (ii) cost-effectiveness: Beam-Action SayCanPay improves over Greedy-Action SayCanPay by ≈ 0–73%. (iii) generalization: Beam-Action SayCanPay beats Greedy-Action SayCanPay by ≈ 0–89%.
Comparing Transformer Architectures. We did not observe a consistent performance gain for any particular architecture, suggesting that either is apt for planning. We lack a definitive explanation, and further research is required to understand how each LM component impacts reasoning.
7.4 Ablation Details • Effect of beam-size k: As seen in Figure 3, in general, both plan success and cost-effectiveness increase with beam size, with k = 1 (Greedy-Action), 2, 3 (Beam-Action). All experiments used the SayCanPay decoding score. However, no clear patterns were observed for generalization results. | 2308.12682#46 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
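As a companion to the decoding-strategy comparison in the chunk above, the following sketch shows how a single Greedy-Action step could combine the three scores. The multiplicative combination and the candidate interface are illustrative assumptions, not the paper's exact formulation; Beam-Action would keep the top-k scored expansions instead of a single one.

from typing import Callable, List, Tuple

def greedy_action_step(
    candidates: List[Tuple[str, float]],   # (action, Say probability) proposed by the LLM
    can_score: Callable[[str], float],     # learned feasibility estimate
    pay_score: Callable[[str], float],     # learned payoff estimate
) -> str:
    # Greedy-Action: score whole candidate actions (not tokens) and keep the best one.
    def combined(action: str, say_p: float) -> float:
        return say_p * can_score(action) * pay_score(action)
    best_action, _ = max(candidates, key=lambda c: combined(c[0], c[1]))
    return best_action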
2308.12950 | 46 | Key retrieval. In Figure 4b, we investigate key retrieval performance in a synthetic task. The prompt consists of a large amount of syntactically valid Python code, with a function returning a scalar inserted at a specified position. The model is asked to complete an assert statement with the return value of the inserted function. Liu et al. (2023b) showed that the inability to recall content placed in the middle of long prompts is a common failure mode in LLMs; our retrieval task is analogous to their setup, albeit tailored to code models which are not fine-tuned to follow instructions. All models exhibit strong retrieval performance on the sequence length they were trained on, with the exception of the 7B model for test cases in which the function is placed at the beginning of the prompt. We include OpenAI's gpt-3.5-turbo-16k-0613 as a reference. We query GPT with a system prompt of "Complete the following code." and a temperature of 0. For sequences beyond 16K tokens, i.e., when extrapolating, our models exhibit a decrease in performance (Appendix G.3). | 2308.12950#46 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
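A toy generator in the spirit of the key-retrieval probe described in the chunk above; the function name, key value, and filler handling are illustrative assumptions rather than the paper's actual test harness.

from typing import List

def build_key_retrieval_prompt(filler_functions: List[str], insert_at: int, key: int = 1337) -> str:
    # Bury a small function returning a known scalar inside a long stretch of
    # syntactically valid Python code, then ask the model to complete an assert
    # with its return value.
    key_fn = f"def magic_number():\n    return {key}\n"
    parts = filler_functions[:insert_at] + [key_fn] + filler_functions[insert_at:]
    return "\n".join(parts) + "\nassert magic_number() == "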
2308.12966 | 46 | Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In CVPR, 2018.
Ronghang Hu and Amanpreet Singh. Unit: Multimodal multitask learning with a unified transformer. In ICCV, 2021.
Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. Language is not all you need: Aligning perception with language models. arXiv:2302.14045, 2023.
Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019.
Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. Openclip, 2021. URL https://doi.org/10.5281/zenodo.5143773. | 2308.12966#46 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 47 | Impact of Say Model: Planning failures may arise because the Say model fails to propose a right action amongst the candidates, making Can and Pay ineffective. We studied the Say model's impact on overall performance using a Perfect Say that always recommends the correct action along with random distractors. From Table 7, we observed 16-84% improvements in SayCan and SayCanPay performance across various environments, indicating the potential of an improved Say model. Thus, using a larger model trained on more data could potentially enhance performance. • Plan length comparison: We compute a relative length = oracle plan length / generated plan length, which compares the generated and oracle plan lengths. A value = 1 indicates equal lengths and a value = 0 that the plan length is infinity (i.e. an unsuccessful plan). As shown in Figure 4, Beam-Action slightly improves over Greedy-Action.
Furthermore, SayCanPay scoring achieves the best relative length (≈ 1) for both Greedy and Beam-Action strategies signifying the cost-efficiency of the generated plans. | 2308.12682#47 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
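The relative-length metric defined in the chunk above is straightforward to compute. In the sketch below, a failed plan is treated as having infinite length and maps to a score of 0; representing failures with None is an assumption of this sketch.

from typing import Optional

def relative_length(oracle_len: int, generated_len: Optional[int]) -> float:
    # relative length = oracle plan length / generated plan length;
    # 1.0 means the generated plan matches the oracle length, 0.0 means failure.
    if generated_len is None or generated_len == 0:
        return 0.0
    return oracle_len / generated_len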
2308.12950 | 47 | Single line completion. Finally, we test the benefits of the ability to handle long context sizes in a single line code completion task. Our task is based on the Long Code Completion (LCC) benchmark (Guo et al., 2023).2 The LCC test set is skewed towards shorter files and we hence sample a new set of examples from LCC's validation and test set with an equalized distribution over file size (Appendix G.2). In Table 7, we compare the completion accuracy of the Code Llama models to their counterparts prior to long-context fine-tuning. Non-LCFT models fail to generate meaningful completions on long sequences and we thus truncate their prompts to the 4,000 tokens immediately preceding the line to complete. Across all metrics, models fine-tuned to handle long contexts achieve significantly higher performance. This demonstrates that long contexts are informative for code completion, and that with LCFT our models are able to leverage this information to improve their generations. We note that the longest example's prompt in this test consists
2Note that LCC data points are included in our code training data.
| 2308.12950#47 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 47 | Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V Le, Yunhsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. arXiv:2102.05918, 2021.
Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. Dvqa: Understanding data visualizations via question answering. In CVPR, 2018.
Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. Referitgame: Referring to objects in photographs of natural scenes. In EMNLP, 2014.
Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In ECCV, 2016.
Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. Ocr-free document understanding transformer. In ECCV, 2022. | 2308.12966#47 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 48 | Furthermore, SayCanPay scoring achieves the best relative length (≈ 1) for both Greedy and Beam-Action strategies signifying the cost-efficiency of the generated plans.
⢠Impact of problem size on planning time. Effect of action space: Planning time remains unaffected since the Say model generates rather than discriminates between actions. Effect of plan length: Greedy-Token run time increases by â¼2s for each action step. Effect of decoding strategy: â¼9s for Greedy-Token, â¼17s for Greedy-Action, â¼35s for Beam-Action. Effect of decoding score: Planning time is unaffected since the Can and Pay are small LMs with negligible overheads. Quantization techniques and advanced hardware can further reduce run time and is an active research area (Dettmers et al. 2023; Frantar et al. 2023).
⢠Qualitative Analysis: The Can model effectively selects feasible actions (Figure 1). The Pay model prioritizes actions that lead to quicker goal achievement. While Pay gives a high probability to the âdone taskâ action linking it to plan completion, the Can score negates it due to unsatisfied âdone taskâ preconditions. | 2308.12682#48 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 48 | 2Note that LCC data points are included in our code training data.
Figure 4: Code Llama behavior on long sequences. (a) Perplexity on large source files (â¥50 kB) from the validation data from the code dataset. The dashed line marks the fine-tuning context length. Perplexity decreases for up to 100K tokens for all Code Llama sizes. (b) Accuracy on a synthetic key retrieval task, with a context of 16K tokens and comparison to gpt-3.5-turbo.
Model EM BLEU EM BLEU EM BLEU Code Llama Code Llama â 36.86 7B 7B â 39.23 60.16 61.84 47.82 51.94 69.20 71.89 46.29 50.20 67.75 70.22 Code Llama 13B â 37.96 Code Llama 13B â 41.06 Code Llama 34B â 42.52 Code Llama 34B â 44.89 61.33 62.76 63.74 65.99 50.49 52.67 54.13 56.80 69.99 72.29 72.38 73.79 49.22 52.15 52.34 53.71 69.87 71.00 71.36 72.69 | 2308.12950#48 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 48 | Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In ICML, 2021.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. In IJCV, 2017.
Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv:2305.03726, 2023a.
Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv:2307.16125, 2023b.
Junnan Li, Ramprasaath R Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, and Steven Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In NeurIPS, 2021a. | 2308.12966#48 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 49 | Solomon's Learning Styles Students learning styles can be described by Solomon's Learning Styles Inventory with a tree-structured format (Figure 15). Each Level-1 node has its type to represent your type in four different dimensions. Among the 11 sub-nodes, if a is selected more times than b, the represented category is the former in the description; otherwise, it is the latter. Each Level-2 node has its description and choice to indicate your selection for the current evaluation question. Figure 8: Character setting for Mrs. Smith. | 2308.12503#49 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
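The tree-structured learning-style scale described in the chunk above can be represented directly. The dataclass layout and field names below are assumptions made for illustration, not CGMI's actual schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class InventoryItem:                 # a Level-2 node: one evaluation question
    description: str
    choice: str                      # "a" or "b"

@dataclass
class Dimension:                     # a Level-1 node, e.g. "active/reflective"
    name: str                        # assumed to be written as "former/latter"
    items: List[InventoryItem] = field(default_factory=list)   # 11 items per dimension

    @property
    def category(self) -> str:
        # If "a" is chosen more often than "b", the dimension resolves to the
        # former category in its description, otherwise to the latter.
        a_votes = sum(1 for item in self.items if item.choice == "a")
        former, latter = self.name.split("/")
        return former if a_votes > len(self.items) - a_votes else latter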
2308.12682 | 49 | Parameter Value Exceptions max new tokens beam groups diversity penalty candidates (m) beam-size (k) 10 3 2.0 6 3 11 Vicuna (Ravens-Blocks), 3 (VirtualHome) 4 for Flan-T5 (BabyAI) 8 for Flan-T5 (Baby-AI)
Table 6: Inference hyperparameters. Here the candidates (m) and the beam-size (k) parameter are over actions. The rest of the beam search parameters are over tokens.
# 7.5 Limitations and Future Work
The main limitations are (i) the need for expert trajectories to train domain models, and (ii) the domain models' limited adaptability to OOD data. These challenges are inherent to deep learning models. However, recent advances in LLMs offer promising solutions. For example, newer studies have leveraged LLMs for reward design due to their ability to infer intentions from minimal prompts (Kwon et al. 2023). Notably, LLMs outperform smaller counterparts like Bert in generalization. Since both Can and Pay scores resemble rewards, future studies could use LLMs to mitigate training and improve generalization. Another potential direction could be to experiment with symbolic methods and non-parameterized heuristics like comparing the current generated plan with the successful/expert trajectories in the buffer.
# 8 Conclusion | 2308.12682#49 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 49 | Table 7: Average single line completion performance on LCC-balanced. Comparison of models before and after long-context fine-tuning in terms of exact match (EM) and BLEU. For non-LCFT models, context size limits are respected by truncating prompts to 4,000 tokens.
of 103K tokens, for which all Code Llama models generate syntactically correct completions, with the 7B model producing an exact match.
Performance impact on short contexts. While our models are effective on long sequences, we observe that LCFT slightly hurts performance on standard code synthesis benchmarks consisting of short sequences. In Table 10, we observe an average decrease of 0.52 percentage points on HumanEval pass@1 and 1.9 points on MBPP for the pass@1 metric. Similarly, a breakdown of the code completion results in Table 7 by the number of tokens in each example shows that for prompts shorter than 4k tokens, long context fine-tuning induces a reduction of up to 2 BLEU points from base models after code training (Figure 9b). We observe similar decreases in performance for infilling tasks (Table 14). | 2308.12950#49 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 49 | Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, 2022.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv:2301.12597, 2023c.
Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. UNIMO: towards unified-modal understanding and generation via cross-modal contrastive learning. In ACL, 2021b.
Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV, 2020.
| 2308.12966#49 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 50 | # 8 Conclusion
Ravens-Hanoi Ravens-Blocks BabyAI VirtualHome Score SayCan SayCanPay SayCan SayCanPay SayCan SayCanPay SayCan SayCanPay LM Perfect 48 50 52 54 81 88 49 52 88 92 70 75 90 92 60 64
Table 7: The table depicts the impact of the Say model on planning success performance. In this context, both "LM" and "Perfect" represent Say models. "LM" corresponds to the Vicuna model, while "Perfect Say" is an oracle Say model that consistently proposes the correct action along with two other distractor actions as next candidates. For all experiments, we used the Greedy-Action decoding strategy.
We proposed to combine the world knowledge and generative capabilities of LLMs with the systematicity of classical planning by formulating a heuristic search-based planning framework for LLMs. We demonstrated how to generate plans that are both feasible and cost-effective. While LLMs still cannot generate long-horizon plans on par with classical planners, our method overcomes issues inherent to LLM-based planning and extends traditional planning with the advantages of language models, marking significant progress for planning research with LLMs.
# Acknowledgement | 2308.12682#50 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 50 | LCFT comes at a cost for short sequences, and slightly decreases our scores on standard coding benchmarks such as HumanEval and MBPP. However, many real-world use cases are not captured by these benchmarks, and we believe that this cost is more than offset by the potential of handling long sequences for real downstream applications. Hence we opt to release all our Code Llama, Code Llama - Python and Code Llama - Instruct models with long-context capabilities.
Figure 5: (a) Training perplexity of Code Llama models. The continued decrease at 500B tokens suggests further training would be beneficial. Results are presented without infilling for 7B and 13B models. (b) Training losses of Code Llama 7B versus an identical model trained from scratch. (c) MBPP (coding benchmark) vs. helpfulness according to the helpfulness reward model from Llama 2 (Touvron et al., 2023b).
# 3.4 Ablation studies
# 3.4.1 Fine tuning Llama 2 vs. training from scratch on code
Code Llama is based on the Llama 2 models, which are trained on 2T tokens of text, including only 80B tokens of code. We tune these models on 500B extra tokens, consisting mostly of code (85%). Figure 5a shows the training curves of Code Llama. | 2308.12950#50 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 50 |
Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, et al. M6: A chinese multimodal pretrainer. In KDD, 2021.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv:2304.08485, 2023.
Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha Kembhavi. Unified-io: A unified model for vision, language, and multi-modal tasks. arXiv:2206.08916, 2022a.
Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In NeurIPS, 2022b. | 2308.12966#50 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 51 | "Description": I like to have students design some discussion projects that they are interested in. "Score": [] "Description": I want students to learn how to solve problems on their own. "Score": [] "Description": I will choose course content that allows students to learn in their own way. "Score": [] "Description": Legislative "Score": [] "Description": When assigning a written assignment, I let students come up with their own topics. "Score": [] "Description": In my class, I try my best to stimulate students' creativity. "Score": [] "Description": I teach my students to understand the importance of creativity in every activity, such as in personal life, learning, and work. "Score": [] "Description": I often assign some homework that requires students to complete independently. "Score": [] "Description": Good students always pay attention to listen to the teacher's instructions. "Score": [] "Description": Students should do what teachers ask them to do. "Score" | 2308.12503#51 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12950 | 51 | We compare the 7B parameter model to an identical model trained from scratch on the same data mix (Figure 5b). At the end of training, the loss of the model trained from scratch is equal to the loss of Code Llama 7B at about half of its training (with 240B fewer training tokens). Moreover, this gap becomes larger over time.
# 3.4.2 Instruction fine-tuning
General helpfulness vs. coding ability We evaluate Code Llama - Instruct and compare it to Llama 2-Chat for coding tasks and helpfulness (Figure 5c). We observe that Code Llama improves its coding abilities for each model size, while preserving the general helpfulness performance inherited from Llama 2. The results on the helpfulness axis are an indication that Code Llama performs well on general instruction following. But we emphasize that this result should be taken with a grain of salt, since we limited our automatic evaluation to scoring the model's answers with the Llama 2 reward model. | 2308.12950#51 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 51 | Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Gener- ation and comprehension of unambiguous object descriptions. In CVPR, 2016.
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In CVPR, 2019.
Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv:2203.10244, 2022.
Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In WACV, 2021.
Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. Ocr-vqa: Visual question answering by reading text in images. In ICDAR, 2019.
Openai. Chatml documents. URL https://github.com/openai/openai-python/blob/main/chatml.md.
OpenAI. Gpt-4 technical report, 2023. | 2308.12966#51 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 52 | listen to the teacher's instructions. "Score": [] "Description": Students should do what teachers ask them to do. "Score": [] "Description": I like to teach according to the instructions in the textbook manual. "Score": [] "Score": [] "Description": I prefer having students do homework on assigned topics rather than letting them choose topics freely. "Score": [] "Description": I think textbooks should include specific steps on how to teach each activity. "Score": [] "Description": I think it's equally important for teachers to let administrators know about teaching as the teaching itself. "Score": [] "Description": Students should follow the teacher's steps closely when learning. "Score": [] "Description": Teachers should continuously provide feedback on students' learning progress. "Score": [] "Description": In schools, the best way for teachers' professional growth is to provide opportunities for teachers to observe each other's classes and have time to evaluate each other's teaching. "Score": [] | 2308.12503#52 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12682 | 52 | References Ahn, M.; Brohan, A.; Brown, N.; Chebotar, Y.; Cortes, O.; David, B.; Finn, C.; Fu, C.; Gopalakrishnan, K.; Hausman, K.; Herzog, A.; Ho, D.; Hsu, J.; Ibarz, J.; Ichter, B.; Irpan, A.; Jang, E.; Ruano, R. J.; Jeffrey, K.; Jesmonth, S.; Joshi, N. J.; Julian, R.; Kalashnikov, D.; Kuang, Y.; Lee, K.-H.; Levine, S.; Lu, Y.; Luu, L.; Parada, C.; Pastor, P.; Quiambao, J.; Rao, K.; Rettinghouse, J.; Reyes, D.; Sermanet, P.; Sievers, N.; Tan, C.; Toshev, A.; Vanhoucke, V.; Xia, F.; Xiao, T.; Xu, P.; Xu, S.; Yan, M.; and Zeng, A. 2022. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. arXiv:2204.01691. Bonet, B.; and Geffner, H. 2001. Planning as heuristic search. Artificial | 2308.12682#52 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 52 | The value of self-instruct data We also perform ablations, showing the value of the self-instruct data that we generate with our own model. To evaluate the capacity of the model to answer questions, we use a zero-shot version of MBPP. We prompt the model to generate the code between [PYTHON] and [/PYTHON] tags to make it easy to parse the result. Our exact prompt is shown in Figure 13 in the Appendix. Table 8 shows the impact of training on data generated using our models and filtered with unit tests as described in Section 2.5. The self-instruct data allows us to improve our scores on benchmarks such as HumanEval and MBPP. It also makes the training more reliable. With self-instruct, the model easily learns to follow the format requested for MBPP zero-shot, while it sometimes fails without it.
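A minimal sketch of the tag parsing and unit-test filtering this paragraph implies, with hypothetical helper names; the actual self-instruct pipeline described in Section 2.5 of the Code Llama paper involves prompt construction, deduplication, and sandboxing that are omitted here.

```python
import re
import subprocess
import tempfile

TAG_RE = re.compile(r"\[PYTHON\](.*?)\[/PYTHON\]", re.DOTALL)

def extract_python(completion: str):
    """Pull the code block out of a completion that wraps it in [PYTHON]...[/PYTHON] tags."""
    match = TAG_RE.search(completion)
    return match.group(1).strip() if match else None

def passes_tests(solution: str, test_code: str, timeout: int = 10) -> bool:
    """Run a candidate solution together with its unit tests in a subprocess.
    Generated code is untrusted; a real pipeline would sandbox this step."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution + "\n\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def filter_self_instruct(samples):
    """Keep only generated solutions that pass their generated unit tests.
    `samples` is assumed to be a list of (model_completion, test_code) pairs."""
    kept = []
    for completion, test_code in samples:
        code = extract_python(completion)
        if code is not None and passes_tests(code, test_code):
            kept.append(code)
    return kept
```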
Unnatural model. For comparison purposes, we also finetuned Code Llama - Python 34B on 15,000 unnatural instructions similarly to Honovich et al. (2023) using the same prompts as for the self-instruct dataset. We do not release this model, but we observe clear improvements on HumanEval and MBPP which are indicative of the improvements that can be reached with a small set of high-quality coding data. The results of the unnatural model are shown in Table 2.
| 2308.12950#52 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 52 | OpenAI. Gpt-4 technical report, 2023.
Vicente Ordonez, Girish Kulkarni, and Tamara Berg. Im2text: Describing images using 1 million captioned photographs. In NeurIPS, 2011.
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. arXiv:2306.14824, 2023.
Qwen. Introducing qwen-7b: Open foundation and human-aligned models (of the state-of-the-arts), 2023. URL https://github.com/QwenLM/Qwen-7B.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. | 2308.12966#52 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 53 | professional growth is to provide opportunities for teachers to observe each other's classes and have time to evaluate each other's teaching. "Score": [] "Description": Students need to learn to critically evaluate and criticize the materials they read. "Score": [] "Description": Judicial "Score": [] "Description": Teachers need to do a lot of self-reflection, analysis, and evaluation of their own work. "Score": [] "Description": Understanding concepts is more important than simply rote learning or teaching methods to remember concepts. "Score": [] "Description": I think that for most materials students read, what they get out of it is quite superficial. "Score": [] "Description": One of the most important jobs of teachers is to assess students' learning status. "Score": [] "Description": Teachers must enable students to understand the conceptual knowledge related to the course, not just provide some facts. "Score": [] "Description": I like to focus on the general concepts of the subjects I teach, rather than list a lot of factual details. "Score" | 2308.12503#53 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12682 | 53 | in Robotic Affordances. arXiv:2204.01691. Bonet, B.; and Geffner, H. 2001. Planning as heuristic search. Artificial Intelligence, 129(1-2): 5â33. Brohan, A.; Brown, N.; Carbajal, J.; Chebotar, Y.; Chen, X.; Choromanski, K.; Ding, T.; Driess, D.; Dubey, A.; Finn, C.; Florence, P.; Fu, C.; Arenas, M. G.; Gopalakrishnan, K.; Han, K.; Hausman, K.; Herzog, A.; Hsu, J.; Ichter, B.; Irpan, A.; Joshi, N.; Julian, R.; Kalashnikov, D.; Kuang, Y.; Leal, I.; Lee, L.; Lee, T.-W. E.; Levine, S.; Lu, Y.; Michalewski, H.; Mordatch, I.; Pertsch, K.; Rao, K.; Reymann, K.; Ryoo, M.; Salazar, G.; Sanketi, P.; Sermanet, P.; Singh, J.; Singh, A.; Soricut, R.; Tran, H.; Vanhoucke, V.; Vuong, Q.; | 2308.12682#53 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 53 |
Size  SI  HumanEval  MBPP 3-shot  MBPP zero-shot
7B    ✗   30.5%      43.4%        37.6%
7B    ✓   34.8%      44.4%        37.4%
13B   ✗   40.9%      46.2%        20.4%
13B   ✓   42.7%      49.4%        40.2%
Table 8: Impact of self-instruct data. Impact of self-instruct data (SI) on the MBPP and HumanEval scores of our self-instruct models. The scores are computed using greedy decoding. In MBPP zero-shot, we prompt the model to generate the solution between [PYTHON][/PYTHON] tags. Removing SI results in generally lower scores on HumanEval and MBPP, and makes learning to generate code with the right format for MBPP zero shot much less reliable. | 2308.12950#53 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 53 | Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv:2210.08402, 2022a.
Christoph Schuhmann, Andreas Köpf, Richard Vencu, Theo Coombes, and Romain Beaumont. Laion coco: 600m synthetic captions from laion2b-en. https://laion.ai/blog/laion-coco/, 2022b.
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hyper- nymed, image alt-text dataset for automatic image captioning. In ACL, 2018.
Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In ECCV, 2020.
| 2308.12966#53 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 54 | : [] "Description": I like to focus on the general concepts of the subjects I teach, rather than list a lot of factual details. "Score": [] "Description": When I prepare for lessons, I would prepare the main points to teach, leaving the details for students to find out by themselves. "Score": [] "Description": Global "Score": [] "Description": I like to teach students a method that can be used to solve various problems. "Score": [] "Description": I prefer to explain to students the scope and conditions of applying a problem, rather than explain the details. "Score": [] "Description": I think students should learn how to understand some key issues and the context these issues exist in. "Score": [] "Description": The main task of teachers is to provide students with a way of thinking that can be universally applied in various aspects. "Score": [] "Description": Teachers must provide students with a lot of concrete and detailed course materials. "Score": [] "Description": I like to ask questions that require students to answer with accurate, precise and very detailed knowledge. "Score": [] "Description": For students, the most important thing is to know a lot of facts and details, then they can learn how to analyze and synthesize. "Score": | 2308.12503#54 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12682 | 54 | Sermanet, P.; Singh, J.; Singh, A.; Soricut, R.; Tran, H.; Vanhoucke, V.; Vuong, Q.; Wahid, A.; Welker, S.; Wohlhart, P.; Wu, J.; Xia, F.; Xiao, T.; Xu, P.; Xu, S.; Yu, T.; and Zitkovich, B. 2023. RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control. arXiv:2307.15818. Chevalier-Boisvert, M.; Bahdanau, D.; Lahlou, S.; Willems, L.; Saharia, C.; Nguyen, T. H.; and Bengio, Y. 2019. BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop. In International Conference on Learning Representations, volume 105. Chiang, W.-L.; Li, Z.; Lin, Z.; Sheng, Y.; Wu, Z.; Zhang, H.; Zheng, L.; Zhuang, S.; Zhuang, Y.; Gonzalez, J. E.; Stoica, I.; and Xing, E. P. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with | 2308.12682#54 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 54 | [Figure: pass@1, pass@10, and pass@100 as a function of sampling temperature for Code Llama 7B, 13B, and 34B on HumanEval (top row) and MBPP (bottom row).] | 2308.12950#54 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 54 |
Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. Flava: A foundational language and vision alignment model. In CVPR, 2022.
Artifex Software. Pymupdf, 2015. URL https://github.com/pymupdf/PyMuPDF.
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual-linguistic representations. In ICLR, 2019.
Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. arXiv:2307.05222, 2023.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In CVPR, 2015. | 2308.12966#54 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 55 | : Teachers must provide students with a lot of concrete and detailed course materials. "Score": [] "Description": I like to ask questions that require students to answer with accurate, precise and very detailed knowledge. "Score": [] "Description": For students, the most important thing is to know a lot of facts and details, then they can learn how to analyze and synthesize. "Score": [] "Score": [] "Description": I think the focus of teaching is to master factual details. "Score": [] "Description": I like to explain specific steps and detailed things to students. "Score": [] "Description": Teaching is imparting facts and enabling students to obtain a lot of useful information. "Score": [] "Description": I prefer discussions or learning around concrete issues that allow me to focus on a large number of details. "Score": [] "Description": Teachers must pay constant attention to curriculum and teaching reforms to understand the direction of education. "Score": [] "Description": Each year I choose some new textbooks or | 2308.12503#55 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12682 | 55 | J. E.; Stoica, I.; and Xing, E. P. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. Chung, H. W.; Hou, L.; Longpre, S.; Zoph, B.; Tay, Y.; Fedus, W.; Li, Y.; Wang, X.; Dehghani, M.; Brahma, S.; Webson, A.; Gu, S. S.; Dai, Z.; Suzgun, M.; Chen, X.; Chowdhery, A.; Castro-Ros, A.; Pellat, M.; Robinson, K.; Valter, D.; Narang, S.; Mishra, G.; Yu, A.; Zhao, V.; Huang, Y.; Dai, A.; Yu, H.; Petrov, S.; Chi, E. H.; Dean, J.; Devlin, J.; Roberts, A.; Zhou, D.; Le, Q. V.; and Wei, J. 2022. Scaling Instruction-Finetuned Language Models. arXiv:2210.11416. Dettmers, T.; Lewis, M.; Belkada, Y.; and Zettlemoyer, L. 2022. LLM.int8(): 8-bit Matrix | 2308.12682#55 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12966 | 55 | Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In CVPR, 2015.
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to- sequence learning framework. In ICML, 2022a.
Peng Wang, Shijie Wang, Junyang Lin, Shuai Bai, Xiaohuan Zhou, Jingren Zhou, Xinggang Wang, and Chang Zhou. One-peace: Exploring one general representation model toward unlimited modalities. arXiv:2305.11172, 2023.
Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. arXiv:2208.10442, 2022b. | 2308.12966#55 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 56 | curriculum and teaching reforms to understand the direction of education. "Score": [] "Description": Each year I choose some new textbooks or reference materials to supplement my teaching content. "Score": [] "Description": Teachers and students must abandon old ways of thinking and learn new methods to face everything. "Score": [] "Score": [] "Description": Teachers should raise questions and tell students about the contradictions and dilemmas they face in solving problems. "Score": [] "Description": I like when students have different perspectives on the views I raise. "Score": [] "Description": Teachers should see teaching or learning as an ongoing process of pedagogical innovation, problem-solving, and meeting challenges. "Score": [] "Description": The role of teachers is to enable students to acquire knowledge through experimentation or evidencing approaches in the classroom. "Score": [] "Description": I think textbooks selected by the school or administrative department are the best teaching materials. "Score": [] "Description": Students should | 2308.12503#56 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12682 | 56 | Dettmers, T.; Lewis, M.; Belkada, Y.; and Zettlemoyer, L. 2022. LLM.int8(): 8-bit Matrix Multiplication for Trans- formers at Scale. arXiv:2208.07339. Dettmers, T.; Pagnoni, A.; Holtzman, A.; and Zettlemoyer, L. 2023. QLoRA: Efficient Finetuning of Quantized LLMs. arXiv:2305.14314. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171â4186. Minneapolis, Minnesota: Association for Computational Linguistics. Ding, Y.; Zhang, X.; Amiri, S.; Cao, N.; Yang, H.; Kaminski, A.; Esselink, C.; and Zhang, S. 2023. Integrating action knowledge and LLMs for task planning and situation handling in open worlds. Autonomous Robots, 47(8): 981â997. | 2308.12682#56 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 56 | Figure 6: Code Llama scores different temperature values. Results are presented for 7B, 13B, and 34B models on HumanEval and MBPP benchmarks. We report Pass@1, Pass@10, and Pass@100 for different temperature values. We use nucleus sampling with p=0.95.
# 3.4.3 Pass@k evaluation
We study the effect of the sampling temperature on the pass@k performance. Specifically, we report pass@1, 10, and 100 using temperature â {0.1, 0.4, 0.6, 0.8} on both HumanEval and MBPP. Results are depicted in Figure 6. As expected, as we increase the temperature, the pass@1 scores are getting worse while the pass@10 and pass@100 improve.
# 4 Responsible AI and safety
Large language models have been shown to have the potential to produce known falsehoods due to miscon- ceptions or false beliefs (Lin et al., 2022), generate toxic or offensive content (Hartvigsen et al., 2022) and reproduce or even amplify the biases that are contained in the training data (Dhamala et al., 2021). As
14
mentioned in Section 2.5, we make Code Llama - Instruct safer by fine-tuning on outputs from Llama 2, including adversarial prompts with safe responses, as well as prompts addressing code-specific risks. | 2308.12950#56 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 56 | An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, and Chang Zhou. Chinese clip: Contrastive vision-language pretraining in chinese. arXiv:2211.01335, 2022a.
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. An empirical study of gpt-3 for few-shot knowledge-based vqa. In AAAI, 2022b.
Jiabo Ye, Anwen Hu, Haiyang Xu, Qinghao Ye, Ming Yan, Yuhao Dan, Chenlin Zhao, Guohai Xu, Chenliang Li, Junfeng Tian, et al. mplug-docowl: Modularized multimodal large language model for document understanding. arXiv:2307.02499, 2023a.
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv:2304.14178, 2023b. | 2308.12966#56 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 57 | : I think textbooks selected by the school or administrative department are the best teaching materials. âScoreâ :[] âDescriptionâ : Students should adopt the perspectives that teachers think are correct. âScoreâ :[] âDescriptionâ : I like to follow some ready-made rules and procedures when teaching courses. âScoreâ : [] âScoreâ :[] âDescriptionâ : I prefer teaching the same subject and the same grade every year. âScoreâ :[] âDescriptionâ : In my work, I like to use some topics, tests, and teaching methods that have proven successful. âScoreâ : [] âDescriptionâ : We should measure a teacher's performance based on classroom order, behavioral requirements for students, students' level of courtesy, and their ability to give correct answers to questions. âScoreâ :[] âDescriptionâ : I agree with teachers being more strict on classroom discipline. âScoreâ :[] | 2308.12503#57 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12682 | 57 | S. 2023. Integrating action knowledge and LLMs for task planning and situation handling in open worlds. Autonomous Robots, 47(8): 981â997. Du, Y.; Liu, Z.; Li, J.; and Zhao, W. X. 2022. A Survey of Vision-Language Pre-Trained Models. arXiv:2202.10936. Frantar, E.; Ashkboos, S.; Hoefler, T.; and Alistarh, D. 2023. GPTQ: Accurate Post-Training Quantization for Genera- tive Pre-trained Transformers. arXiv:2210.17323. Golowich, N.; Moitra, A.; and Rohatgi, D. 2022. Planning in Observable POMDPs in Quasipolynomial Time. arXiv:2201.04735. Hao, S.; Gu, Y.; Ma, H.; Hong, J. J.; Wang, Z.; Wang, D. Z.; and Hu, Z. 2023. Reasoning with Language Model is Planning with World Model. arXiv:2305.14992. Helmert, M. 2006. The fast downward planning system. Journal of Artificial Intelligence Research, 26: 191â246. Huang, W.; | 2308.12682#57 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 57 | In this section, we perform evaluations on three widely-used automatic safety benchmarks from the perspectives of truthfulness, toxicity, and bias, respectively. Specifically, we assess the safety capabilities of both pretrained Code Llama and fine-tuned Code Llama - Instruct with Falcon (Almazrouei et al., 2023), MPT (MosaicML, 2023), and StarCoder (Li et al., 2023). Although we have chosen certain standard benchmarks commonly used in the language model community to highlight some of the problems with these models, itâs important to note that these evaluations alone do not provide a comprehensive understanding of the risks associated with them. We complement the safety analysis of Code Llama - Instruct with additional red teaming from various domain experts in offensive security, malware development, responsible AI and software engineering, similar to Touvron et al. (2023b). | 2308.12950#57 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 57 | Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. In ACL, 2014.
Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. arXiv:2205.01917, 2022.
Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel C. F. Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang. Florence: A new foundation model for computer vision. arXiv:2111.11432, 2021.
Yan Zeng, Xinsong Zhang, and Hang Li. Multi-grained vision language pre-training: Aligning texts with visual concepts. arXiv:2111.08276, 2021. | 2308.12966#57 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 58 | Helmert, M. 2006. The fast downward planning system. Journal of Artificial Intelligence Research, 26: 191â246. Huang, W.; Abbeel, P.; Pathak, D.; and Mordatch, I. 2022a. Language models as zero-shot planners: Extracting action- able knowledge for embodied agents. In International Conference on Machine Learning, 9118â9147. PMLR. Huang, W.; Xia, F.; Shah, D.; Driess, D.; Zeng, A.; Lu, Y.; Florence, P.; Mordatch, I.; Levine, S.; Hausman, K.; and Ichter, B. 2023. Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents. arXiv:2303.00855. Huang, W.; Xia, F.; Xiao, T.; Chan, H.; Liang, J.; Florence, P.; Zeng, A.; Tompson, J.; Mordatch, I.; Chebotar, Y.; Sermanet, P.; Brown, N.; Jackson, T.; Luu, L.; Levine, S.; Hausman, K.; and Ichter, B. 2022b. Inner Monologue: Embodied Reasoning through Planning with Language Models. | 2308.12682#58 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 58 | Truthfulness. We use TruthfulQA (Lin et al., 2022) to gauge the factuality and common sense of our models. The TruthfulQA benchmark comprises 817 questions spread across 38 categories, encompassing topics such as health, finance, law, and politics (Lin et al., 2022). The questions are designed to be challenging, even for humans, causing them to answer incorrectly due to unfounded beliefs or misconceptions. To evaluate the generated outputs from LLMs, we utilize GPT-3-based metrics following Lin et al. (2022) to determine the truthfulness and informativeness of the outputs. For the QA prompt, we use a few-shot prompt containing 6 random QA pairs, structured according to the InstructGPT format (Ouyang et al., 2022). The results are reported as the percentage of generations that are both truthful and informative, as well as the percentage that are either truthful or informative. | 2308.12950#58 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 58 | Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. Lit: Zero-shot transfer with locked-image text tuning. In CVPR, 2022.
Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv:2306.02858, 2023.
14
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Revisiting visual representations in vision-language models. In CVPR, 2021.
Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, and Bingyi Kang. Bubogpt: Enabling visual grounding in multi-modal llms. arXiv:2307.08581, 2023.
Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision- language understanding with advanced large language models. arXiv:2304.10592, 2023. | 2308.12966#58 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 59 | Levine, S.; Hausman, K.; and Ichter, B. 2022b. Inner Monologue: Embodied Reasoning through Planning with Language Models. arXiv:2207.05608. Kaelbling, L. P.; Littman, M. L.; and Cassandra, A. R. 1998. Planning and acting in partially observable stochastic domains. Artificial intelligence, 101(1-2): 99â134. Kwon, M.; Xie, S. M.; Bullard, K.; and Sadigh, D. 2023. Reward Design with Language Models. In The Eleventh International Conference on Learning Representations. | 2308.12682#59 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 59 | Toxicity. We use ToxiGen (Hartvigsen et al., 2022) to quantify the extent of toxic language and hate speech generation across various demographic groups. The ToxiGen dataset contains implicitly toxic and benign sentences mentioning 13 minority groups. Following Touvron et al. (2023b), we utilize an improved version of the dataset, which minimizes noise by removing prompts with disagreements among annotators regarding the target demographic group. To measure the toxicity of the generated outputs from each of the LLMs, we employ the default ToxiGen classifier, tuned on RoBERTa (Liu et al., 2019). | 2308.12950#59 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 59 | Xizhou Zhu, Jinguo Zhu, Hao Li, Xiaoshi Wu, Hongsheng Li, Xiaohua Wang, and Jifeng Dai. Uni-perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks. In CVPR, 2022.
15
# A Dataset details
# A.1 Image-text pairs
We use web-crawled image-text pairs dataset for pre-training, which includes LAION-en (Schuhmann et al., 2022a), LAION-zh (Schuhmann et al., 2022a), LAION-COCO (Schuhmann et al., 2022b), DataComp (Gadre et al., 2023) and Coyo (Byeon et al., 2022). We clean these noisy data by several steps:
1. Removing pairs with too large aspect ratio of the image
2. Removing pairs with too small image
3. Removing pairs with a harsh CLIP score (dataset-specific)
4. Removing pairs with text containing non-English or non-Chinese characters
5. Removing pairs with text containing emoji characters
6. Removing pairs with text length too short or too long
7. Cleaning the textâs HTML-tagged part
8. Cleaning the text with certain unregular patterns | 2308.12966#59 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 60 | Lakhotia, K.; Kharitonov, E.; Hsu, W.-N.; Adi, Y.; Polyak, A.; Bolte, B.; Nguyen, T.-A.; Copet, J.; Baevski, A.; Mo- hamed, A.; and Dupoux, E. 2021. On Generative Spoken Language Modeling from Raw Audio. Transactions of the Association for Computational Linguistics, 9: 1336â1354. Liang, J.; Huang, W.; Xia, F.; Xu, P.; Hausman, K.; Ichter, B.; Florence, P.; and Zeng, A. 2023. Code as Policies: Language Model Programs for Embodied Control. arXiv:2209.07753. Liao, Y.-H.; Puig, X.; Boben, M.; Torralba, A.; and Fidler, S. 2019. Synthesizing Environment-Aware Activities via Activity Sketches. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6284â6292. Lin, K.; Agia, C.; Migimatsu, T.; Pavone, M.; and Bohg, J. 2023. Text2Motion: from natural language instructions to | 2308.12682#60 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 60 | Bias. We employ the Bias in Open-Ended Language Generation Dataset (BOLD) (Dhamala et al., 2021) to investigate how the sentiment in the modelâs outputs may differ based on demographic attributes. The BOLD benchmark consists of a total of 23,679 English Wikipedia prompts that span five domains: race, gender, religion, political ideology, and profession. These prompts cover 43 different subgroups. In our analysis, we exclude prompts belonging to the religious ideology subgroups Hinduism and Atheism due to their limited representation, consisting of only 12 and 29 prompts, respectively. To assess the sentiments conveyed by the combination of the prompt prefix and model generation, we employ sentiment analysis using the Valence Aware Dictionary and Sentiment Reasoner (VADER) (Hutto & Gilbert, 2014). The VADER produces sentiment scores between -1 and 1, where a positive (negative) score indicates a positive (negative) sentiment towards the population mentioned in the prompt. A score closer to 0 indicates a neutral sentiment. | 2308.12950#60 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 60 | 6. Removing pairs with text length too short or too long
7. Cleaning the textâs HTML-tagged part
8. Cleaning the text with certain unregular patterns
For academic caption datasets, we remove pairs whose text contains the special tags in CC12M (Changpinyo et al., 2021) and SBU (Ordonez et al., 2011). If there is more than one text matching the same image, we select the longest one.
# A.2 VQA
For the VQAv2 (Goyal et al., 2017) dataset, we select the answer annotation based on the maximum confidence. For other VQA datasets, we didnât do anything special.
# A.3 Grounding
For the GRIT (Peng et al., 2023) dataset, we found that there are many recursive grounding box labels in one caption. We use the greedy algorithm to clean the caption to make sure each image contains the most box labels with no recursive box labels. For other grounding datasets, we simply concatenate the noun/phrase with respective bounding box coordinates.
# A.4 OCR | 2308.12966#60 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 61 | Agia, C.; Migimatsu, T.; Pavone, M.; and Bohg, J. 2023. Text2Motion: from natural language instructions to feasible plans. Autonomous Robots, 47(8): 1345â1365. Liu, B.; Jiang, Y.; Zhang, X.; Liu, Q.; Zhang, S.; Biswas, J.; and Stone, P. 2023. LLM+P: Empowering Large Language Models with Optimal Planning Proficiency. arXiv:2304.11477. Pallagani, V.; Muppasani, B.; Murugesan, K.; Rossi, F.; Horesh, L.; Srivastava, B.; Fabiano, F.; and Loreggia, A. 2022. Plansformer: Generating Symbolic Plans using Transformers. arXiv:2212.08681. Puig, X.; Ra, K.; Boben, M.; Li, J.; Wang, T.; Fidler, S.; and Torralba, A. 2018. Virtualhome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8494â 8502. Raffel, C.; Shazeer, N.; | 2308.12682#61 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 61 | Benchmark evaluation results. Table 9 shows the evaluation results of the three safety benchmarks. We follow the decoding setting as in Touvron et al. (2023b) where a temperature of 0.1 and top-p of 0.9 are used. Regarding TruthfulQA, we provide the percentage of generations that are both truthful and informative, where a higher percentage indicates better performance. Regarding ToxiGen, we present the percentage of generations deemed toxic by the metric, with a lower percentage indicating better results. Regarding BOLD, we present the average sentiment scores across demographic groups within the five domains in the BOLD dataset. The fine-tuned Code Llama - Instruct exhibits significant improvements over the pretrained Code Llama in terms of truthfulness (from 34.64 to 47.37 for 34B) and toxicity (from 17.62 to 0.00 for 34B). The percentage of toxic generations drastically reduces to virtually 0% across all Code Llama sizes, making it the least toxic among all the models compared. When compared to Falcon and MPT fine-tuned models, the fine-tuned Code Llama demonstrates the second-best performance level in both toxicity and truthfulness, | 2308.12950#61 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 61 | # A.4 OCR
We generated the synthetic OCR dataset using Synthdog (Kim et al., 2022). Specifically, we use the COCO (Lin et al., 2014) train2017 and unlabeld2017 dataset split as the natural scenery background. Then we selected 41 English fonts and 11 Chinese fonts to generate text. We use the default hyperparameters as in Synthdog. We track the generated text locations in the image and convert them to quadrilateral coordinates and we also use these coordinates as training labels. The visualization example is illustrated in the second row of Fig 5.
For all the PDF data we collected, we follow the steps below to pre-process the data using PyMuPDF (Software, 2015) to get the rendering results of each page in a PDF file as well as all the text annotations with their bounding boxes.
1. Extracting all texts and their bounding boxes for each page.
16 | 2308.12966#61 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 62 | programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8494â 8502. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1): 5485â5551. Silver, T.; Hariprasad, V.; Shuttleworth, R. S.; Kumar, N.; Lozano-P´erez, T.; and Kaelbling, L. P. 2022. PDDL Planning with Pretrained Large Language Models. In NeurIPS 2022 Foundation Models for Decision Making Workshop. Singh, I.; Blukis, V.; Mousavian, A.; Goyal, A.; Xu, D.; Tremblay, J.; Fox, D.; Thomason, J.; and Garg, A. 2023. ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. In International Conference on Robotics and Automation (ICRA). Touvron, H.; Lavril, T.; Izacard, G.; Martinet, | 2308.12682#62 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 62 | compared to Falcon and MPT fine-tuned models, the fine-tuned Code Llama demonstrates the second-best performance level in both toxicity and truthfulness, right after Llama 2 Chat. Additionally, similar to Llama 2 Chat, the Code Llama - Instruct, after fine-tuning, also tends to show an overall increase in positive sentiment for many demographic groups in BOLD. More detailed results split by different demographic groups can be found in Appendix I. | 2308.12950#62 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 62 | [Figure 5 sample: garbled OCR fragments of a real-estate tax document used as a text-reading example]
Figure 5: Visualization of the Grounding and OCR data used for training Qwen-VL
2. Rendering each page and saving it as an image file.
3. Removing images that are too small.
4. Removing images with too many or too few characters.
5. Removing images containing Unicode characters in the "Latin Extended-A" and "Latin Extended-B" blocks.
6. Removing images containing Unicode characters in the "Private Use Area (PUA)" block.
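A minimal sketch of this PDF pipeline (text-and-bounding-box extraction, page rendering, and the filtering rules above), assuming PyMuPDF's `fitz` API; the size and character-count thresholds are illustrative placeholders, not the values used for Qwen-VL:

```python
import fitz  # PyMuPDF

MIN_PIXELS = 256 * 256            # assumed threshold for "too small" pages
MIN_CHARS, MAX_CHARS = 50, 5000   # assumed character-count bounds


def in_block(ch, start, end):
    return start <= ord(ch) <= end


def keep_page(text, width, height):
    # Filtering rules: page size, character count, and Unicode-block checks.
    if width * height < MIN_PIXELS:
        return False
    if not (MIN_CHARS <= len(text) <= MAX_CHARS):
        return False
    # Latin Extended-A/B: U+0100-U+024F; Private Use Area: U+E000-U+F8FF
    if any(in_block(c, 0x0100, 0x024F) or in_block(c, 0xE000, 0xF8FF) for c in text):
        return False
    return True


def preprocess_pdf(path, out_prefix, zoom=2):
    doc = fitz.open(path)
    samples = []
    for page_no, page in enumerate(doc):
        # 1. Extract all words with their bounding boxes (x0, y0, x1, y1, word, ...).
        words = page.get_text("words")
        text = " ".join(w[4] for w in words)
        # 2. Render the page and save it as an image file.
        pix = page.get_pixmap(matrix=fitz.Matrix(zoom, zoom))
        if not keep_page(text, pix.width, pix.height):
            continue
        image_path = f"{out_prefix}_page{page_no}.png"
        pix.save(image_path)
        samples.append({"image": image_path,
                        "words": [(w[4], w[:4]) for w in words]})
    return samples
```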
For all the HTML web pages we collected, we pre-process them with a similar approach to the PDF data, but we use Puppeteer (Google, 2023) instead of PyMuPDF to render the HTML pages and obtain the ground-truth annotations. We follow the steps below to pre-process the data.
1. Extracting all text for each webpage.
2. Rendering each page and saving it as an image file.
3. Removing images that are too small.
4. Removing images with too many or too few characters.
5. Removing images containing Unicode characters in the "Private Use Area (PUA)" block.
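The web-page rendering step can be sketched in the same spirit. The original pipeline uses Puppeteer (Node.js); the snippet below substitutes pyppeteer, a Python port, purely for illustration, and the URL is a placeholder. The size, character-count, and PUA filters can then reuse the same `keep_page`-style checks as the PDF sketch above.

```python
import asyncio
from pyppeteer import launch  # Python port of Puppeteer, used here as a stand-in


async def render_webpage(url, image_path):
    browser = await launch(headless=True)
    page = await browser.newPage()
    await page.goto(url, {"waitUntil": "networkidle2"})
    # 1. Extract all visible text from the page.
    text = await page.evaluate("() => document.body.innerText")
    # 2. Render the page and save it as an image file.
    await page.screenshot({"path": image_path, "fullPage": True})
    await browser.close()
    return text


text = asyncio.get_event_loop().run_until_complete(
    render_webpage("https://example.com", "page.png"))
```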
# B Data Format Details of Training
# B.1 Data Format of Multi-Task Pre-training | 2308.12966#62 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12682 | 63 | Language Models. In International Conference on Robotics and Automation (ICRA). Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozi`ere, B.; Goyal, N.; Hambro, E.; Azhar, F.; Rodriguez, A.; Joulin, A.; Grave, E.; and Lample, G. 2023. LLaMA: Open and Efficient Foundation Lan- guage Models. arXiv:2302.13971. Valmeekam, K.; Olmo, A.; Sreedharan, S.; and Kambhampati, S. 2022. Large Language Models Still Canât Plan (A Benchmark for LLMs on Planning and Reasoning about Change). In NeurIPS 2022 Foundation Models for Decision Making Workshop. Valmeekam, K.; Sreedharan, S.; Marquez, M.; Olmo, A.; and Kambhampati, S. 2023. On the Planning Abilities of Large Language Models (A Critical Investigation with a Proposed Benchmark). arXiv:2302.06706. van den Oord, A.; Li, Y.; and Vinyals, O. 2019. | 2308.12682#63 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 63 | 15
[Figure 7 plot: distributions of Llama 2 70B safety reward model scores for responses from GPT 3.5 Turbo and Code Llama 7B, 13B, and 34B Instruct; x-axis: safety reward model score]
Figure 7: KDE plot of the risk score output by the Llama 2 safety reward model on prompts with clear intent specific to code risk created by red teamers with background in cybersecurity and malware generation.
Red teaming. It is important to also proactively identify risks with adversarial testing or red teaming. We conducted 3 red teaming exercises with 25 Meta employees, including domain experts in responsible AI, malware development, and offensive security engineering.
The red teamers provided a nuanced evaluation specifically on the risk from so called âdual intent prompts.â Dual intent prompts are requests for help with writing code that could be used maliciously but the prompt does not directly address the topic (example âMosaic Promptsâ Glukhov et al. (2023)). For example, the model rightfully refuses to provide support with writing ransomware code but it complies when asked to provide a script to encrypt all files in the userâs home directory since such a script could be used for benign purposes. | 2308.12950#63 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12503 | 64 | : I think textbooks selected by the school or administrative department are the best teaching materials. âScoreâ :[] âDescriptionâ : Students should adopt the perspectives that teachers think are correct. âScoreâ :[] âDescriptionâ : I like to follow some ready-made rules and procedures when teaching courses. âScoreâ : [] âScoreâ :[] âDescriptionâ : I prefer teaching the same subject and the same grade every year. âScoreâ :[] âDescriptionâ : In my work, I like to use some topics, tests, and teaching methods that have proven successful. âScoreâ : [] âDescriptionâ : We should measure a teacher's performance based on classroom order, behavioral requirements for students, students' level of courtesy, and their ability to give correct answers to questions. âDescriptionâ : I agree with teachers being more strict on classroom discipline. âScoreâ :[] | 2308.12503#64 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12682 | 64 | Critical Investigation with a Proposed Benchmark). arXiv:2302.06706. van den Oord, A.; Li, Y.; and Vinyals, O. 2019. Representation Learning with Contrastive Predictive Coding. arXiv:1807.03748. Wang, Y.; Wang, W.; Joty, S.; and Hoi, S. C. 2021. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder In Moens, M.-F.; Huang, X.; Specia, L.; and Yih, S. W.-t., eds., Models for Code Understanding and Generation. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 8696â8708. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. Xie, Y.; Yu, C.; Zhu, T.; Bai, J.; Gong, Z.; and Soh, H. 2023. Translating Natural Language to Planning Goals with Large-Language Models. arXiv:2302.05128. Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. | 2308.12682#64 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 64 | After conducting red team exercises, we asked participants (who had also participated in Llama 2 Chat exercises) to also provide a qualitative assessment of the safety capabilities of the model. Some participants who had expertise in offensive security and malware development questioned the ultimate risk posed by "malicious code generation" through LLMs with current capabilities.
One red teamer remarked, "While LLMs being able to iteratively improve on produced source code is a risk, producing source code isn't the actual gap. That said, LLMs may be risky because they can inform low-skill adversaries in production of scripts through iteration that perform some malicious behavior."
According to another red teamer, â[v]arious scripts, program code, and compiled binaries are readily available on mainstream public websites, hacking forums or on âthe dark web.â Advanced malware development is beyond the current capabilities of available LLMs, and even an advanced LLM paired with an expert malware developer is not particularly useful- as the barrier is not typically writing the malware code itself. That said, these LLMs may produce code which will get easily caught if used directly.â | 2308.12950#64 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 64 | Image Captioning <img>cc3m/01581435.jpg</img>Generate the caption in English: design.<eos> the beautiful flowers for Vision Question Answering <img>VG_100K_2/1.jpg</img> Does the bandage have a different color than the wrist band? Answer: No, both the bandage and the wrist band are white.<eos> OCR VQA <img>ocr_vqa/1.jpg</img> What is the title of this book? Answer: Asi Se Dice!, Volume 2: Work- book And Audio Activities (Glencoe Spanish) (Spanish Edition)<eos> Caption with Grounding <img>coyo700m/1.jpg</img>Generate the caption in English with grounding: Beautiful shot of <ref>bees</ref><box>(661,612),(833,812)</box><box>(120,555),(265,770) </box> gathering nectars from <ref>an apricot flower</ref><box>(224,13),(399,313) </box><eos> Referring Grounding <img>VG_100K_2/3.jpg</img><ref>the ear on a giraffe</ref><box>(176,106),(232,160) | 2308.12966#64 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
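The caption-with-grounding and referring-grounding strings in the record above follow a simple textual protocol: each referred phrase is wrapped in <ref>...</ref> and followed by one <box>(x1,y1),(x2,y2)</box> per matching region. A small helper that serializes annotations into that form might look like the sketch below; the coordinate convention (e.g. whether boxes are normalized) is not specified in this excerpt and is left to the caller.

```python
def format_grounded_caption(caption, spans):
    """
    caption: plain-text caption.
    spans: list of (phrase, boxes) pairs, where each box is (x1, y1, x2, y2)
           and `phrase` occurs verbatim in the caption.
    Returns the caption with <ref>/<box> markup in the textual format above.
    """
    out = caption
    for phrase, boxes in spans:
        box_str = "".join(f"<box>({x1},{y1}),({x2},{y2})</box>"
                          for x1, y1, x2, y2 in boxes)
        out = out.replace(phrase, f"<ref>{phrase}</ref>{box_str}", 1)
    return out


example = format_grounded_caption(
    "Beautiful shot of bees gathering nectars from an apricot flower",
    [("bees", [(661, 612, 833, 812), (120, 555, 265, 770)]),
     ("an apricot flower", [(224, 13, 399, 313)])],
)
```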
2308.12682 | 65 | S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. arXiv:2305.10601. Zeng, A.; Florence, P.; Tompson, J.; Welker, S.; Chien, J.; Attarian, M.; Armstrong, T.; Krasin, I.; Duong, D.; Sind- hwani, V.; and Lee, J. 2021. Transporter Networks: Rearranging the Visual World for Robotic Manipulation. In Proceedings of the 2020 Conference on Robot Learning, volume 155 of Proceedings of Machine Learning Research, 726â747. PMLR. Ziegler, D. M.; Stiennon, N.; Wu, J.; Brown, T. B.; Radford, A.; Amodei, D.; Christiano, P.; and Irving, G. 2020. Fine-Tuning Language Models from Human Preferences. arXiv:1909.08593. | 2308.12682#65 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 65 | In addition to red teaming sessions, we ran a quantitative evaluation of the risk of generating malicious code by scoring Code Llama's and ChatGPT's (GPT3.5 Turbo) responses with LLAMAv2 70B's safety reward model. For this second quantitative evaluation, we selected prompts that the red teamers generated specifically attempting to solicit malicious code (even though the red teaming included consideration of a broad set of safety risks). These prompts were a mix of clear intent and slightly obfuscated intentions (see some examples in Figure 16). We show a KDE plot of the distribution of the safety score for all models in Figure 7. We observe that Code Llama tends to answer with safer responses; the distribution of safety scores for Code Llama has more weight in the safer part of the range.
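A sketch of how such a score-distribution comparison can be visualized, assuming the per-response safety-reward scores have already been computed and collected into a table (the reward model itself is not public, so the input file and column names here are placeholders):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical file with one row per (model, prompt) pair and the score that
# a safety reward model assigned to the model's response.
scores = pd.read_csv("safety_scores.csv")  # columns: model, prompt_id, safety_score

plt.figure(figsize=(8, 4))
sns.kdeplot(data=scores, x="safety_score", hue="model", common_norm=False)
plt.xlabel("Safety reward model score")
plt.title("Distribution of safety scores on red-teaming prompts")
plt.tight_layout()
plt.savefig("safety_score_kde.png", dpi=150)
```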
False refusals. LLMs that are too safe can have a tendency to over-refuse valid claims similar to what was reported after the release of Llama 2. We specifically asked red teamers to test for this behavior. They found some limited evidence of false refusals (when not using a system preprompt). False refusals could also
16 | 2308.12950#65 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12503 | 66 | âDescriptionâ : To better understand something, I first (a) Try it out. (b) Contemplate it deeply. âChoiceâ :[] âDeseriptionâ : When I'm learning something, I can't help but (a) Talk about it. (b) Think about it. âChoiceâ : [] âDescriptionâ : When facing a problem in a study group, I usually (a) Step forward and speak my mind. (b) Step back and listen to opinions. âChoiceâ : [] âDeseriptionâ : In the classes I take, (a) I usually get to know many classmates. (b) I know very few classmates. âChoiceâ :[] *Dese » : Processing Descriptionâ : When I do homework, I prefer to (a) Start answering right away. (b) First try to understand the question. âChoiceâ :[] Type: Active vs. Reflective âDescriptionâ : I like (a) Studying in a group. (b) Studying alone. âChoiceâ : [] âTypeâ: [1] âDescriptionâ : When I work, I like to (a) Give it a try. (b) Think before I act. âChoiceâ | 2308.12503#66 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12950 | 66 | 16
TruthfulQA ↑ ToxiGen ↓ BOLD 25.95 29.13 22.77 33.29 41.86 43.45 26.19 33.29 34.64 14.53 22.32 10.36 21.25 26.10 21.19 22.64 22.45 17.62 0.283 0.322 0.310 0.304 0.330 0.318 0.230 0.176 0.255 28.03 29.99 57.04 62.18 67.20 31.46 36.84 47.37 7.89 16.33 0.00 0.00 0.02 0.04 0.01 0.00 0.332 0.302 0.482 0.471 0.461 0.503 0.365 0.452
Table 9: Evaluations on safety datasets for both pretrained (base) models and aligned (instruct) models. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher, the better). For ToxiGen, we present the percentage of toxic generations (the smaller, the better). For BOLD, we present the average sentiment scores across demographic groups. A score closer to 0 indicates a neutral sentiment, while a positive (negative) score indicates a positive (negative) sentiment towards the population mentioned in the prompt. | 2308.12950#66 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 66 | # B.2 Data Format of Supervised Fine-tuning
To better accommodate multi-image dialogue and multiple image inputs, we add the string "Picture id:" before different images, where the id corresponds to the order of the image in the input dialogue. In terms of dialogue format, we construct our instruction-tuning dataset using the ChatML (OpenAI) format, where each interaction's statement is marked with two special tokens (<im_start> and <im_end>) to facilitate dialogue termination.
The Dataset Format Example of ChatML
<im_start>user Picture 1: <img>vg/VG_100K_2/649.jpg</img>What is the sign in the picture?<im_end> <im_start>assistant The sign is a road closure with an orange rhombus.<im_end> <im_start>user How is the weather in the picture?<im_end> <im_start>assistant The shape of the road closure sign is an orange rhombus.<im_end>
During training, we ensure the consistency between prediction and training distributions by only supervising answers and special tokens (blue in the example), and not supervising role names or question prompts.
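A minimal sketch of how such a ChatML-style sample can be serialized and masked so that only the assistant answers (and the special tokens that close them) contribute to the loss; the tokenizer is a placeholder, and the exact masking convention is an assumption based on the description above:

```python
IM_START, IM_END = "<im_start>", "<im_end>"


def build_chatml(turns):
    """turns: list of (role, text) pairs, e.g. [("user", ...), ("assistant", ...)]."""
    return "".join(f"{IM_START}{role}\n{text}{IM_END}\n" for role, text in turns)


def build_labels(turns, tokenizer, ignore_index=-100):
    """Tokenize the dialogue, supervising only assistant answers and their <im_end>."""
    input_ids, labels = [], []
    for role, text in turns:
        prefix_ids = tokenizer.encode(f"{IM_START}{role}\n")
        body_ids = tokenizer.encode(f"{text}{IM_END}\n")
        input_ids += prefix_ids + body_ids
        if role == "assistant":
            labels += [ignore_index] * len(prefix_ids) + body_ids
        else:
            labels += [ignore_index] * (len(prefix_ids) + len(body_ids))
    return input_ids, labels


turns = [
    ("user", "Picture 1: <img>vg/VG_100K_2/649.jpg</img>What is the sign in the picture?"),
    ("assistant", "The sign is a road closure with an orange rhombus."),
]
```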
# C Hyperparameters
We report the detailed training hyperparameter settings of Qwen-VL in Table 8.
# Table 8: Training hyperparameters of Qwen-VL | 2308.12966#66 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 67 | [1] âDescriptionâ : When I work, I like to (a) Give it a try. (b) Think before I act. âChoiceâ : [] âDescriptionâ : I remember best (a) What I see. (b) What I hear. âChoiceâ :{] âDescriptionâ : When I have to participate in a group project, I want (a) Everyone to brainstorm first and contribute ideas. (b) People to think separately, then come together to compare ideas. âChoiceâ : [] âDescriptionâ : I'm usually considered by others to be (a) Extroverted. (b) Reserved. âChoiceâ :[] âDescriptionâ : I think the idea of giving one grade to a cooperative group (a) Appeals to me. (b) Does not appeal to me. âChoiceâ :[] âDescriptionâ : I prefer to (a) Be practical in my work. (b) Be innovative. âChoiceâ : [] âDescriptionâ : If I were a teacher, I would prefer to teach (a) Courses about facts and practical matters. (b) Courses about ideas and theories. âChoiceâ :[] | 2308.12503#67 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12950 | 67 | be solved by rephrasing the prompt, e.g. "Can you tell me how to kill a process?" rephrased to "How do I kill a process?". We show some examples in Appendix Table 15. This behavior is something we plan to investigate in more detail in the future.
Safety and coding performance. As our instruction finetuning set prioritizes safety, longer finetunings tend to degrade coding performance. We trained our models to reach high coding performance while not compromising on safety. As shown in Figure 7, our Code Llama - Instruct models are safer than ChatGPT.
# 5 Related work | 2308.12950#67 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 67 | # Table 8: Training hyperparameters of Qwen-VL
Configuration Pre-training Multi-task Pre-training Supervised Fine-tuning ViT init. Open-CLIP-bigG Qwen-VL 1st-stage Qwen-VL 2nd-stage LLM init. Qwen-7B Qwen-7B Qwen-VL 2nd-stage VL Adapter init. random Qwen-VL 1st-stage Qwen-VL 2nd-stage Image resolution ViT sequence length 2242 256 4482 1024 4482 1024 LLM sequence length 512 2048 2048 Learnable query numbers 256 256 256 Optimizer Optimizer hyperparameter AdamW β1 = 0.9, β2 = 0.98, eps = 1eâ6 Peak learning rate Minimum learning rate ViT learning rate decay 2eâ4 1eâ6 0.95 5eâ5 1eâ5 0.95 1eâ5 1eâ6 0 ViT Drop path rate 0 Learning rate schedule cosine decay Weight decay 0.05 Gradient clip 1.0 Training steps 50k 19k 8k Warm-up steps 500 400 3k Global batch size 30720 4096 128 Gradient Acc. 6 8 8 Numerical precision Optimizer sharding bfloat16 â Activation checkpointing â Model parallelism â 2 2 Pipeline parallelism â | 2308.12966#67 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 68 | I would prefer to teach (a) Courses about facts and practical matters. (b) Courses about ideas and theories. âChoiceâ :[] âDescriptionâ : I find it easier to learn (a) Factual content. (b) Conceptual content. âChoiceâ :[] âDescriptionâ : When reading non-fiction, I prefer (a) Things that tell me new facts and teach me how to do things. (b) Things that inspire me to think. âChoiceâ : [] âDescriptionâ : I prefer (a) Deterministic ideas. (b) Speculative ideas. âChoiceâ : âDescriptionâ : Perception setow te = = ul Type Sensory vs. Intuitive âDeseriptionâ : I prefer to be seen as: (a) Detail-oriented in my work. (b) Creative in my work. âChoiceâ :[] yee" 11 âDescriptionâ : When I read interesting stories, I like authors who (a) Get straight to the point. (b) Write in a novel and interesting way. âChoiceâ : [] âDeseriptionâ : When I carry out a task, I like to (a) Master one method. (b) | 2308.12503#68 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12950 | 68 | Early observations with LLMs such as GPT-Neo (Black et al., 2021) or GPT-J (Wang & Komatsuzaki, 2021) showed that adding code in the training data makes program synthesis possible even with medium size LLMs. Code from open-source software is now a standard part of the training data for general-purpose LLMs such as PaLM (Chowdhery et al., 2022), Chinchilla (Hoffmann et al., 2022), Gopher (Rae et al., 2021), GPT-4 (OpenAI, 2023), and Llama (Touvron et al., 2023a;b). In parallel, models specifically trained or fine-tuned for code understanding and program synthesis from natural language prompts emerged with LLMs such as Codex (Chen et al., 2021), CodeT5 (Wang et al., 2021), InCoder (Fried et al., 2023), AlphaCode (Li et al., 2022), CodeGen (Nijkamp et al., 2023b) and CodeGen 2 (Nijkamp et al., 2023a), GPT-NeoX (Black et al., 2022), SantaCoder (Allal et al., 2023), StarCoder (Li et al., 2023) and | 2308.12950#68 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 68 | In the first pre-training stage, the model is trained using AdamW optimizer with β1 = 0.9, β2 = 0.98, eps = 1eâ6. We use the cosine learning rate schedule and set the maximum learning rate of 2eâ4 and minimum of 1eâ6 with a linear warm-up of 500 steps. We use a weight decay of 5eâ2 and a gradient clipping of 1.0. For the ViT image encoder, we apply a layer-wise learning rate decay strategy with a decay factor of 0.95. The training process uses a batch size of 30720 for the image-text pairs, and the entire first stage of pre-training lasts for 50,000 steps, consuming approximately 1.5 billion image-text samples and 500 billion image-text tokens. | 2308.12966#68 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
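The first-stage optimization recipe described in the chunk above (AdamW with β1 = 0.9, β2 = 0.98, eps = 1e-6; 500 warm-up steps into a cosine decay from a 2e-4 peak to a 1e-6 floor; weight decay 0.05; gradient clipping 1.0; 0.95 layer-wise learning-rate decay on the ViT) can be sketched in PyTorch as follows. The module and attribute names (`visual_layers`, the `"visual"` parameter prefix) are assumptions for illustration, not Qwen-VL's actual code:

```python
import math
import torch


def build_optimizer_and_schedule(model, total_steps=50_000, warmup_steps=500,
                                 peak_lr=2e-4, min_lr=1e-6, vit_layer_decay=0.95):
    # Layer-wise LR decay for the ViT encoder: the top layer keeps the peak LR,
    # lower layers are scaled by vit_layer_decay ** depth_from_top.
    param_groups = []
    vit_layers = getattr(model, "visual_layers", [])  # assumed attribute
    n = len(vit_layers)
    for i, layer in enumerate(vit_layers):
        scale = vit_layer_decay ** (n - 1 - i)
        param_groups.append({"params": list(layer.parameters()), "lr": peak_lr * scale})
    other = [p for name, p in model.named_parameters() if not name.startswith("visual")]
    param_groups.append({"params": other, "lr": peak_lr})

    optimizer = torch.optim.AdamW(param_groups, lr=peak_lr, betas=(0.9, 0.98),
                                  eps=1e-6, weight_decay=0.05)

    def lr_lambda(step):
        if step < warmup_steps:  # linear warm-up
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay to the floor
        return (min_lr + (peak_lr - min_lr) * cosine) / peak_lr

    schedule = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, schedule


# During training, clip gradients to 1.0 before each optimizer step:
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
```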
2308.12503 | 69 | and interesting way. âChoiceâ : [] âDeseriptionâ : When I carry out a task, I like to (a) Master one method. (b) Think of multiple methods. âChoiceâ : [ ] âDescriptionâ : When I want to compliment someone, I say they are (a) Very sensitive. (b) Very imaginative. âChoiceâ :[] âDeseriptionâ : The content I like in courses is mainly (a) Concrete materials (facts, data). (b) Abstract materials (concepts, theories). âChoiceâ : [] âDescriptionâ : When I'm doing calculations for a long time, (a) I like to repeat my steps and check my work carefully. (b) I find checking work very boring, and I force myself to do it. âChoiceâ :[] : When I reflect on things I've done in the past, most often, what comes to mind is (a) An image. (b) Some words. âChoiceâ :[] : My preferred medium for acquiring new information is (a) Pictures, diagrams, graphics, and images. (b) Written instructions and verbal information. âChoiceâ : [| : When reading a book with many | 2308.12503#69 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |