Dataset schema (column: type, observed value/length range):
- doi: string (10-10)
- chunk-id: int64 (0-936)
- chunk: string (401-2.02k)
- id: string (12-14)
- title: string (8-162)
- summary: string (228-1.92k)
- source: string (31-31)
- authors: string (7-6.97k)
- categories: string (5-107)
- comment: string (4-398)
- journal_ref: string (8-194)
- primary_category: string (5-17)
- published: string (8-8)
- updated: string (8-8)
- references: list
2309.02033
46
4.2 Interactive Visualization. Interactive visualization is integral to multiple feedback stages of Data-Juicer. Specifically, as Figure 4(a) demonstrates, users can visually track the effects of individual OPs on the processed data samples. This is facilitated by an innovative built-in tool, tracer, which records sample changes after each operation is applied. For example, tracer presents discarded samples for Filters, pre- and post-editing differences for Mappers, and (near-)duplicate sample pairs for Deduplicators. Coupling this tracking ability with rich built-in sampling and visualization tools, Data-Juicer enhances users' control over data processing and strengthens both their confidence in, and the rationale behind, the process. Transitioning to the mid-term stage of LLM data processing, Data-Juicer offers a comparative visualization of the data before and after the entire processing, from the views of the OP pipeline and statistical analysis, as Figures 4(b) and 4(c) show. Aided by a built-in tool, analyzer, Data-Juicer provides statistical analysis (counts, means, standard deviations, min/max, quantiles, entropy, etc.) to
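As a rough illustration of the kind of per-field statistics and sample tracking described above, the sketch below computes analyzer-style summary statistics and records tracer-style filter discards with pandas. The function names and the particular set of statistics are illustrative assumptions, not Data-Juicer's actual API.

```python
import numpy as np
import pandas as pd
from collections import Counter

def analyzer_stats(texts):
    """Analyzer-style summary of a text field (illustrative, not the Data-Juicer API)."""
    lengths = pd.Series([len(t.split()) for t in texts], name="word_count")
    # Character-level entropy as a simple diversity proxy.
    counts = np.array(list(Counter("".join(texts)).values()), dtype=float)
    probs = counts / counts.sum()
    entropy = float(-(probs * np.log2(probs)).sum())
    return {
        "count": int(lengths.count()),
        "mean": float(lengths.mean()),
        "std": float(lengths.std()),
        "min": int(lengths.min()),
        "max": int(lengths.max()),
        "quantiles": lengths.quantile([0.25, 0.5, 0.75]).to_dict(),
        "char_entropy": entropy,
    }

def traced_filter(samples, keep_fn):
    """Tracer-style filtering: return kept samples plus the discarded ones for inspection."""
    kept, discarded = [], []
    for s in samples:
        (kept if keep_fn(s) else discarded).append(s)
    return kept, discarded

texts = ["a short sample", "another, much longer, sample of text for the probe"]
print(analyzer_stats(texts))
kept, dropped = traced_filter(texts, lambda t: len(t.split()) >= 4)
print(len(kept), "kept;", len(dropped), "discarded")
```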
2309.02033#46
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
46
• Selection. Given a set of actions and their values, the selection step either selects one to execute or rejects them and loops back to the proposal step. Depending on the form of action values, selection may occur via argmax, softmax, or an alternative such as majority vote (Wang et al., 2022b).

• Execution. The selected action is applied by executing the relevant procedures from the agent's source code. Depending on the agent implementation, this might be an external grounding action (e.g., an API call; Section 4.2) or an internal learning action (e.g., a write to episodic memory; Section 4.5). An observation can be made from the environment, providing feedback from the agent's action, and the cycle loops again.

Table 2: Some recent language agents cast into the CoALA framework.

Agent | Long-term Memory (5) | External Grounding | Internal Actions | Decision Making
SayCan (Ahn et al., 2022) | - | physical | - | evaluate
ReAct (Yao et al., 2022b) | - | digital | reason | propose
Voyager (Wang et al., 2023a) | procedural | digital | reason / retrieve / learn | propose
Generative Agents (Park et al., 2023) | episodic / semantic | digital / agent | reason / retrieve / learn | propose
Tree of Thoughts (Yao et al., 2023) | - | digital (6) | reason | propose, evaluate, select
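To make the selection step concrete, here is a minimal sketch of the three selection rules mentioned above (argmax, softmax sampling, and majority vote). It is an illustrative toy, not code from the CoALA paper or any specific agent.

```python
import math
import random
from collections import Counter

def select_argmax(action_values):
    """Pick the action with the highest value."""
    return max(action_values, key=action_values.get)

def select_softmax(action_values, temperature=1.0):
    """Sample an action with probability proportional to exp(value / temperature)."""
    actions = list(action_values)
    weights = [math.exp(action_values[a] / temperature) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

def select_majority(proposals):
    """Majority vote over repeated proposals (e.g., multiple LLM samples)."""
    return Counter(proposals).most_common(1)[0][0]

values = {"find the apple": 0.7, "go to the table": 0.2, "do nothing": 0.1}
print(select_argmax(values))
print(select_softmax(values, temperature=0.5))
print(select_majority(["find the apple", "go to the table", "find the apple"]))
```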
2309.02427#46
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
47
4.4 Feedback Loop Showcase. The general feedback loop was discussed earlier with reference to Figure 2. We now expound on it with a concrete development example, intertwining several previously mentioned tools to demonstrate the Data-in-the-LLMdev-Loop process, which results in improved LLM data. As illustrated in Figure 5, we begin with a raw dataset and aim to refine it for better pre-training or fine-tuning of an LLM. The process proceeds through the following steps: (1) Analyze the original dataset. We can either use an existing data recipe (a specific configuration file) or craft a new one based on prior understanding of the data processing needs. Our built-in Analyzer and Visualizer facilitate this process by computing
2309.02033#47
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
47
Empirically, many early language agents simply use LLMs to propose an action (Schick et al., 2023) or a sequence of actions (Huang et al., 2022b), or to evaluate a fixed set of actions (Ahn et al., 2022), without intermediate reasoning or retrieval. Follow-up work (Yao et al., 2022b; Shinn et al., 2023; Xu et al., 2023b; Lin et al., 2023; Wang et al., 2023a; Park et al., 2023) has exploited intermediate reasoning and retrieval to analyze the situation, make and maintain action plans, and refine the previous action given environmental feedback, leveraging a more complex procedure to propose a single action. Most recently, research has started to investigate more complex decision-making that employs iterative proposal and evaluation to consider multiple actions. These procedures are modeled after classical planning algorithms: for example, Tree of Thoughts (Yao et al., 2023) and RAP (Hao et al., 2023) use LLMs to implement BFS/DFS and Monte Carlo Tree Search (MCTS; Browne et al., 2012), respectively. The LLMs are used to generate proposals (i.e., to simulate rollouts conditioned on an action) and to evaluate them (i.e., to value the outcome of the proposed action).

# 5 Case Studies
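The propose-then-evaluate loop behind such tree searches can be sketched as a small breadth-first search in which the LLM calls are stubbed out; `propose` and `evaluate` below are placeholders (assumptions for illustration), not the implementations used by ToT or RAP.

```python
def propose(state, k=3):
    """Stub for an LLM call that proposes k candidate 'thoughts' extending a partial solution."""
    return [f"{state} -> step{i}" for i in range(k)]

def evaluate(state):
    """Stub for an LLM call that scores a partial solution (higher is more promising)."""
    return -len(state)  # toy heuristic: prefer shorter traces

def tree_of_thoughts_bfs(root, depth=3, beam=2):
    """BFS-style search over thoughts: propose, evaluate, keep the best `beam` per level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for state in frontier for c in propose(state)]
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:beam]
    return max(frontier, key=evaluate)

print(tree_of_thoughts_bfs("problem"))
```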
2309.02427#47
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
48
[Figure 5: Demonstration of the data processing feedback loop of Data-Juicer, which helps to generate better data recipes for LLMs. The figure contrasts an original and a refined recipe (e.g., adjusted word_repetition and special_characters filter ratios) and shows the loop of analyzing data probes via Analyzer and Visualizer, refining recipe parameters manually or via HPO, reprocessing with reused checkpoints and caches, training/tuning LLMs with real-time and auto evaluation, and collating results in a data leaderboard against reference models.]

more than a dozen measures such as linguistic diversity, textual statistics, and others to generate a data probe. The two pie plots within Figure 5 indicate the top 20 most common root verbs (inner circle) and their top 4 direct noun objects (outer circle) for the data in the field "text.instructions".
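A data probe like the verb-object pie plots described above can be approximated with an off-the-shelf dependency parser. The sketch below uses spaCy (assumed to be installed along with its small English model) to count root verbs and their direct noun objects; it mirrors the idea only and is not Data-Juicer's implementation.

```python
from collections import Counter

import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def verb_object_probe(instructions, top_verbs=20, top_objects=4):
    """Count root verbs (inner circle) and their direct noun objects (outer circle)."""
    verb_counts = Counter()
    object_counts = {}  # verb -> Counter of direct objects
    for doc in nlp.pipe(instructions):
        for token in doc:
            if token.dep_ == "ROOT" and token.pos_ == "VERB":
                verb = token.lemma_.lower()
                verb_counts[verb] += 1
                objs = [c.lemma_.lower() for c in token.children if c.dep_ == "dobj"]
                object_counts.setdefault(verb, Counter()).update(objs)
    return {
        verb: object_counts.get(verb, Counter()).most_common(top_objects)
        for verb, _ in verb_counts.most_common(top_verbs)
    }

print(verb_object_probe(["Write a poem about autumn.", "Summarize the article briefly."]))
```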
2309.02033#48
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
48
# 5 Case Studies With variations and ablations of the memory modules, action space, and decision-making procedures, CoALA can express a wide spectrum of language agents. Table 2 lists some popular recent methods across diverse domains — from Minecraft to robotics, from pure reasoning to social simulacra. CoALA helps characterize their internal mechanisms and reveal their similarities and differences in a simple and structured way. SayCan (Ahn et al., 2022) grounds a language model to robotic interactions in a kitchen to satisfy user commands (e.g., “I just worked out, can you bring me a drink and a snack to recover?”). Its long-term memory is procedural only (an LLM and a learned value function). The action space is external only – a fixed set of 551 grounding skills (e.g., “find the apple”, “go to the table”), with no internal actions of reasoning, retrieval, or learning. During decision-making, SayCan evaluates each action using a combination of LLM and learned values, which balance a skill’s usefulness and groundedness. SayCan therefore employs the LLM (in conjunction with the learned value function) as a single-step planner.
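SayCan's single-step selection, which combines a language model's usefulness score with a learned affordance/value estimate, can be sketched roughly as below. The probability and value functions are stand-ins (assumptions), not the actual SayCan models.

```python
def llm_usefulness(instruction, skill):
    """Stand-in for p_LLM(skill is useful | instruction); a real system would query an LLM."""
    overlap = len(set(instruction.lower().split()) & set(skill.lower().split()))
    return overlap / (len(skill.split()) + 1)

def affordance_value(skill, state):
    """Stand-in for a learned value function estimating whether the skill can succeed now."""
    return 1.0 if skill in state["feasible_skills"] else 0.1

def saycan_select(instruction, skills, state):
    """Score each skill by usefulness * groundedness and pick the best one (single-step plan)."""
    scores = {s: llm_usefulness(instruction, s) * affordance_value(s, state) for s in skills}
    return max(scores, key=scores.get), scores

state = {"feasible_skills": {"find a drink", "go to the table"}}
skills = ["find a drink", "find an apple", "go to the table"]
best, scores = saycan_select("bring me a drink and a snack", skills, state)
print(best, scores)
```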
2309.02427#48
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
49
(2) Refine parameters of the original recipe. Based on the data probe, users identify weaknesses of the original dataset, such as low diversity in expression and long-tail word-count statistics. We then refine the parameters in the recipe by adding/removing OPs or tightening/relaxing filter ranges (a sketch of such filter OPs follows below). During refinement, the effect of each OP can be inspected easily with the interactive visualization tools mentioned in Sec. 4.2.

(3) Process the original dataset with the refined recipe. We then process the original dataset with the refined recipe using Data-Juicer, obtaining a refined dataset and several saved checkpoints for further adjustments. This step is facilitated by our cache and checkpoint mechanisms.

5 BOOSTING USABILITY WITH BUILT-INS

In response to the challenge of varied user preferences and levels of technical expertise (Challenge 3 in Sec. 1), we offer an easy-to-use configuration paradigm for data recipes, ready-to-use data recipe templates, and extensive tools, as detailed below.
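Relating to steps (2) and (3) above, the two filter OPs named in Figure 5 (a word-repetition filter and a special-characters-ratio filter) might look roughly like the following minimal sketch. The parameter names mirror the recipe fields shown in the figure; everything else is an illustrative assumption rather than Data-Juicer's actual operator code.

```python
def word_repetition_ratio(text, rep_len=10):
    """Fraction of overlapping word n-grams (length rep_len) that are duplicates."""
    words = text.lower().split()
    ngrams = [tuple(words[i:i + rep_len]) for i in range(len(words) - rep_len + 1)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)

def special_char_ratio(text):
    """Fraction of characters that are neither alphanumeric nor whitespace."""
    if not text:
        return 0.0
    return sum(1 for c in text if not (c.isalnum() or c.isspace())) / len(text)

def apply_recipe(samples, recipe):
    """Keep samples whose statistics fall inside the configured [min_ratio, max_ratio] ranges."""
    wr = recipe["word_repetition_filter"]
    sc = recipe["special_characters_filter"]
    kept = []
    for text in samples:
        if not (wr["min_ratio"] <= word_repetition_ratio(text, wr["rep_len"]) <= wr["max_ratio"]):
            continue
        if not (sc["min_ratio"] <= special_char_ratio(text) <= sc["max_ratio"]):
            continue
        kept.append(text)
    return kept

refined_recipe = {
    "word_repetition_filter": {"rep_len": 10, "min_ratio": 0.0, "max_ratio": 0.2},
    "special_characters_filter": {"min_ratio": 0.0, "max_ratio": 0.25},
}
samples = [
    "plain text sample " * 20,   # heavy word repetition -> filtered out
    "@@@###%%%",                 # almost all special characters -> filtered out
    "A clean, reasonably varied sentence about data quality.",
]
print(apply_recipe(samples, refined_recipe))
```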
2309.02033#49
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
49
ReAct (Yao et al., 2022b) is a language agent grounded to various digital environments (e.g., a Wikipedia API, text games, websites). Like SayCan, it lacks semantic or episodic memory and therefore has no retrieval or learning actions. Its action space consists of (internal) reasoning and (external) grounding. Its decision cycle is fixed: a single reasoning action analyzes the situation and (re)makes action plans, and a grounding action is then generated without evaluation or selection stages. ReAct can be considered the simplest language agent that leverages both internal and external actions, and it was the initial work to demonstrate their synergistic effects: reasoning helps guide acting, while acting provides environmental feedback to support reasoning.
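The fixed reason-then-act cycle described here can be sketched as a short loop in which the model and the environment are stubbed out; `llm` and `env_step` below are placeholders (assumptions), not the ReAct authors' code.

```python
def llm(prompt):
    """Stub for a language-model call; a real agent would query an actual LLM here."""
    if prompt.endswith("Thought:"):
        return "I should search for the answer."
    return "search[question]"

def env_step(action):
    """Stub for the external environment (e.g., a Wikipedia API or text game)."""
    return f"observation for {action}"

def react_episode(task, max_steps=3):
    """ReAct-style loop: reason about the trajectory, act, observe, repeat."""
    trajectory = f"Task: {task}"
    for _ in range(max_steps):
        thought = llm(trajectory + "\nThought:")                       # internal reasoning action
        action = llm(trajectory + f"\nThought: {thought}\nAction:")    # external grounding action
        observation = env_step(action)
        trajectory += f"\nThought: {thought}\nAction: {action}\nObservation: {observation}"
    return trajectory

print(react_episode("What year was the Eiffel Tower completed?"))
```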
2309.02427#49
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
50
(4) Analyze the refined dataset. As in step (1), we analyze the refined dataset again to obtain a new data probe. Based on the statistics and visualization results, we assess the degree of improvement in data quality. If the refined data fails to meet our expectations, we revert to step (2) to manually adjust the data recipe or employ our HPO tool for automatic refinement (see Sec. 4.1).

(5) Get LLMs with the refined dataset. We can then train or fine-tune LLMs with the refined dataset and the training frameworks integrated into Data-Juicer (Sec. 4.3). During the training or fine-tuning process, our auto-evaluation tools offer timely, multi-view assessments of LLMs, inspecting numerous metrics across multiple evaluation datasets. This allows us to halt the process early if the refined data weakens LLM performance, thereby preventing unnecessary costs.

(6) Collate results and compare with reference models. Finally, Data-Juicer automatically collates the evaluation results and compares them with reference models in the data leaderboard, providing a clear representation of the effects of data processing alone. Consequently, we derive either a superior LLM, which can be auto-registered as a reference model, or additional refining guidance from the LLM perspective to further enhance data recipes.
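Steps (5) and (6) boil down to comparing evaluation metrics against reference models and stopping early when the refined data hurts performance. A rough sketch of that bookkeeping with pandas is shown below; the benchmark names and scores are made up for illustration.

```python
import pandas as pd

def collate_leaderboard(results):
    """Build a leaderboard of averaged benchmark scores, best model first."""
    df = pd.DataFrame(results).set_index("model")
    df["avg_score"] = df.mean(axis=1)
    return df.sort_values("avg_score", ascending=False)

def should_stop_early(current_avg, reference_avg, tolerance=0.01):
    """Halt training if the model trained on refined data underperforms the reference."""
    return current_avg < reference_avg - tolerance

results = [
    {"model": "refined-data-llm", "benchmark_a": 0.62, "benchmark_b": 0.55},
    {"model": "reference-llm", "benchmark_a": 0.58, "benchmark_b": 0.54},
]
board = collate_leaderboard(results)
print(board)
print("stop early:", should_stop_early(board.loc["refined-data-llm", "avg_score"],
                                        board.loc["reference-llm", "avg_score"]))
```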
2309.02033#50
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
50
Voyager (Wang et al., 2023a) is a language agent grounded to the Minecraft API. Unlike SayCan, which grounds to perception via the learned value function, Voyager's grounding is text-only. It has a long-term procedural memory that stores a library of code-based grounding procedures, a.k.a. skills (e.g., "combatZombie", "craftStoneSword"). This library is hierarchical: complex skills can use simpler skills as sub-procedures (e.g., "combatZombie" may call "craftStoneSword" if no sword is in the inventory). Most impressively, its action space has all four kinds of actions: grounding, reasoning, retrieval, and learning (by adding new grounding

Footnotes to Table 2: (5) All agents contain some procedural memory (agent code and LLM weights), so here we only list writable procedural memory. (6) Special digital grounding, with the only external action being submitting a final answer.
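A hierarchical skill library of the kind Voyager maintains can be sketched as a mapping from skill names to code, where complex skills call simpler ones. The skills and the registry API below are illustrative assumptions, not Voyager's actual implementation.

```python
class SkillLibrary:
    """Toy procedural memory: named, code-based skills that may call one another."""

    def __init__(self):
        self.skills = {}

    def learn(self, name, fn):
        """Learning action: write a new grounding procedure into procedural memory."""
        self.skills[name] = fn

    def execute(self, name, state):
        """Retrieve and run a stored skill against the current (toy) world state."""
        return self.skills[name](self, state)

library = SkillLibrary()
library.learn("craftStoneSword",
              lambda lib, s: s.setdefault("inventory", []).append("stone_sword"))

def combat_zombie(lib, state):
    # Hierarchical composition: craft a sword first if none is in the inventory.
    if "stone_sword" not in state.get("inventory", []):
        lib.execute("craftStoneSword", state)
    return "zombie defeated with " + state["inventory"][-1]

library.learn("combatZombie", combat_zombie)
print(library.execute("combatZombie", {}))
```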
2309.02427#50
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
51
5.1 Configuring Your Data Recipe. Notably, we make the end-to-end data processing pipeline configurable in Data-Juicer, including processing environment parameters, OP lists, the tools used, and so on. This principle of all-in-one configuration ensures reproducibility and traceability, and simplifies changing data-processing specifications, thereby facilitating the formation of data recipes for further refinement and reuse, and enabling quantitative exploration and automatic optimization of data processing (Sec. 4.1). Specifically, built upon Jsonargparse [46], we provide unified, flexible, easy-to-use, and powerful configuration capabilities. The system automatically registers configuration items for OPs and tools, and accepts varying sources of configuration, such as command-line entries, YAML and Jsonnet files, environment variables, hard-coded default values, and mixtures of these for convenient incremental modification.
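The layered-configuration idea (defaults overridden by a config file, then by environment variables, then by command-line flags) can be imitated in a few lines. The sketch below uses plain argparse plus PyYAML as a simplified stand-in for Data-Juicer's Jsonargparse-based setup; the key names and the environment variable are assumptions for illustration.

```python
import argparse
import os

import yaml  # pip install pyyaml

DEFAULTS = {"np": 4, "text_key": "text", "export_path": "./out.jsonl"}

def load_config(argv=None):
    """Merge defaults < YAML file < environment variables < command-line flags."""
    parser = argparse.ArgumentParser(description="Toy recipe loader (not Data-Juicer's CLI).")
    parser.add_argument("--config", help="path to a YAML recipe file")
    parser.add_argument("--np", type=int, help="number of worker processes")
    parser.add_argument("--export-path", dest="export_path")
    args = parser.parse_args(argv)

    cfg = dict(DEFAULTS)
    if args.config:
        with open(args.config) as f:
            cfg.update(yaml.safe_load(f) or {})
    if "DJ_NP" in os.environ:  # hypothetical environment-variable override
        cfg["np"] = int(os.environ["DJ_NP"])
    cfg.update({k: v for k, v in vars(args).items() if k != "config" and v is not None})
    return cfg

print(load_config(["--np", "8"]))
```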
2309.02033#51
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
51
procedures). During a decision cycle, Voyager first reasons to propose a new task objective if one is missing from working memory, then reasons to propose a code-based grounding procedure to solve the task. In the next decision cycle, Voyager reasons over the environmental feedback to determine task completion. If successful, Voyager selects a learning action that adds the grounding procedure to procedural memory; otherwise, it uses reasoning to refine the code and re-executes it. The importance of long-term memory and procedural learning is empirically verified by comparison to baselines such as ReAct and AutoGPT, and to ablations without procedural memory. Voyager is shown to better explore areas, master the tech tree, and generalize zero-shot to unseen tasks.
2309.02427#51
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
52
For example, users can easily build their own config files via one of two recommended methodologies: "subtraction" or "addition". The "subtraction" approach starts from a pre-set configuration file containing all available OPs, tools, and their default parameters; users simply remove or re-order OPs and adjust parameters to their requirements (see the sketch below). Conversely, the "addition" approach lets users build configuration files from scratch, leveraging our extensive collection of more than 20 pre-built, high-quality, and diverse data recipes for pre-training, fine-tuning, English, Chinese, and other scenarios. More quantitative analysis of certain recipes is provided in our experiments (Sec. 7.1).
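A minimal sketch of the "subtraction" workflow, pruning and re-tuning a full preset recipe represented as a Python dict, is below; the preset contents and key names are invented for illustration and do not reflect Data-Juicer's actual presets.

```python
import copy

# Hypothetical "all OPs" preset; a real preset would list every built-in OP with defaults.
FULL_PRESET = {
    "process": [
        {"language_id_score_filter": {"lang": "en", "min_score": 0.8}},
        {"special_characters_filter": {"min_ratio": 0.0, "max_ratio": 0.25}},
        {"word_repetition_filter": {"rep_len": 10, "max_ratio": 0.5}},
        {"document_deduplicator": {"lowercase": True}},
    ]
}

def subtract(preset, drop_ops=(), overrides=None):
    """Derive a recipe by removing OPs from the preset and overriding selected parameters."""
    recipe = copy.deepcopy(preset)
    recipe["process"] = [op for op in recipe["process"] if next(iter(op)) not in drop_ops]
    for op in recipe["process"]:
        name = next(iter(op))
        op[name].update((overrides or {}).get(name, {}))
    return recipe

refined = subtract(
    FULL_PRESET,
    drop_ops={"language_id_score_filter"},
    overrides={"word_repetition_filter": {"max_ratio": 0.2}},
)
print(refined)
```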
2309.02033#52
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
52
Generative Agents (Park et al., 2023) are language agents grounded to a sandbox game affording interaction with the environment and other agents. Their action space also has all four kinds of actions: grounding, reasoning, retrieval, and learning. Each agent has a long-term episodic memory that stores events in a list. These agents use retrieval and reasoning to generate reflections on their episodic memory (e.g., "I like to ski now."), which are then written to long-term semantic memory. During decision-making, an agent retrieves relevant reflections from semantic memory, then reasons to make a high-level plan for the day. While executing the plan, the agent receives a stream of grounding observations; it can reason over these to maintain or adjust the plan. Tree of Thoughts (ToT) (Yao et al., 2023) can be seen as a special kind of language agent with only one external action: submitting a final solution to a reasoning problem (game of 24, creative writing, crossword puzzles). It has no long-term memory, and only reasoning in its internal action space, but it differs from all previous agents in its deliberate decision-making. During planning, ToT iteratively proposes, evaluates, and selects "thoughts" (reasoning actions) based on LLM reasoning, and systematically maintains them via a tree search algorithm to enable global exploration as well as local backtracking and foresight.
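The retrieval step in Generative Agents ranks memories by a combination of recency, importance, and relevance. Here is a rough sketch of that scoring rule under simple assumptions (exponential recency decay, with importance and relevance already pre-scored in [0, 1]); it is an approximation for illustration, not the paper's exact implementation.

```python
import math

def retrieval_score(memory, now, half_life_hours=24.0, weights=(1.0, 1.0, 1.0)):
    """Score = weighted sum of recency (exponential decay), importance, and relevance."""
    hours_since = (now - memory["timestamp"]) / 3600.0
    recency = math.exp(-math.log(2) * hours_since / half_life_hours)
    w_rec, w_imp, w_rel = weights
    return w_rec * recency + w_imp * memory["importance"] + w_rel * memory["relevance"]

def retrieve(memories, now, k=2):
    """Return the top-k memories for the current decision step."""
    return sorted(memories, key=lambda m: retrieval_score(m, now), reverse=True)[:k]

now = 1_000_000.0
memories = [
    {"text": "I like to ski now.", "timestamp": now - 3600, "importance": 0.8, "relevance": 0.9},
    {"text": "Had toast for breakfast.", "timestamp": now - 90000, "importance": 0.2, "relevance": 0.1},
]
print([m["text"] for m in retrieve(memories, now)])
```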
2309.02427#52
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
53
5.2 Dedicated Pluggable Tools To further enhance usability, facilitate system customization, and augment users' data handling capabilities, Data-Juicer includes an extensible collection of powerful dedicated tools that can be conveniently plugged into different stages of LLM data processing. Quality Classifier. As an illustrative example, we describe our text quality classifier for culling high-quality text from heterogeneous data sources like CommonCrawl. This tool reproduces the closed-source GPT-3 quality scorer [9]. Moreover, we have expanded its applicability to Chinese text and various code types. Encapsulated as a callable pipeline, this tool gives users the freedom to adapt it to various other scenarios. The classifier is backed by PySpark's standard Tokenizer or a SentencePiece model [50], along with HashingTF as the feature extractor. It then applies a binary logistic regression classifier to gauge the quality of a text (see the sketch below). We provide more empirical verification of these classifiers in Sec. 7.2.3. Enhanced Sampler for LLM data. In Data-Juicer, we have designed several advanced data sampling utilities specialized for handling large-scale data chunks in LLM development. Our solutions effectively streamline representative extraction, optimize processing time and resources, and meet the distinctive needs of LLM developers.
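The classifier pipeline described above can be approximated with off-the-shelf PySpark ML components. The following is a minimal sketch rather than Data-Juicer's exact implementation; the column names, input files, and feature dimensionality are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("quality-classifier-sketch").getOrCreate()

# Assumed schema: a "text" column and a binary "label" column
# (1 = high quality, 0 = low quality); both names are illustrative.
train_df = spark.read.json("train_samples.jsonl")

tokenizer = Tokenizer(inputCol="text", outputCol="tokens")
hashing_tf = HashingTF(inputCol="tokens", outputCol="features", numFeatures=1 << 18)
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[tokenizer, hashing_tf, lr])
model = pipeline.fit(train_df)

# Score raw documents: the predicted probability of the positive class
# can serve as a keep/drop quality signal.
scored = model.transform(spark.read.json("raw_corpus.jsonl"))
scored.select("text", "probability").show(5, truncate=80)
```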
2309.02033#53
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
53
# 6 Actionable Insights Compared to some recent empirical surveys around language agents (Mialon et al., 2023; Weng, 2023; Wang et al., 2023b), CoALA offers a theoretical framework grounded in the well-established research of cognitive architectures. This leads to a unique and complementary set of actionable insights. Agent design: thinking beyond monolithic designs for individual applications. Perhaps our most important suggestion is that agents should follow a systematic, modular design. CoALA can help practitioners in this regard: for example, it may be beneficial to consider whether an application requires semantic or episodic memory; whether the agent should be capable of modifying its semantic memory; and so on. Practically, just as standardized software is used across robotics platforms (Quigley, 2009; Macenski et al., 2022), a framework for language agents would consolidate technical investment and improve compatibility.
2309.02427#53
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
54
Our stratified sampling technique is noteworthy in this LLM data context. It capitalizes on information within the metadata or statistical fields, thus accommodating varied selection metrics when crafting an effective data sample. To ensure a comprehensive yet flexible representation of the data corpus, we consider various heterogeneous criteria such as document length, token count, the frequency of boolean predicates for post-conditional checks, and even linguistic diversity formulated via occurrences of verb-noun pairs (as shown in the pie plots in Figure 2). These dynamic criteria are tailored to distinct analytic needs and promote efficient data processing, seamlessly integrating with downstream OPs and tools (a simplified sketch follows below). Full Toolkit. As for other tools, readers can refer to Sec. 4 for an examination of multiple previously discussed tools, including tracer and analyzer (Sec. 4.2), and evaluator and reference models (Sec. 4.3). We diligently maintain and evolve the toolkit in Data-Juicer and make the full set publicly accessible.
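To illustrate the idea of stratified sampling over metadata or statistical fields, here is a minimal, library-agnostic sketch using pandas. The field name ("token_count"), input file, and binning scheme are illustrative assumptions, not Data-Juicer's actual interface.

```python
import pandas as pd

def stratified_sample(df: pd.DataFrame, field: str, n_bins: int, frac: float,
                      seed: int = 42) -> pd.DataFrame:
    """Sample `frac` of rows from each quantile bin of `field`,
    roughly preserving the field's overall distribution."""
    bins = pd.qcut(df[field], q=n_bins, duplicates="drop")
    return (
        df.groupby(bins, observed=True, group_keys=False)
          .apply(lambda g: g.sample(frac=frac, random_state=seed))
          .reset_index(drop=True)
    )

# Example: keep 1% of documents while matching the token-count distribution.
corpus = pd.read_json("stats_annotated_corpus.jsonl", lines=True)
subset = stratified_sample(corpus, field="token_count", n_bins=10, frac=0.01)
```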
2309.02033#54
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
54
• In academic research, standardized terms allow conceptual comparisons across works (Table 2), and open-source implementations would further facilitate modular plug-and-play and re-use. For example, the theoretical framework of Markov Decision Processes (Puterman, 2014) provides a standardized set of concepts and terminology (e.g., state, action, reward, transition) for reinforcement learning (Sutton and Barto, 2018). Correspondingly, empirical frameworks like OpenAI Gym (Brockman et al., 2016) provided standardized abstractions (e.g., obs, reward, done, info = env.step(action)) that facilitate empirical RL work. Thus, it would be timely and impactful to also implement useful abstractions (e.g., Memory, Action, Agent classes) for language agents, and cast simpler agents into such an empirical CoALA framework as examples for building more complex agents. • In industry applications, maintaining a single company-wide “language agent library” would reduce technical debt (Sculley et al., 2014; Lwakatare et al., 2020) by facilitating systematic testing and component re-use across individual agent deployments. It could also standardize the customer experience: rather than interacting with a hodgepodge of language agents developed by individual teams, end users would experience a context-specific instantiation of the same base agent.
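Returning to the first bullet above, the suggested abstractions could look roughly like the hypothetical sketch below. This is not an official CoALA implementation; all class and attribute names are assumptions chosen for illustration.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class Action:
    """A single internal (reasoning/retrieval/learning) or external (grounding) action."""
    kind: str                 # e.g. "reason", "retrieve", "learn", "ground"
    payload: Any = None


class Memory(ABC):
    """Base class for episodic/semantic/procedural memory modules."""

    @abstractmethod
    def read(self, query: Any) -> List[Any]: ...

    @abstractmethod
    def write(self, item: Any) -> None: ...


class Agent(ABC):
    """An agent owns memory modules and a decision procedure over actions."""

    def __init__(self, memories: Dict[str, Memory]):
        self.memories = memories
        self.working_memory: Dict[str, Any] = {}

    @abstractmethod
    def decide(self, observation: Any) -> Action:
        """Propose and select the next action given the current observation."""

    def step(self, observation: Any) -> Action:
        # A Gym-like entry point: observe, decide, and return the chosen action.
        return self.decide(observation)
```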
2309.02427#54
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
55
5.3 User-Friendly Experiences in Data-Juicer Data-Juicer is designed not just for functionality but also for adaptability, catering to an extensive user base with diverse expertise and skill sets. While abstracting the intricate system internals, we provide user-friendly interfaces and extensive customizable components. Accordingly, users can embark on zero-code data processing, engage in low-code customization, or delve into in-depth extensions for complex requirements. • Zero-Code Processing: For novice users, Data-Juicer supplies a series of ready-to-use data recipes and plug-in tools for immediate use. This requires no knowledge of advanced system architectures or OPs, as discussed in Sec. 5.1 and Sec. 5.2. • Low-Code Customization: Intermediate users can enjoy the flexibility to alter configurations, data, and external resources to suit their specific needs. They can readily reuse, combine, and edit built-in data configurations; customize quality classifiers and tokenizers; refine data based on our pre-developed recipes; or provide fresh links to auxiliary models or vocabularies from our unified, routinely updated public cloud drive.
2309.02033#55
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
55
• LLMs vs. code in agent design. CoALA agents possess two forms of procedural memory: agent code (deterministic rules) and LLM parameters (a large, stochastic production system). Agent code is interpretable and extensible, but often brittle in the face of stochasticity and limited to situations the designer anticipates. In contrast, LLM parameters are hard to interpret, but offer significant zero-shot flexibility in new contexts (Huang et al., 2022b). CoALA thus suggests using code sparingly to implement generic algorithms that complement LLM limitations, e.g., implementing tree search to mitigate the myopia induced by autoregressive generation (Yao et al., 2023; Hao et al., 2023); see the sketch below. Structured reasoning: thinking beyond prompt engineering. Early work on prompt engineering manipulated the LLM’s input and output via low-level string operations. CoALA suggests a more structured reasoning procedure to update working memory variables.
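To make the tree-search suggestion concrete, here is a minimal sketch of breadth-limited search over LLM-proposed continuations. The `propose` and `evaluate` callables stand in for LLM calls and are purely hypothetical; this illustrates the general pattern rather than any surveyed system.

```python
from typing import Callable, List, Tuple

def tree_search(root: str,
                propose: Callable[[str], List[str]],
                evaluate: Callable[[str], float],
                depth: int = 3,
                beam: int = 5) -> str:
    """Expand each partial solution with LLM proposals, keep the top-`beam`
    candidates by an LLM (or heuristic) value estimate, repeat `depth` times."""
    frontier: List[Tuple[float, str]] = [(evaluate(root), root)]
    for _ in range(depth):
        candidates: List[Tuple[float, str]] = []
        for _, state in frontier:
            for nxt in propose(state):          # e.g. sampled continuations
                candidates.append((evaluate(nxt), nxt))
        if not candidates:
            break
        candidates.sort(key=lambda x: x[0], reverse=True)
        frontier = candidates[:beam]            # mitigates myopic greedy decoding
    return frontier[0][1]
```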
2309.02427#55
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
56
• Advanced Extension: Experienced users can easily introduce new OPs by deriving from base classes and implementing their specific “process()” and “compute_stats()” functions, as demonstrated in the code Listing 1 (see also the sketch below). This grants users an end-to-end view of the process for a single sample, while Data-Juicer handles the nitty-gritty of configuration registration and efficiency optimization. Additionally, Data-Juicer’s decoupled design facilitates the smooth incorporation of new tools for users at all stages of LLM data processing, ranging from novel visualization dimensions and evaluation datasets to pre- or post-processing scripts. To enhance the ease of adoption and use of Data-Juicer, apart from the numerous pre-built data recipes (refer to Sec. 5), we also provide a series of interactive demos, implemented in Streamlit, for varied profiles and scenarios. This hands-on approach is designed to enable users of varying skill levels to quickly familiarize themselves with and effectively use Data-Juicer. 6 COMPREHENSIVE SYSTEM OPTIMIZATION To handle large-scale data (Challenge 4 in Sec. 1), we employ a series of optimizations in Data-Juicer from various aspects.
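Returning to the Advanced Extension point above, the following is a minimal sketch of what a user-defined Filter OP could look like. The base class, registration decorator, and field names are stand-ins defined inline for illustration; Data-Juicer's actual base classes and registration mechanism may differ in detail.

```python
# Minimal stand-ins for Data-Juicer's base class and registry; the real ones
# live inside the Data-Juicer package and differ in detail.
OPERATORS = {}

class Filter:
    def __init__(self, **kwargs):
        self.config = kwargs

def register_op(name):
    def wrap(cls):
        OPERATORS[name] = cls
        return cls
    return wrap


@register_op("max_digit_ratio_filter")
class MaxDigitRatioFilter(Filter):
    """Keep samples whose ratio of digit characters stays below a threshold."""

    def __init__(self, max_ratio: float = 0.3, **kwargs):
        super().__init__(**kwargs)
        self.max_ratio = max_ratio

    def compute_stats(self, sample: dict) -> dict:
        text = sample["text"]
        digits = sum(ch.isdigit() for ch in text)
        # Record the per-sample statistic so it can be reused, visualized, or traced.
        sample.setdefault("stats", {})["digit_ratio"] = digits / max(len(text), 1)
        return sample

    def process(self, sample: dict) -> bool:
        # Return True to keep the sample, False to filter it out.
        return sample["stats"]["digit_ratio"] <= self.max_ratio


sample = {"text": "Data-Juicer processes 50 billion tokens."}
op = MaxDigitRatioFilter(max_ratio=0.3)
print(op.process(op.compute_stats(sample)))  # True: the digit ratio is low
```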
2309.02033#56
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
56
• Prompting frameworks like LangChain (LangChain, 2022) and LlamaIndex (LlamaIndex, 2023) can be used to define higher-level sequences of reasoning steps, reducing the reasoning burden per LLM call and the effort of low-level prompt crafting. Structured output parsing solutions such as Guidance (Guidance, 2023) and OpenAI function calling (OpenAI, 2023b) can help update working memory variables systematically (a minimal sketch of this pattern follows below). Defining and building good working memory modules will also be an important direction of future research. Such modules may be especially important for industry solutions where LLM reasoning needs to seamlessly integrate with large-scale code infrastructure. • Reasoning use cases in agents can inform and reshape LLM training in terms of the types (e.g., reasoning for self-evaluation, reflection, action generation, etc.) and formats (e.g., CoT (Wei et al., 2022b), ReAct (Yao et al., 2022b), Reflexion (Shinn et al., 2023)) of training instances. By default, existing LLMs are trained and optimized for NLP tasks, but agent applications have explored new modes of LLM reasoning (e.g., self-evaluation) that have proven broadly useful. LLMs trained or finetuned towards these capabilities will more likely be the backbones of future agents.
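As an illustration of updating working memory variables from structured LLM output rather than raw strings, here is a small hypothetical sketch. The JSON schema and the `call_llm` stub are assumptions; frameworks such as Guidance or function calling would enforce the structure more robustly.

```python
import json
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class WorkingMemory:
    goal: Optional[str] = None
    plan: List[str] = field(default_factory=list)
    last_observation: Optional[str] = None


def call_llm(prompt: str) -> str:
    """Stub for an LLM call constrained to emit JSON, e.g. via function calling."""
    raise NotImplementedError


def update_memory(memory: WorkingMemory, observation: str) -> WorkingMemory:
    prompt = (
        "Given the observation below, return JSON with keys "
        '"goal" (string) and "plan" (list of strings).\n'
        f"Observation: {observation}"
    )
    parsed = json.loads(call_llm(prompt))     # structured, not free-form, output
    memory.goal = parsed.get("goal", memory.goal)
    memory.plan = parsed.get("plan", memory.plan)
    memory.last_observation = observation
    return memory
```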
2309.02427#56
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
57
Optimized Computation: Context Management, Operator (OP) Fusion, and Reordering. To elevate computational efficiency in LLM data processing, we provide advanced context management, operator fusion, and operator reordering techniques. The context manager meticulously handles shared intermediate variables, such as segmented words, split lines, and others derived from the original textual corpus, across different operators. It allows seamless reuse of these context variables across multiple operators, thereby avoiding computationally expensive re-evaluations. Built on the context manager, the proposed operator fusion method is another new contribution to the field. We identify fusible operators that either share the same contexts or computation sub-procedures. The method first detects OP groups; successive OPs within the same group must be commutative with each other. It then amalgamates the identified fusible operators in each group into a single fused OP, enabling them to execute faster with a larger localized perspective. The contexts of each sample are cleaned up after each fused OP, so little extra memory is required for context management and operator fusion.
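To illustrate the fusion idea, here is a simplified sketch: operators that declare the same required context are grouped and wrapped into a single fused operator that computes the shared context once per sample. All class and attribute names are illustrative assumptions, not Data-Juicer's internal API.

```python
from collections import defaultdict
from typing import Callable, Dict, List


class SketchOP:
    """A toy operator that needs one shared context (e.g. 'words' or 'lines')."""
    def __init__(self, name: str, context_key: str, fn: Callable[[dict, list], dict]):
        self.name, self.context_key, self.fn = name, context_key, fn

    def __call__(self, sample: dict, context: list) -> dict:
        return self.fn(sample, context)


def fuse(ops: List[SketchOP]) -> Callable[[dict], dict]:
    """Group ops by the context they need; compute each context once per sample."""
    groups: Dict[str, List[SketchOP]] = defaultdict(list)
    for op in ops:
        groups[op.context_key].append(op)

    def fused(sample: dict) -> dict:
        for key, group in groups.items():
            context = sample["text"].split() if key == "words" else sample["text"].splitlines()
            for op in group:                 # commutative ops: order within a group is free
                sample = op(sample, context)
        # the context is discarded here, so fusion adds little memory overhead
        return sample

    return fused


count_words = SketchOP("count_words", "words",
                       lambda s, ctx: {**s, "num_words": len(ctx)})
count_lines = SketchOP("count_lines", "lines",
                       lambda s, ctx: {**s, "num_lines": len(ctx)})
fused_op = fuse([count_words, count_lines])
print(fused_op({"text": "hello world\nsecond line"}))
```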
2309.02033#57
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
57
Long-term memory: thinking beyond retrieval augmentation. While traditional retrieval-augmented language models (Guu et al., 2020; Lewis et al., 2020; Borgeaud et al., 2022) only read from human-written corpora, memory-augmented language agents can both read and write self-generated content autonomously. This opens up numerous possibilities for efficient lifelong learning. • Combining existing human knowledge with new experience and skills can help agents bootstrap to learn efficiently. For example, a code-writing agent could be endowed with semantic programming knowledge in the form of manuals or textbooks. It could then generate its own episodic knowledge from experience; reflect on these experiences to generate new semantic knowledge; and gradually create procedural knowledge in the form of a code library storing useful methods. • Integrating retrieval and reasoning can help to better ground planning. Recent computational psychological models implicate an integrated process of memory recall and decision-making (Zhou et al., 2023a; Zhao et al., 2022) – suggesting that adaptive mechanisms interleaving memory search and forward simulation will allow agents to make the most of their knowledge. Learning: thinking beyond in-context learning or finetuning. CoALA’s definition of “learning” encompasses these methods, but extends further to storing new experience or knowledge, or writing new agent code (Section 4.5). Important future directions include:
2309.02427#57
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
58
Because a single fused OP takes longer to run, we further design an operator reordering strategy to optimize the execution sequence of the OP list after fusion. For example, based on the commutativity of Filters, we delay the time-consuming OPs (such as fused Filters) and prioritize other, less time-consuming OPs. As a result, the time-consuming OPs only need to handle fewer samples, because the preceding operators have already filtered some out, enhancing overall computational efficiency (a simplified sketch follows below). Figure 6: The OP fusion procedure for an OP list.
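A simplified sketch of the reordering heuristic: given per-OP cost estimates, commutative filters are sorted so that cheaper ones run first and expensive (e.g. fused) ones see fewer samples. The cost field is an illustrative assumption.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class FilterSpec:
    name: str
    est_cost: float        # estimated per-sample processing time
    # Filters are assumed commutative, so reordering preserves the final result.


def reorder_filters(filters: List[FilterSpec]) -> List[FilterSpec]:
    """Run cheaper filters first so expensive (e.g. fused) filters process fewer samples."""
    return sorted(filters, key=lambda f: f.est_cost)


pipeline = [
    FilterSpec("fused_heavy_filter", est_cost=5.0),
    FilterSpec("length_filter", est_cost=0.1),
    FilterSpec("language_id_filter", est_cost=1.2),
]
print([f.name for f in reorder_filters(pipeline)])
# ['length_filter', 'language_id_filter', 'fused_heavy_filter']
```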
2309.02033#58
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
58
• Meta-learning by modifying agent code would allow agents to learn more effectively. For example, learning better retrieval procedures could enable agents to make better use of their experience. Recent expansion-based techniques (Nogueira et al., 2019; Wang et al., 2023c; Tang et al., 2023a) could allow agents to reason about when certain knowledge would be useful, and store this as metadata to facilitate later recall. These forms of meta-learning would enable agents to go beyond human-written code, yet are understudied due to their difficulty and risk. • New forms of learning (and unlearning) could include fine-tuning smaller models for specific reasoning sub-tasks (Zelikman et al., 2022; Huang et al., 2022a; Ahn et al., 2022), deleting unneeded memory items for “unlearning” (Nguyen et al., 2022c), and studying the interaction effects between multiple forms of learning (Tuyls et al., 2022; Park et al., 2023; Xie et al., 2023; Khattab et al., 2022).
2309.02427#58
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
59
Figure 6: The OP fusion procedure for an OP list. The whole procedure of OP fusion is summarized in Figure 6. These amalgamation strategies serve dual purposes. Firstly, they minimize redundant computation, eliminating the need to repeat shared computations. Secondly, they mitigate the overhead of initializing multiple processes by reducing the total count of processing OPs, thus maintaining expeditious data processing routines. Optimized Space Utilization: Caching OPs and Compression. Recognizing the inadequacies of the original cache management protocol in the Huggingface-datasets library, especially pertaining to the handling of non-serializable third-party models and functions in certain OPs, we design a dedicated hashing method that bypasses the serialization of those non-serializable objects. This ensures successful caching of each OP and permits Data-Juicer to leverage optimal cache management.
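The idea of hashing an OP's configuration while bypassing non-serializable members can be sketched roughly as follows. This is an illustrative approximation, not Data-Juicer's actual fingerprinting code; the fallback of hashing a type name plus an assumed `version` attribute is a labeled assumption.

```python
import hashlib
import pickle


def op_fingerprint(op) -> str:
    """Build a stable cache key for an operator, skipping members
    (e.g. third-party model objects) that cannot be pickled."""
    parts = []
    for name, value in sorted(vars(op).items()):
        try:
            parts.append(name.encode() + pickle.dumps(value))
        except Exception:
            # Non-serializable object: fall back to its type name and an
            # assumed `version` attribute instead of its full state.
            version = getattr(value, "version", "unversioned")
            parts.append(f"{name}:{type(value).__name__}:{version}".encode())
    return hashlib.sha256(b"|".join(parts)).hexdigest()
```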
2309.02033#59
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
59
Action space: thinking beyond external tools or actions. Although “action space” is a standard term in reinforcement learning, it has been used sparingly with language agents. CoALA argues for defining a clear and task-suitable action space with both internal (reasoning, retrieval, learning) and external (grounding) actions, which will help systematize and inform agent design. • Size of the action space. More capable agents (e.g., Voyager, Generative Agents) have larger action spaces – which in turn means they face a more complex decision-making problem. As a result, these agents rely on more customized or hand-crafted decision procedures. The tradeoff between action-space size and decision-making complexity is a basic problem to consider before agent development, and taking the minimal action space necessary to solve a given task might be preferred.
2309.02427#59
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
60
Furthermore, we incorporated the ability for users to activate advanced compression technologies, such as Zstandard (zstd) [23] and LZ4 [64], in Data-Juicer. It automatically compresses cache files after each OP and decompresses them back to normal cache files when the OP is rerun with the same configuration. Compared with the processing time, the compression/decompression time is relatively negligible due to the high efficiency of the compression technologies mentioned above. This feature substantially reduces the volume of cache data storage, facilitating the processing of larger datasets without compromising speed or stability. Optimized Scalability: Distributed Data Processing. The volume of LLM training data can be extremely large, making it difficult to process on a single machine. Data-Juicer meshes with distributed processing frameworks such as Ray [66], Apache Beam [5], and Apache Flink [12], and offers the ability to seamlessly translate a data processing pipeline running on a single node into one running on a multi-node cluster. In this way, cluster computing resources can be utilized to accelerate data processing and generation.
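As an illustration of the cache compression described above, the following sketch compresses an OP's cache file with the zstandard library once the OP finishes and restores it before a rerun. It is a minimal sketch, not Data-Juicer's internal implementation; the helper names and the ".zst" file-path convention are assumptions.

```python
import os
import zstandard as zstd  # pip install zstandard

def compress_cache(cache_path: str, level: int = 3) -> str:
    """Compress an OP's cache file after the OP completes (hypothetical helper)."""
    compressed_path = cache_path + ".zst"
    cctx = zstd.ZstdCompressor(level=level)
    with open(cache_path, "rb") as src, open(compressed_path, "wb") as dst:
        cctx.copy_stream(src, dst)
    os.remove(cache_path)  # keep only the compressed copy to save disk space
    return compressed_path

def restore_cache(compressed_path: str) -> str:
    """Decompress a cache file back before rerunning the same OP configuration."""
    cache_path = compressed_path[: -len(".zst")]
    dctx = zstd.ZstdDecompressor()
    with open(compressed_path, "rb") as src, open(cache_path, "wb") as dst:
        dctx.copy_stream(src, dst)
    return cache_path
```

LZ4 could be swapped in with the same structure via the lz4.frame module; the usual trade-off is somewhat faster (de)compression at a lower compression ratio.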
2309.02033#60
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
60
• Safety of the action space. Some parts of the action space are inherently riskier. “Learning” actions (especially procedural deletion and modification) could cause internal harm, while “grounding” actions (e.g., “rm” in bash terminal, harmful speech in human dialog, holding a knife in physical environments) could cause external harm. Today, safety measures are typically task-specific heuristics (e.g., remove “os” operations in Python (Chen et al., 2021), filter keywords in dialog (Chowdhery et al., 2022; Driess et al., 2023), limit robots to controlled environments (Ahn et al., 2022)). However, as agents are grounded to more complex environments with richer internal mechanisms, it may be necessary to specify and ablate the agent’s action space for worst-case scenario prediction and prevention (Yao and Narasimhan, 2023). Decision making: thinking beyond action generation. We believe one of the most exciting future directions for language agents is decision-making: as detailed in Section 4.6, most works are still confined to proposing (or directly generating) a single action. Present agents have just scratched the surface of more deliberate, propose-evaluate-select decision-making procedures.
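A minimal sketch of the kind of task-specific safety heuristic mentioned above, implemented as a deny-list over proposed code or shell actions; the blocked patterns and function name are illustrative assumptions rather than a recommendation from the paper.

```python
import re

# Illustrative deny-list for code or shell actions proposed by an agent.
BLOCKED_PATTERNS = [
    r"\bos\.(system|remove|rmdir)\b",   # risky "os" operations in Python
    r"\brm\s+-rf\b",                    # destructive shell command
    r"\bshutil\.rmtree\b",
]

def is_action_safe(action: str) -> bool:
    """Return False if the proposed action matches any blocked pattern."""
    return not any(re.search(pattern, action) for pattern in BLOCKED_PATTERNS)

# Example: filter an agent's proposed actions before execution.
proposals = ["print(open('notes.txt').read())", "os.system('rm -rf /tmp/data')"]
safe_actions = [a for a in proposals if is_action_safe(a)]
```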
2309.02427#60
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
61
Specifically, we adapt the underlying interfaces of HuggingFace-datasets to those of Ray-datasets, such that all OPs of Data-Juicer, even when written as single-machine Python functions, can be executed in a distributed mode with the help of automatic data partitioning by Ray. An alternative approach we support is to replace the default Ray runner of Data-Juicer with other distributed processing back-ends such as Flink, via pre-translations of Data-Juicer's processing pipelines into Beam-compatible ones. As a result, almost all the OPs within Data-Juicer (Mapper, Filter, and Deduplicator) can be accelerated in a multi-node cluster, which effectively alleviates the bottlenecks on a single node (even with process-based parallelism) caused by memory capacity and IO throughput. More empirical results can be found in Sec. 7.2.4. In a nutshell, all of these optimizations enhance Data-Juicer's scalability from various views, to handle the vast amount of data involved in LLMs, ensuring robust and efficient processing while minimizing resource requirements.
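The general pattern of executing a single-machine, per-sample function over a Ray dataset with automatic data partitioning can be sketched as follows. This is a simplified illustration of the idea, not Data-Juicer's actual adapter code; the sample data and length threshold are assumptions.

```python
import ray

ray.init()  # connect to an existing cluster, or start a local one

# A plain single-machine predicate, analogous to a Data-Juicer Filter OP.
def keep_long_enough(sample: dict, min_chars: int = 20) -> bool:
    return len(sample["text"]) >= min_chars

samples = [{"text": "too short"}, {"text": "a sufficiently long training sample ..."}]
ds = ray.data.from_items(samples)       # Ray partitions the data automatically
kept = ds.filter(keep_long_enough)      # the same Python function, now run distributed
print(kept.count(), "samples kept")
```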
2309.02033#61
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
61
• Mixing language-based reasoning and code-based planning may offer the best of both worlds. Existing approaches either plan directly in natural language (Huang et al., 2022c; Ahn et al., 2022) or use LLMs to translate from natural language to structured world models (Wong et al., 2023; Liu et al., 2023a; Zhang et al., 2023a; Li et al., 2023a; Guan et al., 2023; Silver et al., 2022; 2023). Future work could integrate these: just as Soar incorporates a simulator for physical reasoning (Laird, 2022), agents may write and execute simulation code on the fly to evaluate the consequences of plans. See Section 7 for more discussion. • Extending deliberative reasoning to real-world settings. Initial works have implemented classical planning and tree search (Yao et al., 2023; Hao et al., 2023; Liu et al., 2023a; Dagan et al., 2023), using toy tasks such as game of 24 or block building. Extending these schemes to more complicated tasks with grounding (Qin et al., 2023) and long-term memory is an exciting direction.
2309.02427#61
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
62
7 EVALUATION OF DATA-JUICER 7.1 Making Better Data Recipes The value of an effective LLM data processing system is reflected not only in its comprehensive and flexible operability but also in its capacity to produce high-quality data that LLMs can more readily “digest”. Data-Juicer provides specialized features for exploring and making data recipes tailored to LLMs, and we have developed numerous ready-to-use data recipes using Data-Juicer. In this section, we evaluate the quality of data recipes generated by Data-Juicer for both LLM pre-training and fine-tuning. 7.1.1 Refined Pre-training Data Recipes. The pre-training data we produced consists solely of publicly available sources, exemplifying the core principles of transparency and reproducibility. Specifically, we choose to improve two widely-used, high-quality datasets for LLMs, TogetherAI’s RedPajama [24] and EleutherAI’s Pile [31], which were curated from 15 highly diverse text sources and subjected to meticulous pre-processing and cleaning to ensure their quality. With the help of Data-Juicer, we further refine them via data analysis, merging, and quality enhancement, employing dozens of OPs with varied configurations. For detailed statistics, processing steps, and refined data recipes, please refer to Appendix B.2.
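To make "employing dozens of OPs with varied configurations" more concrete, here is a heavily simplified, hypothetical recipe sketch; the OP names, thresholds, and helper code are illustrative and do not reproduce the released Data-Juicer recipes.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class FilterOp:
    """A named filter OP with its own configuration baked into the predicate."""
    name: str
    keep: Callable[[dict], bool]

def apply_recipe(samples: Iterable[dict], ops: list[FilterOp]) -> list[dict]:
    """Apply a sequence of filter OPs, mimicking a (much simplified) data recipe."""
    out = list(samples)
    for op in ops:
        out = [s for s in out if op.keep(s)]
    return out

# Hypothetical recipe: keep reasonably long texts with enough lexical diversity.
recipe = [
    FilterOp("min_length_filter", lambda s: len(s["text"]) > 50),
    FilterOp("word_diversity_filter",
             lambda s: len(set(s["text"].split())) / max(len(s["text"].split()), 1) > 0.3),
]
cleaned = apply_recipe([{"text": "a short but sufficiently varied example document for testing"}], recipe)
```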
2309.02033#62
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
62
• Metareasoning to improve efficiency. LLM calls are both slow and computationally intensive. Using LLMs for decision-making entails a balance between their computational cost and the utility of the resulting improved plan. Most LLM reasoning methods fix a search budget by specifying a depth of reasoning (Yao et al., 2023), but humans appear to adaptively allocate computation (Russek et al., 2022; Lieder and Griffiths, 2020; Callaway et al., 2022; Gershman et al., 2015). Future work should develop mechanisms to estimate the utility of planning (Laidlaw et al., 2023) and modify the decision procedure accordingly, either via amortization (fine-tuning the LLM based on the results of previous actions, e.g. Nguyen, 2023; Hamrick et al., 2019), routing among several decision sub-procedures (e.g., ReAct (Yao et al., 2022b) investigated backing off to CoT (Wei et al., 2022b) and vice versa), or updates to the decision-making procedure.
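As a toy illustration of "routing among several decision sub-procedures" (e.g., backing off from one prompting strategy to another), the sketch below shows a generic fallback router; the procedure names and the success criterion are hypothetical.

```python
from typing import Callable, Optional

def route_with_fallback(task: str,
                        procedures: list[Callable[[str], Optional[str]]]) -> Optional[str]:
    """Try each decision procedure in order; fall back when one returns no answer."""
    for propose in procedures:
        answer = propose(task)
        if answer is not None:  # hypothetical success criterion
            return answer
    return None

# Hypothetical sub-procedures: a cheap direct attempt, then a costlier deliberate one.
def quick_answer(task: str) -> Optional[str]:
    return None  # pretend the cheap procedure failed on this task

def deliberate_answer(task: str) -> Optional[str]:
    return f"deliberate plan for: {task}"

result = route_with_fallback("book a flight", [quick_answer, deliberate_answer])
```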
2309.02427#62
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
63
To verify the quality of the data recipes derived by Data-Juicer, we use the original RedPajama and Pile, and our refined datasets, to pre-train LLMs with the mainstream LLaMA architecture and assess the models’ performance across 16 core HELM tasks. We keep the training configurations the same while only modifying the training data. Detailed hyper-parameters are in Appendix B.3.1. The average scores over the 16 tasks are visualized in Figure 7, where we evaluated checkpoints throughout the pre-training process at an increasing number of training tokens: 50B, 100B, and 150B. Notably, through fair comparisons with equivalent training tokens, LLMs pre-trained on Data-Juicer-recipes consistently outperformed those using only RedPajama or its union with the Pile, reinforcing the usefulness and effectiveness of Data-Juicer.
2309.02033#63
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
63
• Calibration and alignment. More complex decision-making is currently bottlenecked by issues such as over-confidence and miscalibration (Jiang et al., 2021; Braverman et al., 2020; Chen et al., 2022), misalignment with human values or bias (Liang et al., 2021; Feng et al., 2023), hallucinations in self-evaluation (Shinn et al., 2023), and the lack of human-in-the-loop mechanisms in the face of uncertainties (Nguyen et al., 2022a; Ren et al., 2023). Solving these issues will significantly improve LLMs’ utility as agent backbones. 7 Discussion. Internal vs. external: what is the boundary between an agent and its environment? While humans or robots are clearly distinct from their embodied environment, digital language agents have less clear boundaries. For example, is a Wikipedia database an internal semantic memory or an external digital environment (Yao et al., 2022b)? If an agent iteratively executes and improves code before submitting an answer (Shinn et al., 2023; Yang et al., 2023), is the code execution internal or external? If a method consists of proposal and evaluation prompts (Yao et al., 2023), should it be considered a single agent or two collaborating simpler agents (a proposer and an evaluator)?
2309.02427#63
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
64
Moreover, we compare Data-Juicer-models with several SOTA baselines and summarize the results in Table 2. With only half the data volume (150B tokens), LLaMA-1.3B pre-trained on the Data-Juicer recipe outperformed Pythia-1.4B [6] (300B tokens), and even beats the highly competitive Falcon-1.3B [71] trained on 350B tokens. Notably, we further labeled 17 subsets from Alpaca-CoT (a collection of 39 public fine-tuning datasets) with the “Instruct Fine-Tuning (IFT)” tag and performed data mixing and processing using Data-Juicer. Following the usual practice [105], we incorporate these large-volume IFT data into the pre-training phase and execute continuous [Figure 7: Evaluation results of reference models trained with different datasets but the same pre-training procedures (average score on 16 tasks vs. #tokens (B) for pre-training LLaMA-1.3B; curves: RedPajama+Pile (Data-Juicer), RedPajama+Pile, RedPajama). Data-Juicer’s data recipe gains consistent improvements over baselines.]
2309.02033#64
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
64
We suggest the boundary question can be answered in terms of controllability and coupling. For example, Wikipedia is not controllable: it is an external environment that may be unexpectedly modified by other users. However, an offline version that only the agent may write to is controllable, and thus can be considered an internal memory. Similarly, code execution on an internal virtual environment should be considered an internal reasoning action, whereas code execution on an external machine (which may possess security vulnerabilities) should be considered an external grounding action. Lastly, if aspects of the agent (such as proposal and evaluation prompts) are designed for and dependent on each other, then they are tightly coupled and best conceptualized as components of an individual agent. In contrast, if the steps are independently useful, a multi-agent perspective may be more appropriate. While these dilemmas are primarily conceptual, such understanding can support systematic agent design and help the field align on shared terminology. Practitioners may also just choose their preferred framing, as long as it is consistent and useful for their own work.
2309.02427#64
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
65
training upon the checkpoint of Data-Juicer (RedPajama+Pile)-150B. As reflected in the last two rows of Table 2, Data-Juicer gains a further 4.9% relative improvement over the original Alpaca-CoT-IFT while utilizing only ∼30% of the data volume.

Table 2: The average score of the pre-trained LLMs on the 16 HELM core tasks. Individual task results and data recipes are detailed in Appendix B.4. “IFT” denotes the datasets tagged with “Instruct Fine-Tuning” in our context.

Model            | Training Data                | #Tokens     | Score
Falcon-1.3B [41] | RefinedWeb                   | 350B        | 33.97
Pythia-1.4B [29] | Pile                         | 300B        | 33.96
LLaMA-1.3B       | Data-Juicer (RedPajama+Pile) | 150B        | 34.21
                 | + Alpaca-CoT-IFT             | 150B + 15B  | 35.04
                 | + Our Refined IFT            | 150B + 4.7B | 36.76
2309.02033#65
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
65
Physical vs. digital: what differences beget attention? While animals only live once in the physical world, digital environments (e.g., the Internet) often allow sequential (via resets) and parallel trials. This means digital agents can more boldly explore (e.g., open a million webpages) and self-clone for parallel task solving (e.g., a million web agents try different web paths), which may result in decision-making procedures different from current ones inspired by human cognition (Griffiths, 2020). Learning vs. acting: how should agents continuously and autonomously learn? In the CoALA framework, learning is a result action of a decision-making cycle just like grounding: the agent deliberately chooses to commit information to long-term memory. This is in contrast to most agents, which simply fix a learning schedule and only use decision-making for external actions. Biological agents, however, do not have this luxury: they must balance learning against external actions in their lifetime, choosing when and what to learn (Mattar and Daw, 2018). More flexible language agents (Wang et al., 2023a; Park et al., 2023) would follow a similar design and treat learning on par with external actions. Learning could be proposed as a possible action during regular decision-making, allowing the agent to “defer” it until the appropriate time.
2309.02427#65
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
66
Taken together, these findings underscore the potential of the Data-Juicer system to generate high-quality data and verify the excellence of Data-Juicer-recipes in terms of enhancing LLM performance while reducing LLM training costs. 7.1.2 Refined Fine-tuning Data Recipes. For the Alpaca-CoT collection, besides the “IFT” tag as validated in Table 2, we also labeled datasets within it with “Chat Fine-Tuning (CFT)” for enhanced dialog ability and better alignment with human values. To examine their quality, we first use the CFT and EN tags to filter out several competitive subsets, and then generate two new equal-size datasets, by random sampling and by our designed recipe, respectively. Then we conduct fine-tuning on the generated datasets based on the open-source mainstream architecture, English LLaMA-7B [34]. Similarly, we replace the tag “EN” with “ZH” and use a SOTA LLaMA-2-7B variant [42] for the Chinese scenario. Statistics of these datasets and training hyper-parameters are in Appendix B.3.2. For a thorough and comparative performance evaluation, we used the GPT-4 API for pairwise scoring and tallying of wins and ties.
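The pairwise scoring and win/tie tallying with the GPT-4 API can be sketched roughly as follows; the judging prompt, model name, and parsing logic are assumptions for illustration and do not claim to reproduce the paper's exact evaluation protocol.

```python
from collections import Counter
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_pair(question: str, answer_a: str, answer_b: str) -> str:
    """Ask GPT-4 which answer is better; return 'A', 'B', or 'TIE' (illustrative prompt)."""
    prompt = (
        f"Question: {question}\n\nAnswer A: {answer_a}\n\nAnswer B: {answer_b}\n\n"
        "Which answer is better? Reply with exactly one of: A, B, TIE."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    verdict = resp.choices[0].message.content.strip().upper()
    return verdict if verdict in {"A", "B", "TIE"} else "TIE"

# Tally wins and ties over a small evaluation set (the pairs here are placeholders).
pairs = [("What is 2+2?", "4", "Four, i.e., 4.")]
tally = Counter(judge_pair(q, a, b) for q, a, b in pairs)
print(tally)
```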
2309.02033#66
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
66
GPT-4 vs GPT-N: how would agent design change with more powerful LLMs? Agent design is a moving target as new LLM capabilities emerge with scale (Wei et al., 2022a). For example, earlier language models such as GPT-2 (Radford et al., 2019) would not support LLM agents — indeed, work at that time needed to combine GPT-2 with reinforcement learning for action generation (Yao et al., 2020); GPT-3 (Brown et al., 2020) unlocked flexible few-shot and zero-shot reasoning for NLP tasks; while only GPT-4 (OpenAI, 2023a) starts to afford more reliable self-evaluation (Saunders et al., 2022; Shinn et al., 2023; Yao et al., 2023) and self-refinement (Madaan et al., 2023; Chen et al., 2023b). Will future LLMs further reduce the need for coded rules and extra-learned models? Will this necessitate changes to the CoALA framework? As a thought experiment, imagine GPT-N could “simulate” memory, grounding, learning, and decision-making in context: list all the possible actions, simulate and evaluate each one, and maintain its entire
2309.02427#66
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
67
For a thorough and comparative performance evaluation, we used the GPT-4 API for pairwise scoring and tallying of wins and ties.

Table 3: Results of pair-wise model comparisons using GPT4 scoring. “CFT”, “EN” and “ZH” indicate meta-tags as Chat Fine-Tuning, English, and Chinese text respectively. (Each consecutive pair of rows is one pairwise comparison; the Tie count is shared within a pair.)

Model                                 Tuning Data        #Samples   Win   Tie
LLaMA-7B [34]                         Alpaca             52k        16    100
                                      Data-Juicer        40k        44    100
                                      Random (CFT, EN)   40k        19    105
                                      Data-Juicer        40k        36    105
LLaMA2-7B (Chinese, FlagAlpha [42])   Belle              543k       28    99
                                      Data-Juicer        52k        33    99
                                      Random (CFT, ZH)   52k        19    96
                                      Data-Juicer        52k        45    96
2309.02033#67
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
67
memory, grounding, learning, and decision-making in context: list all the possible actions, simulate and evaluate each one, and maintain its entire long-term memory explicitly in a very long context. Or even more boldly: perhaps GPT-N+1 succeeds at generating the next action by simulating these implicitly in neurons, without any intermediate reasoning in context. While these extreme cases seem unlikely in the immediate future, incremental improvements may alter the importance of different CoALA components. For example, a longer context window could reduce the importance of long-term memory, while more powerful reasoning for internal evaluation and simulation could allow longer-horizon planning. In general, LLMs are not subject to biological limitations (Griffiths, 2020), and their emergent properties have been difficult to predict. Nonetheless, CoALA – and cognitive science more generally – may still help systematically organize tasks where language agents succeed or fail, and suggest code-based procedures to complement a given LLM on a given task. Even in the most extreme
2309.02427#67
Cognitive Architectures for Language Agents
2309.02033
68
The results are consolidated in Table 3, from which we can see that LLMs fine-tuned with Data-Juicer recipes consistently perform well. Firstly, compared to LLMs trained on the competitive open fine-tuning datasets Alpaca [92] and Belle [45], LLMs trained on Data-Juicer data gain higher win rates (up to 17.5% higher for the English case) while using less data (up to a 90.4% reduction for the Chinese case). Secondly, compared to LLMs trained on datasets built with a trivial processing strategy (mixture by random sampling), LLMs trained on Data-Juicer data still gain higher win rates (up to 14.4% higher), which again attests to the effectiveness of our enhanced sampling strategy and the quality of Data-Juicer recipes.
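As a quick sanity check on two of these headline numbers, the snippet below recomputes them from the Table 3 entries, under the assumption (consistent with the table, where wins plus ties sum to 160 for every pair) that each pairwise comparison covers 160 prompts.

```python
# A small worked check of two headline numbers, using the Table 3 entries for
# the English Alpaca comparison and the Chinese Belle comparison.
# Assumption: each pairwise comparison covers 160 prompts (wins plus ties in
# Table 3 sum to 160 for every pair), so win rate = wins / 160.

def win_rate_gap(win_ours, win_baseline, ties):
    total = win_ours + win_baseline + ties
    return (win_ours - win_baseline) / total

# English case: Alpaca (16 wins) vs. Data-Juicer (44 wins), 100 ties.
print(f"{win_rate_gap(44, 16, 100):.1%}")   # -> 17.5%

# Chinese case: Belle uses 543k samples, the Data-Juicer recipe uses 52k.
print(f"{1 - 52 / 543:.1%}")                # -> 90.4%
```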
2309.02033#68
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
68
case, where GPT implements all of CoALA’s mechanisms in neurons, it may be helpful to leverage CoALA as a conceptual guide to discover and interpret those implicit circuits. Of course, as discussed in Section 6, agent use cases will also help discover, define and shape LLM capabilities. Similar to how chips and computer architectures have co-evolved, language model and agent design should also develop a reciprocal path forward. # 8 Conclusion We proposed Cognitive Architectures for Language Agents (CoALA), a conceptual framework to systematically understand and build language agents. Our framework draws inspiration from the rich history of symbolic artificial intelligence and cognitive science, connecting decades-old insights to frontier research on large language models. We believe this approach provides a path towards developing more general and more human-like artificial intelligence. # Acknowledgements
2309.02427#68
Cognitive Architectures for Language Agents
2309.02033
69
7.2 Processing Data Efficiently and Effectively 7.2.1 End-to-End System Performance. To evaluate the processing performance of Data-Juicer, we compare it with two SOTA baselines: TogetherAI’s RedPajama [24] and AllenAI’s Dolma [86]. A more detailed introduction to and comparison with these baselines can be found in Appendix B.3.4. For a fair comparison, we use their official code repositories and run Data-Juicer on data recipes with the same OPs to process the Books, arXiv, and C4 datasets, which vary in data size and distribution and involve diverse processing OPs. We conduct multiple rounds of experiments with different numbers of processes (np=[32, 64, 128]) and monitor several core metrics, including processing time and average memory usage. The monitored time is the wall-clock time of the whole processing pipeline. The average memory usage is sampled every second and aggregated across all relevant processes. For more experimental details, please refer to Appendix B.3.3.
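This kind of measurement can be reproduced with standard tooling. Below is a minimal sketch (not Data-Juicer's or the baselines' actual monitoring code) that records a pipeline's wall-clock time and samples the aggregated resident memory of the main process and its workers once per second via psutil; run_pipeline is a placeholder for the job under test.

```python
# Minimal sketch of measuring the wall-clock time of a pipeline and sampling the
# aggregated resident memory (RSS) of the main process and its child processes
# once per second, using psutil. This is an illustration, not the paper's code.
import time
import threading
import psutil

def monitor_pipeline(run_pipeline):
    samples = []          # per-second aggregated RSS, in bytes
    stop = threading.Event()

    def sample_memory():
        proc = psutil.Process()
        while not stop.is_set():
            rss = 0
            for p in [proc] + proc.children(recursive=True):
                try:
                    rss += p.memory_info().rss
                except psutil.NoSuchProcess:
                    pass  # a worker may exit between listing and sampling
            samples.append(rss)
            time.sleep(1)

    sampler = threading.Thread(target=sample_memory, daemon=True)
    start = time.time()
    sampler.start()
    run_pipeline()                      # the data processing job under test
    elapsed = time.time() - start
    stop.set()
    sampler.join()

    avg_gib = sum(samples) / max(len(samples), 1) / 2**30
    return elapsed, avg_gib
```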
2309.02033#69
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
69
# Acknowledgements We thank Harrison Chase, Baian Chen, Khanh Nguyen, Ofir Press, Noah Shinn, Jens Tuyls for proofreading and valuable feedback, and other members from the Princeton NLP Group and Princeton Computational Cognitive Science Lab for helpful discussions. SY and KN acknowledge support from an Oracle Collaborative Research award and the National Science Foundation under Grant No. 2239363. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. SY is also supported by the Harold W. Dodds Fellowship from Princeton. TS is supported by the National Defense Science and Engineering (NDSEG) Graduate Fellowship Program. # References S. Adams, I. Arel, J. Bach, R. Coop, R. Furlan, B. Goertzel, J. S. Hall, A. Samsonovich, M. Scheutz, M. Schlesinger, et al. Mapping the landscape of human-level artificial general intelligence. AI magazine, 33 (1):25–42, 2012.
2309.02427#69
Cognitive Architectures for Language Agents
2309.02033
70
The experimental results are summarized in Figure 8. Notably, for all datasets and various numbers of processes, Data-Juicer requires an average of 50.6% less processing time and 55.1% less memory. In particular, it saves at most 88.7% of the processing time for the arXiv dataset compared with the baseline. Also, Data-Juicer takes only up to 22.9% of the baseline’s memory to process the Books dataset, mainly because the baseline’s processing procedure loads the whole dataset at once. Overall, Data-Juicer

[Figure 8 plots processing time against average memory usage (GiB) for the Books, arXiv, and C4 (subset) datasets under np=32, 64, and 128, comparing Data-Juicer with the RedPajama and Dolma baselines.]

Figure 8: Comparison of stand-alone performance in various data sizes and processing configurations.
2309.02033#70
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
70
M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022. J. R. Anderson and C. Lebiere. The Newell test for a theory of cognition. Behavioral and Brain Sciences, 26 (5):587–601, 2003. J. Andreas. Language models as agent models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5769–5779, 2022. R. C. Atkinson and R. M. Shiffrin. Human memory: A proposed system and its control processes. In Psychology of Learning and Motivation, volume 2, pages 89–195. Elsevier, 1968.
2309.02427#70
Cognitive Architectures for Language Agents
2309.02033
71
Figure 8: Comparison of stand-alone performance in various data sizes and processing configurations. effectively alleviates the bottleneck caused by the I/O of cache files, and achieves better end-to-end time-space efficiency than the baselines. 7.2.2 Effect of Context Management, OP Fusion, and Reordering. As introduced in Sec. 6, Data-Juicer employs dedicated optimizations to minimize redundant computations and save processing time. To examine their effect, we prepared three test datasets of varied sizes and sample counts. Each dataset goes through the same processing recipe, which includes 14 OPs (5 Mappers, 8 Filters, and 1 Deduplicator), with 5 of these OPs being fusible. We conduct comparison experiments with 4 processes, except for the largest dataset, where we utilize 50 processes to assess whether these techniques remain effective at larger scales.
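To illustrate the idea behind OP fusion, the sketch below groups Filters that depend on the same expensive intermediate (here, the tokenized words of a sample) so that the shared context is computed once per sample and reused. The class and field names are hypothetical and do not reflect Data-Juicer's actual OP interface.

```python
# Illustrative sketch of OP fusion with hypothetical classes (not Data-Juicer's
# API): filters that require the same expensive context, here the tokenized
# words of a sample, are grouped and run in a single pass so the tokenization
# is computed once per sample and shared among them.

class WordNumFilter:
    required_context = "words"
    def keep(self, sample, ctx):
        return 10 <= len(ctx["words"]) <= 10000

class WordRepetitionFilter:
    required_context = "words"
    def keep(self, sample, ctx):
        words = ctx["words"]
        return len(set(words)) / max(len(words), 1) > 0.2

class FusedFilter:
    """Run several filters that share one context over a single tokenization."""
    def __init__(self, filters):
        self.filters = filters
    def keep(self, sample, ctx=None):
        ctx = {"words": sample["text"].split()}   # computed once, reused by all
        return all(f.keep(sample, ctx) for f in self.filters)

def fuse(ops):
    """Group consecutive filters that need the 'words' context into one fused OP."""
    fused, group = [], []
    for op in ops:
        if getattr(op, "required_context", None) == "words":
            group.append(op)
        else:
            if group:
                fused.append(FusedFilter(group))
                group = []
            fused.append(op)
    if group:
        fused.append(FusedFilter(group))
    return fused
```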
2309.02033#71
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
71
A. D. Baddeley and G. Hitch. Working memory. In Psychology of Learning and Motivation, volume 8, pages 47–89. Elsevier, 1974. Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022. Y. Bisk, D. Marcu, and W. Wong. Towards a dataset for human computer communication via grounded language acquisition. In Workshops at the Thirtieth AAAI Conference on Artificial Intelligence, 2016. E. Biyik and M. Palan. Asking easy questions: A user-friendly approach to active reward learning. In Proceedings of the 3rd Conference on Robot Learning, 2019. C. Blundell, B. Uria, A. Pritzel, Y. Li, A. Ruderman, J. Z. Leibo, J. Rae, D. Wierstra, and D. Hassabis. Model-free episodic control. arXiv preprint arXiv:1606.04460, 2016.
2309.02427#71
Cognitive Architectures for Language Agents
2309.02033
72
[Figure 9 shows bars of normalized time consumption for "All OPs before fusion", "All OPs after fusion", "Fusible OPs before fusion", and "Fusible OPs after fusion" across different dataset sizes and numbers of processes: 17MB-np=4, 169MB-np=4, 21GB-np=4, and 21GB-np=50.]

Figure 9: Time comparison before and after OP fusion.

The results are shown in Figure 9, where both the normalized and actual time consumption for each experimental setup are indicated. The results signify that our optimization strategy effectively saves up to 24.91% of the total time for the entire process and saves at most 42.04% of the time for those fusible OPs. In addition, the findings showcase that the optimization performs efficiently regardless of variations in dataset sizes or the number of processes utilized.
2309.02033#72
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
72
S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driessche, J.-B. Lespiau, B. Damoc, A. Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240, 2022. S. Branavan, D. Silver, and R. Barzilay. Learning to win by reading manuals in a Monte-Carlo framework. Journal of Artificial Intelligence Research, 43:661–704, 2012. M. Braverman, X. Chen, S. Kakade, K. Narasimhan, C. Zhang, and Y. Zhang. Calibration, entropy rates, and memory in language models. In International Conference on Machine Learning, pages 1089–1099, 2020. G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym, 2016.
2309.02427#72
Cognitive Architectures for Language Agents
2309.02033
73
7.2.3 Effect of Quality Classifiers. As described in Section 5.2, Data-Juicer provides built-in quality classifiers for LLM data processing, and here we present several empirical results regarding their performance. Specifically, we follow the training procedure of the proprietary quality classifier used in GPT-3 [9] and extend its training pipeline to include Chinese text. In the evaluation of the collected data, we found that our reimplementation of the GPT-3 classifier and its Chinese adaptation achieved F1 scores of 97.47% and 98.64%, respectively. Further training and evaluation details are provided in Appendix B.1. # Table 4: Comparison of keeping ratio on CommonCrawl.
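For context, the GPT-3 quality classifier is described as a logistic regression over hashed bag-of-words features, trained to separate high-quality reference documents from raw CommonCrawl text. The sketch below shows that general recipe with scikit-learn; it is only an illustration and may differ from Data-Juicer's built-in implementation.

```python
# Minimal scikit-learn sketch of a GPT-3-style quality classifier: a linear
# model over hashed bag-of-words features, trained to separate high-quality
# reference documents (label 1) from raw CommonCrawl text (label 0).
# Data-Juicer's actual built-in classifiers may be implemented differently.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

def train_quality_classifier(pos_docs, neg_docs):
    texts = list(pos_docs) + list(neg_docs)
    labels = [1] * len(pos_docs) + [0] * len(neg_docs)
    clf = make_pipeline(
        HashingVectorizer(n_features=2**18, alternate_sign=False),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, labels)
    return clf

def doc_score(clf, text):
    """Probability that a document looks like the high-quality reference set."""
    return clf.predict_proba([text])[0, 1]

def evaluate(clf, texts, labels):
    return f1_score(labels, clf.predict(texts))
```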
2309.02033#73
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
73
A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022. A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, X. Chen, K. Choromanski, T. Ding, D. Driess, A. Dubey, C. Finn, et al. RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023. T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
2309.02427#73
Cognitive Architectures for Language Agents
2309.02033
74
# Table 4: Comparison of keeping ratio on CommonCrawl.

| Quality Classifier | Original GPT-3 | Our GPT-3 | Chinese |
| --- | --- | --- | --- |
| Keeping Ratio @ label | - | 3.22% | 1.81% |
| Keeping Ratio @ Pareto | 1.30% | 1.41% | |

Furthermore, we assess the filtering effectiveness of these classifiers by comparing their keeping ratios on CommonCrawl. The results are summarized in Table 4, where we employ two data keeping methods used in GPT-3: (1) label: doc_score > 0.5; and (2) Pareto [9]: doc_score > 1 − np.random.pareto(α), α = 9. The keeping ratios of our re-implemented GPT-3 quality classifiers are generally in line with the original one, and our Chinese extended version maintains a keeping ratio comparable to that of the English version.
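A minimal sketch of the two keeping rules quoted above, assuming a scalar doc_score in [0, 1]; the helper names and the synthetic score distribution are hypothetical, and the printed ratios do not reproduce the CommonCrawl numbers in Table 4.

```python
import numpy as np

def keep_by_label(doc_score: float) -> bool:
    # GPT-3 "label" rule: keep documents whose quality score exceeds 0.5.
    return doc_score > 0.5

def keep_by_pareto(doc_score: float, alpha: float = 9.0) -> bool:
    # GPT-3 "Pareto" rule: keep a document if its score exceeds a randomly drawn
    # threshold 1 - X with X ~ Pareto(alpha), so lower-scoring documents are
    # still occasionally kept, preserving some distributional diversity.
    return doc_score > 1.0 - np.random.pareto(alpha)

# Hypothetical score distribution, only to show how a keeping ratio is measured.
scores = np.random.beta(2, 50, size=100_000)
label_ratio = np.mean([keep_by_label(s) for s in scores])
pareto_ratio = np.mean([keep_by_pareto(s) for s in scores])
print(f"keeping ratio @ label: {label_ratio:.2%}, @ Pareto: {pareto_ratio:.2%}")
```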
2309.02033#74
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
74
C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games, 4(1):1–43, 2012. F. Callaway, B. van Opheusden, S. Gul, P. Das, P. M. Krueger, T. L. Griffiths, and F. Lieder. Rational use of cognitive resources in human planning. Nature Human Behaviour, 6(8):1112–1125, 2022. C.-M. Chan, W. Chen, Y. Su, J. Yu, W. Xue, S. Zhang, J. Fu, and Z. Liu. Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201, 2023.
2309.02427#74
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
75
7.2.4 System Scalability. To verify the enhanced scalability of our system (as detailed in Sec. 6), we carry out a series of experiments to measure data processing times across multiple servers. Specifically, we adopt the StackExchange and arXiv datasets from RedPajama. The total sizes of the StackExchange and arXiv datasets are 65GB and 140GB in jsonl format, respectively. We compare the performance of Data-Juicer on Ray, Data-Juicer on Beam (using the Flink backend), and the original Data-Juicer in these tests. More details about the implementation and experimental platforms are in Appendix B.3.5. [Figure: processing time in seconds (log scale) versus number of nodes, with curves for StackExchange and arXiv under the original Data-Juicer, Data-Juicer on Ray, and Data-Juicer on Beam.] Figure 10: Processing time with varying number of nodes. Data-Juicer accelerates processing in distributed mode.
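For intuition only, a hypothetical sketch of distributing one Filter-style operation over a multi-node cluster with Ray Data; this is not Data-Juicer's actual Ray integration, and the input/output paths and length thresholds are made up for illustration.

```python
import ray
import ray.data  # Ray Data API for distributed datasets

ray.init(address="auto")  # attach to an existing multi-node Ray cluster (assumed already running)

def length_filter(df):
    # Keep samples whose text length falls inside a hypothetical range,
    # mimicking a single Filter-style operation applied batch-by-batch.
    return df[df["text"].str.len().between(64, 20_000)]

ds = ray.data.read_json("stackexchange/*.jsonl")           # hypothetical input path
ds = ds.map_batches(length_filter, batch_format="pandas")  # work is spread across the cluster
ds.write_json("processed/")                                # hypothetical output path
```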
2309.02033#75
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
75
B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler. Open-vocabulary queryable scene representations for real world planning. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11509–11522, 2023a. D. Chen and R. Mooney. Learning to interpret natural language navigation instructions from observations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 25, pages 859–865, 2011. D. Chen, A. Fisch, J. Weston, and A. Bordes. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051, 2017. M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. X. Chen, M. Lin, N. Schärli, and D. Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023b.
2309.02427#75
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
76
Figure 10: Processing time with varying number of nodes. Data-Juicer accelerates processing in distributed mode. The experiment results are illustrated in Figure 10. Notably, thanks to various optimizations, our original system outperforms both Ray and Beam in the single server scenario. Moreover, as the number of nodes increases, the processing time of our system on Ray decreases proportionally (up to 87.4% and 84.6% time reduction on StackExchange and arXiv respectively), demonstrating its effective scalability across multiple servers. Nonetheless, the processing time of Data-Juicer on Beam remains almost unchanged as the number of nodes increases. Upon further investigation of the processing workflow, we found that the limited scalability of Data-Juicer on Beam is primarily constrained by the data loading component of Beam, which leads to a dominant file loading time ratio and requires substantial development changes for adaptation and further performance optimization.
2309.02033#76
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
76
Y. Chen, L. Yuan, G. Cui, Z. Liu, and H. Ji. A close look into the calibration of pre-trained language models. arXiv preprint arXiv:2211.00151, 2022. N. Chomsky. Three models for the description of language. IRE Transactions on information theory, 2(3): 113–124, 1956. A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017. A. Church. A set of postulates for the foundation of logic. Annals of mathematics, pages 346–366, 1932.
2309.02427#76
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
77
7.3 Empowering Real-world Products Data-Juicer has been adopted by several real-world LLM-based products, playing a crucial role in data understanding and processing. It evolves continually through the integration of feedback from real-world demands. A notable testament to its utility is its contribution to the development of several industrial LLMs from Alibaba Cloud’s Tongyi suite [21], such as Dianjin, which is used for financial analysis; Zhiwen, a reading assistance tool; and Xingchen, which specializes in AI character customization. Moreover, the data processing capabilities of Data-Juicer have been incorporated into Alibaba Cloud’s Platform for AI (PAI) [22] to support more real-world applications.
2309.02033#77
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
77
A. Church. A set of postulates for the foundation of logic. Annals of mathematics, pages 346–366, 1932. M.-A. Côté, A. Kádár, X. Yuan, B. Kybartas, T. Barnes, E. Fine, J. Moore, M. Hausknecht, L. El Asri, M. Adada, et al. Textworld: A learning environment for text-based games. In Computer Games: 7th Workshop, CGW 2018, pages 41–75. Springer, 2019. A. Creswell, M. Shanahan, and I. Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. In The Eleventh International Conference on Learning Representations, 2023. G. Dagan, F. Keller, and A. Lascarides. Dynamic Planning with a LLM. arXiv preprint arXiv:2308.06391, 2023. I. Dasgupta, C. Kaeser-Chen, K. Marino, A. Ahuja, S. Babayan, F. Hill, and R. Fergus. Collaborating with language models for embodied reasoning. In Second Workshop on Language and Reinforcement Learning, 2022.
2309.02427#77
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
78
Our system’s fine-grained OP abstraction, coupled with the extensive tools for LLM data processing, empowers users to easily explore and refine data recipes tailored to the distinct textual attributes of diverse use cases. For example, within the financial sector, it is crucial to accommodate data that includes numerous digits and standardized terminology. In the realm of reading assistance, the focus shifts to data characterized by extended text lengths and coherent structures. Conversely, character customization demands data rich in dialogue and varied enough to support personalized services. Data-Juicer adeptly meets these varied demands by facilitating the combination of distinct OPs, hyper-parameters, and tools that adapt to the unique needs of each real-world application.
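To make the idea of a use-case-specific recipe concrete, here is a framework-agnostic sketch; the filter classes, thresholds, and recipe names are hypothetical stand-ins rather than Data-Juicer's built-in operators or configuration schema, and only illustrate how the same operator types can be recombined with different hyper-parameters per domain.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class LengthFilter:
    # Keep samples whose character length lies within [min_len, max_len].
    min_len: int
    max_len: int
    def __call__(self, text: str) -> bool:
        return self.min_len <= len(text) <= self.max_len

@dataclass
class DigitRatioFilter:
    # Keep samples whose fraction of digit characters does not exceed max_ratio.
    max_ratio: float
    def __call__(self, text: str) -> bool:
        digits = sum(c.isdigit() for c in text)
        return digits / max(len(text), 1) <= self.max_ratio

def apply_recipe(samples: Iterable[str], recipe: List[Callable[[str], bool]]) -> List[str]:
    # A sample is kept only if every operator in the recipe accepts it.
    return [s for s in samples if all(op(s) for op in recipe)]

# Hypothetical recipes: financial text tolerates many digits, while
# reading-assistance text favors long, digit-light documents.
financial_recipe = [LengthFilter(32, 8_192), DigitRatioFilter(max_ratio=0.5)]
reading_recipe = [LengthFilter(2_048, 65_536), DigitRatioFilter(max_ratio=0.1)]
```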
2309.02033#78
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
78
X. Deng, Y. Gu, B. Zheng, S. Chen, S. Stevens, B. Wang, H. Sun, and Y. Su. Mind2Web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023. N. Derbinsky, J. Li, and J. Laird. A multi-domain evaluation of scaling in a general episodic memory. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 26, pages 193–199, 2012. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT (1), 2019. D. Dohan, W. Xu, A. Lewkowycz, J. Austin, D. Bieber, R. G. Lopes, Y. Wu, H. Michalewski, R. A. Saurous, J. Sohl-Dickstein, et al. Language model cascades. arXiv preprint arXiv:2207.10342, 2022.
2309.02427#78
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
79
8 CONCLUSIONS To conclude, the introduction of Data-Juicer reflects a new step forward in the field of data-centric LLM development. By offering a user-friendly, versatile, and efficient solution, Data-Juicer effectively addresses the existing limitations of open-source tools for LLM data processing, which lean towards data reproducibility at the expense of adaptability and usability. The decoupling of traditionally linked components fosters greater abstraction and modularity, and the organic arrangement of over 50 built-in operators, dedicated tools, and abundant data recipes serves diverse needs for LLM pre-training and fine-tuning. Beyond supporting auto-evaluation, Data-Juicer is carefully optimized and seamlessly integrated with ecosystems for LLM training and evaluation, as well as distributed computing. Empirical validation bears witness to substantial improvements in LLMs’ performance using Data-Juicer’s data recipes, and shows advances in system efficiency and scalability. As such, Data-Juicer stands as a compelling addition to the toolkit for LLM data processing, which we hope can shed light on broader research in the field of data-centric LLM development. # REFERENCES
2309.02033#79
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
79
D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. PALM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. Y. Du, S. Li, A. Torralba, J. B. Tenenbaum, and I. Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023. A. Ecoffet, J. Huizinga, J. Lehman, K. O. Stanley, and J. Clune. Go-explore: a new approach for hard-exploration problems. arXiv preprint arXiv:1901.10995, 2019.
2309.02427#79
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
80
# REFERENCES [1] Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: an open large language model with state-of-the-art performance. (2023). [2] Apache Arrow. 2023. https://arrow.apache.org/ [3] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Benjamin Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. 2021. A General Language Assistant as a Laboratory for Alignment. CoRR abs/2112.00861 (2021).
2309.02033#80
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
80
K. Ellis, C. Wong, M. Nye, M. Sablé-Meyer, L. Morales, L. Hewitt, L. Cary, A. Solar-Lezama, and J. B. Tenenbaum. Dreamcoder: Bootstrapping inductive program synthesis with wake-sleep library learning. In Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation, pages 835–850, 2021. S. Feng, C. Y. Park, Y. Liu, and Y. Tsvetkov. From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair nlp models. arXiv preprint arXiv:2305.08283, 2023. D. Ganguli, A. Askell, N. Schiefer, T. Liao, K. Lukošiūtė, A. Chen, A. Goldie, A. Mirhoseini, C. Olsson, D. Hernandez, et al. The capacity for moral self-correction in large language models. arXiv preprint arXiv:2302.07459, 2023.
2309.02427#80
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
81
[4] Stephen H. Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M. Saiful Bari, Thibault Févry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-David, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Alan Fries, Maged Saeed AlShaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir R. Radev, Mike Tian-Jian Jiang, and Alexander M. Rush. 2022. PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts. In ACL (demo). 93–104. [5] Apache Beam. 2023. https://beam.apache.org/ [6] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward
2309.02033#81
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
81
C. Gao, X. Lan, Z. Lu, J. Mao, J. Piao, H. Wang, D. Jin, and Y. Li. S3: Social-network simulation system with large language model-empowered agents. arXiv preprint arXiv:2307.14984, 2023. T. Gao, A. Fisch, and D. Chen. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723, 2020. S. J. Gershman, E. J. Horvitz, and J. B. Tenenbaum. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245):273–278, 2015. T. L. Griffiths. Understanding human intelligence through human limitations. Trends in Cognitive Sciences, 24(11):873–883, 2020. J. Gu, Y. Wang, K. Cho, and V. O. Li. Search engine guided neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
2309.02427#81
Cognitive Architectures for Language Agents
2309.02427
82
L. Guan, K. Valmeekam, S. Sreedharan, and S. Kambhampati. Leveraging pre-trained large language models to construct and utilize world models for model-based task planning. arXiv preprint arXiv:2305.14909, 2023. Guidance. Guidance, 2023. URL https://github.com/guidance-ai/guidance. I. Gur, H. Furuta, A. Huang, M. Safdari, Y. Matsuo, D. Eck, and A. Faust. A real-world webagent with planning, long context understanding, and program synthesis. arXiv preprint arXiv:2307.12856, 2023. K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang. Retrieval augmented language model pre-training. In International conference on machine learning, pages 3929–3938, 2020. J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, T. Pfaff, T. Weber, L. Buesing, and P. W. Battaglia. Combining q-learning and search with amortized value estimates. In International Conference on Learning Representations, 2019.
2309.02427#82
Cognitive Architectures for Language Agents
2309.02033
83
[7] Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An Open-Source Autoregressive Language Model. CoRR abs/2204.06745 (2022). [8] Andrei Z Broder, Moses Charikar, Alan M Frieze, and Michael Mitzenmacher. 2000. Min-Wise Independent Permutations. J. Comput. System Sci. 60, 3 (2000), 630–659.
2309.02033#83
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
83
A. W. Hanjie, V. Zhong, and K. Narasimhan. Grounding language to entities and dynamics for generalization in reinforcement learning. In International Conference on Machine Learning (ICML), 2021. S. Hao, Y. Gu, H. Ma, J. J. Hong, Z. Wang, D. Z. Wang, and Z. Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023. M. Hasan, C. Ozel, S. Potter, and E. Hoque. Sapien: Affective virtual agents powered by large language models. arXiv preprint arXiv:2308.03022, 2023. P. Haslum, N. Lipovetzky, D. Magazzeni, C. Muise, R. Brachman, F. Rossi, and P. Stone. An introduction to the planning domain definition language, volume 13. Springer, 2019. M. Hausknecht, P. Ammanabrolu, M.-A. Côté, and X. Yuan. Interactive fiction games: A colossal adventure. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7903–7910, 2020.
2309.02427#83
Cognitive Architectures for Language Agents
2309.02033
84
[9] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In NeurIPS. [10] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. 2023. Sparks of Artificial General Intelligence: Early experiments with GPT-4. CoRR abs/2303.12712 (2023).
2309.02033#84
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
84
S. Hong, X. Zheng, J. Chen, Y. Cheng, C. Zhang, Z. Wang, S. K. S. Yau, Z. Lin, L. Zhou, C. Ran, et al. Metagpt: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023. J. Huang, S. S. Gu, L. Hou, Y. Wu, X. Wang, H. Yu, and J. Han. Large language models can self-improve. arXiv preprint arXiv:2210.11610, 2022a. S. Huang, Z. Jiang, H. Dong, Y. Qiao, P. Gao, and H. Li. Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model. arXiv preprint arXiv:2305.11176, 2023. W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147, 2022b.
2309.02427#84
Cognitive Architectures for Language Agents
2309.02033
85
[11] Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. 2023. Large Language Models as Tool Makers. CoRR abs/2305.17126 (2023). [12] Paris Carbone, Asterios Katsifodimos, Stephan Ewen, Volker Markl, Seif Haridi, and Kostas Tzoumas. 2015. Apache Flink: Stream and batch processing in a single engine. IEEE Data Eng. Bull. 38, 4 (2015). [13] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang. 2023. Quantifying Memorization Across Neural Language Models. In ICLR. [14] Moses S. Charikar. 2002. Similarity Estimation Techniques from Rounding Algorithms. In STOC. 380–388. [15] ChatGLM2-6B. 2023. https://github.com/THUDM/ChatGLM2-6B [16] ChatLLaMA. 2023. https://github.com/nebuly-ai/nebuly/tree/main/optimization/chatllama
2309.02033#85
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
85
W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022c. A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1–35, 2017. G. Irving, P. Christiano, and D. Amodei. AI safety via debate. arXiv preprint arXiv:1805.00899, 2018. G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021. Z. Jiang, J. Araki, H. Ding, and G. Neubig. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962–977, 2021.
2309.02427#85
Cognitive Architectures for Language Agents
2309.02427
86
Z. Jin, S. Levine, F. G. Adauto, O. Kamal, M. Sap, M. Sachan, R. Mihalcea, J. B. Tenenbaum, and B. Schölkopf. When to make exceptions: Exploring language models as accounts of human moral judgment. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems, 2022. S. Jinxin, Z. Jiabao, W. Yilei, W. Xingjiao, L. Jiawen, and H. Liang. Cgmi: Configurable general multi-agent interaction framework. arXiv preprint arXiv:2308.12503, 2023. R. M. Jones, J. E. Laird, P. E. Nielsen, K. J. Coulter, P. Kenny, and F. V. Koss. Automated intelligent pilots for combat flight simulation. AI magazine, 20(1):27–27, 1999. D. Jurafsky. Speech & language processing. Pearson Education India, 2000.
2309.02427#86
Cognitive Architectures for Language Agents
2309.02033
87
[18] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021.
2309.02033#87
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
87
D. Jurafsky. Speech & language processing. Pearson Education India, 2000. O. Khattab, K. Santhanam, X. L. Li, D. Hall, P. Liang, C. Potts, and M. Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. arXiv preprint arXiv:2212.14024, 2022. URL https://github.com/stanfordnlp/dspy. G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023. J. R. Kirk and J. E. Laird. Interactive task learning for simple games. Advances in Cognitive Systems, 3 (13-30):5, 2014. J. R. Kirk, W. Robert, P. Lindes, and J. E. Laird. Improving Knowledge Extraction from LLMs for Robotic Task Learning through Agent Analysis. arXiv preprint arXiv:2306.06770, 2023.
2309.02427#87
Cognitive Architectures for Language Agents
2309.02427
88
K. R. Koedinger, J. R. Anderson, W. H. Hadley, M. A. Mark, et al. Intelligent tutoring goes to school in the big city. International Journal of Artificial Intelligence in Education, 8(1):30–43, 1997. T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213, 2022. I. Kotseruba and J. K. Tsotsos. 40 years of cognitive architectures: core cognitive abilities and practical applications. Artificial Intelligence Review, 53(1):17–94, 2020. C. Laidlaw, S. Russell, and A. Dragan. Bridging rl theory and practice with the effective horizon. arXiv preprint arXiv:2304.09853, 2023. J. E. Laird. The Soar cognitive architecture. MIT press, 2019. J. E. Laird. Introduction to Soar. arXiv preprint arXiv:2205.03854, 2022. J. E. Laird, P. S. Rosenbloom, and A. Newell. Chunking in Soar: The anatomy of a general learning
2309.02427#88
Cognitive Architectures for Language Agents
2309.02033
89
[20] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling Instruction-Finetuned Language Models. CoRR abs/2210.11416 (2022). [21] Alibaba Cloud. 2023. https://tongyi.aliyun.com [22] Alibaba Cloud. 2023. https://www.alibabacloud.com/en/product/machine-learning [23] Yann Collet and Murray Kucherawy. 2021. Zstandard Compression and the ’application/zstd’ Media Type. RFC 8878.
2309.02033#89
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
89
J. E. Laird, P. S. Rosenbloom, and A. Newell. Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1:11–46, 1986. J. E. Laird, A. Newell, and P. S. Rosenbloom. Soar: An architecture for general intelligence. Artificial Intelligence, 33(1):1–64, 1987. J. E. Laird, K. R. Kinkade, S. Mohan, and J. Z. Xu. Cognitive robotics using the Soar cognitive architecture. In CogRob @ AAAI, 2012. B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like people, 2016. LangChain. LangChain, 2022. URL http://www.langchain.com. H. Le, Y. Wang, A. D. Gotmare, S. Savarese, and S. C. H. Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314–21328, 2022.
2309.02427#89
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
90
[24] Together Computer. 2023. RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset. https://github.com/togethercomputer/RedPajama-Data [25] Michael J Cormier, Jonathan R Belyeu, Brent S Pedersen, Joseph Brown, Johannes Köster, and Aaron R Quinlan. 2021. Go Get Data (GGD) is a framework that facilitates reproducible access to genomic data. Nature Communications 12, 1 (2021), 2151. [26] Common Crawl. 2023. https://commoncrawl.org/ [27] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT (1). 4171–4186.
2309.02033#90
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
90
Y. LeCun. A path towards autonomous machine intelligence version 0.9.2, 2022-06-27. Open Review, 62, 2022. P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020. B. Z. Li, W. Chen, P. Sharma, and J. Andreas. LaMPP: Language models as probabilistic priors for perception and action. arXiv preprint arXiv:2302.02801, 2023a. H. Li, Y. Su, D. Cai, Y. Wang, and L. Liu. A survey on retrieval-augmented text generation. arXiv preprint arXiv:2202.01110, 2022a.
2309.02427#90
Cognitive Architectures for Language Agents
2309.02033
91
[28] Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten P. Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen S. Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2022. GLaM: Efficient Scaling of Language Models with Mixture-of-Experts. In ICML. 5547–5569. [29] EleutherAI. 2023. Pythia-1.4B. https://huggingface.co/EleutherAI/pythia-1.4b [30] Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. 2023. Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective. CoRR abs/2305.15408 (2023).
2309.02033#91
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
91
R. Li, L. B. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim, Q. Liu, E. Zheltonozhskii, T. Y. Zhuo, T. Wang, O. Dehaene, M. Davaadorj, J. Lamy-Poirier, J. Monteiro, O. Shliazhko, N. Gontier, N. Meade, A. Zebaze, M.-H. Yee, L. K. Umapathi, J. Zhu, B. Lipkin, M. Oblokulov, Z. Wang, R. Murthy, J. Stillerman, S. S. Patel, D. Abulkhanov, M. Zocca, M. Dey, Z. Zhang, N. Fahmy, U. Bhattacharyya, W. Yu, S. Singh, S. Luccioni, P. Villegas, M. Kunakov, F. Zhdanov, M. Romero, T. Lee, N. Timor, J. Ding, C. Schlesinger, H. Schoelkopf,
2309.02427#91
Cognitive Architectures for Language Agents
2309.02033
92
[31] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. CoRR abs/2101.00027 (2021). [32] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation. [33] Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. In EMNLP (Findings). 3356–3369. [34] Xinyang Geng and Hao Liu. 2023. OpenLLaMA: An Open Reproduction of LLaMA. https://github.com/openlm-research/open_llama
2309.02033#92
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02033
93
[35] Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. 2023. Textbooks Are All You Need. arXiv:2306.11644 [cs.CL] [36] Project Gutenberg. 2023. https://www.gutenberg.org/ [37] Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, and Jun Zhu. 2021. Pre-trained models: Past, present and future. AI Open 2 (2021), 225–250.
2309.02033#93
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
93
Y. Li, D. H. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, Tom Eccles, J. Keeling, F. Gimeno, A. D. Lago, T. Hubert, P. Choy, C. de M. d’Autume, I. Babuschkin, X. Chen, P.-S. Huang, J. Welbl, S. Gowal, Alexey Cherepanov, J. Molloy, D. J. Mankowitz, E. S. Robson, P. Kohli, N. de Freitas, K. Kavukcuoglu, and O. Vinyals. Competition-level code generation with AlphaCode. Science, 378:1092–1097, 2022b. J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 9493–9500, 2023a.
2309.02427#93
Cognitive Architectures for Language Agents
2309.02033
94
[38] Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. 2023. ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings. CoRR abs/2305.11554 (2023). [39] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring Massive Multitask Language Understanding. In ICLR. [40] Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor. CoRR abs/2212.09689 (2022). [41] Technology Innovation Institute. 2023. Falcon-RW-1B. https://huggingface.co/tiiuae/falcon-rw-1b [42] FlagAlpha. 2023. Atom-7B. https://huggingface.co/FlagAlpha/Atom-7B
2309.02033#94
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
94
P. P. Liang, C. Wu, L.-P. Morency, and R. Salakhutdinov. Towards understanding and mitigating social biases in language models. In International Conference on Machine Learning, pages 6565–6576, 2021. T. Liang, Z. He, W. Jiao, X. Wang, Y. Wang, R. Wang, Y. Yang, Z. Tu, and S. Shi. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118, 2023b. F. Lieder and T. L. Griffiths. Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43:e1, 2020. B. Y. Lin, Y. Fu, K. Yang, P. Ammanabrolu, F. Brahman, S. Huang, C. Bhagavatula, Y. Choi, and X. Ren. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. arXiv preprint arXiv:2305.17390, 2023. P. Lindes and J. E. Laird. Toward integrating cognitive linguistics and cognitive language processing. In Proceedings of the 14th International Conference on Cognitive Modeling (ICCM), 2016.
2309.02427#94
Cognitive Architectures for Language Agents
2309.02033
95
[43] Gautier Izacard, Patrick S. H. Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot Learning with Retrieval Augmented Language Models. CoRR abs/2208.03299 (2022). [44] Abhinav Jain, Hima Patel, Lokesh Nagalapatti, Nitin Gupta, Sameep Mehta, Shanmukha Guttula, Shashank Mujumdar, Shazia Afzal, Ruhi Sharma Mittal, and Vitobha Munigala. 2020. Overview and importance of data quality for machine learning tasks. In KDD. 3561–3562.
2309.02033#95
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
95
B. Liu, Y. Jiang, X. Zhang, Q. Liu, S. Zhang, J. Biswas, and P. Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023a. H. Liu, C. Sferrazza, and P. Abbeel. Languages are rewards: Hindsight finetuning using human feedback. arXiv preprint arXiv:2302.02676, 2023b. J. Liu, D. Shen, Y. Zhang, B. Dolan, L. Carin, and W. Chen. What Makes Good In-Context Examples for GPT-3? arXiv preprint arXiv:2101.06804, 2021. P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9), 2023c. ISSN 0360-0300.
2309.02427#95
Cognitive Architectures for Language Agents
2309.02033
96
[45] Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Baochang Ma, and Xiangang Li. 2023. BELLE: Be Everyone’s Large Language model Engine. https://github.com/LianjiaTech/BELLE. [46] jsonargparse. 2023. https://github.com/omni-us/jsonargparse [47] Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating Training Data Mitigates Privacy Risks in Language Models. In ICML. 10697–10707. [48] Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023. OpenAssistant Conversations - Democratizing Large Language Model Alignment. CoRR abs/2304.07327 (2023).
2309.02033#96
Data-Juicer: A One-Stop Data Processing System for Large Language Models
2309.02427
96
R. Liu, J. Wei, S. S. Gu, T.-Y. Wu, S. Vosoughi, C. Cui, D. Zhou, and A. M. Dai. Mind’s eye: Grounded language model reasoning through simulation. In The Eleventh International Conference on Learning Representations, 2023d. R. Liu, R. Yang, C. Jia, G. Zhang, D. Zhou, A. M. Dai, D. Yang, and S. Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023e. LlamaIndex. LlamaIndex, 2023. URL http://www.llamaindex.ai. L. E. Lwakatare, A. Raj, I. Crnkovic, J. Bosch, and H. H. Olsson. Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions. Information and software technology, 127:106368, 2020. Z. Ma, Y. Mei, and Z. Su. Understanding the benefits and challenges of using large language model-based conversational agents for mental well-being support. arXiv preprint arXiv:2307.15810, 2023.
2309.02427#96
Cognitive Architectures for Language Agents
2309.02427
97
S. Macenski, T. Foote, B. Gerkey, C. Lalancette, and W. Woodall. Robot operating system 2: Design, architecture, and uses in the wild. Science Robotics, 7(66):eabm6074, 2022. A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023. A. A. Markov. The theory of algorithms. Trudy Matematicheskogo Instituta Imeni VA Steklova, 42:3–375, 1954. M. G. Mattar and N. D. Daw. Prioritized memory access explains planning and hippocampal replay. Nature Neuroscience, 21(11):1609–1617, 2018.
2309.02427#97
Cognitive Architectures for Language Agents
2309.02033
98
[51] Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Sasko, Quentin Lhoest, Angelina McMillan-Major, Gérard Dupont, Stella Biderman, Anna Rogers, Loubna Ben Allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian Zhu, Daniel van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Ifeoluwa Adelani, Long Phan, Hieu Tran, Ian
2309.02033#98
Data-Juicer: A One-Stop Data Processing System for Large Language Models