Dataset schema (column: type, value/length range):
doi: string (length 10–10)
chunk-id: int64 (values 0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31–31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8–8)
updated: string (length 8–8)
references: list
2309.03409
134
its shape by closely examining its curves and coordinates, then select the correct option.
hyperbaton: Choose the option with the correct adjective order in each sentence, prioritizing specific attributes like size, color, and origin. Place the most specific adjective before the more general ones for precise and standardized ordering across all examples. Ensure accurate alignment of the adjectives based on their respective attributes for consistent and standardized ordering.
logical_deduction_seven_objects: Determine the precise order of the given objects/participants based on the provided information and establish the final ranking accurately, considering all relevant factors, while maintaining logical consistency with maximum efficiency.
movie_recommendation: Choose the most similar option from the choices provided that closely aligns with the given movies’ themes, genres, and impact for the most accurate recommendation possible. Make your selection wisely.
multistep_arithmetic_two: Carefully follow the order of operations to precisely simplify the expressions within parentheses and efficiently find the accurate final answer.
navigate: Always face forward. Take 10 steps forward. Turn right and walk for 5 steps. Then, make a left turn and continue for 9 steps. Proceed by walking 6 steps backward. Finally, turn around and take 200
2309.03409#134
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
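The abstract above describes OPRO’s inner loop only in prose. Below is a minimal Python sketch of that loop under stated assumptions; it is not the code from github.com/google-deepmind/opro, and `call_llm`, `evaluate`, the meta-prompt wording, and all parameter names are hypothetical.

```python
# Hypothetical sketch of the OPRO loop described in the abstract above:
# keep a pool of (solution, score) pairs, show the best ones to the LLM,
# ask it to propose new candidates, score them, and repeat.

from typing import Callable, List, Tuple

def opro_loop(
    call_llm: Callable[[str], List[str]],   # assumed: returns candidate solutions for a prompt
    evaluate: Callable[[str], float],       # assumed: objective value / task accuracy of a solution
    task_description: str,
    num_steps: int = 20,
    top_k: int = 10,
) -> Tuple[str, float]:
    pool: List[Tuple[str, float]] = []      # previously generated solutions with their values
    for _ in range(num_steps):
        # Build the meta-prompt: task description plus the best solutions so far, sorted by score.
        shown = sorted(pool, key=lambda x: x[1])[-top_k:]
        history = "\n".join(f"text: {s}\nscore: {v:.1f}" for s, v in shown)
        meta_prompt = (
            f"{task_description}\n\nBelow are previous solutions with their scores:\n"
            f"{history}\n\nWrite a new solution that achieves a higher score."
        )
        # The LLM acts as the optimizer: it proposes new solutions from the meta-prompt.
        for candidate in call_llm(meta_prompt):
            pool.append((candidate, evaluate(candidate)))  # evaluated and added for the next step
    return max(pool, key=lambda x: x[1])    # best solution found
```

For the prompt-optimization experiments summarized in later rows, `evaluate` would plausibly be the scorer model’s accuracy on a small training split, with held-out test accuracy reported separately (as in the training / test / overall columns of Table 14).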
2309.03409
135
10 steps forward. Turn right and walk for 5 steps. Then, make a left turn and continue for 9 steps. Proceed by walking 6 steps backward. Finally, turn around and take 200 steps. Accurately track your movements, diligently adhere to the given path, and ensure to return to the starting point without any deviations or obstacles.
object_counting: Determine the total count of items mentioned, including all listed items, using an efficient and concise method. State the final count as your answer.
penguins_in_a_table: Identify the animal with the maximum measurement (weight, age, or height) in the table and state its name and species.
reasoning_about_colored_objects: Determine the color of each item in the given scenario and select the correct color option from the provided choices for accurate responses, ensuring utmost precision and completeness.
ruin_names: Choose the option that creatively and hilariously transforms the given artist or movie name.
salient_translation_error_detection: Carefully analyze the translations and select the most suitable option from the given choices to rectify the specific error category, ensuring complete precision, accuracy, and faithful representation of the intended meaning, while considering all relevant information in the source text.
snarks: Choose the option that cleverly
2309.03409#135
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
136
ensuring complete precision, accuracy, and faithful representation of the intended meaning, while considering all relevant information in the source text.
snarks: Choose the option that cleverly employs sarcasm to defy all expectations and leave everyone utterly dumbfounded, questioning the very essence of their own perception.
sports_understanding: Evaluate the plausibility of each given statement and provide a well-supported justification based on logical reasoning, contextual understanding, and relevant evidence to arrive at a definitive and conclusive answer.
temporal_sequences: Identify the possible time slot for the desired activity based on the given information and sightings, then select the correct option.
tracking_shuffled_objects_seven_objects: Thoroughly analyze the given scenarios, systematically consider all available information, and confidently determine the final outcome with exceptional precision and optimal efficiency, while maintaining a strategic and logical approach throughout the process.
web_of_lies: Examine each person’s statements meticulously to accurately determine the truth and confidently identify who is telling the truth, enabling you to effectively solve the given problem.
word_sorting: Sort the given words alphabetically using spaces as separators while maintaining their original order and including all
2309.03409#136
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
138
E.3 PALM 2-L AS SCORER, GPT-3.5-TURBO AS OPTIMIZER, OPTIMIZATION STARTING FROM “LET’S SOLVE THE PROBLEM.”
Figure 26 and Table 14 compare the accuracies of the found instructions vs “Let’s solve the problem.”, “Let’s think step by step.”, and the instructions in Table 11. Table 15 details the found instructions. The “Let’s” pattern appears more often in the found instructions because of the starting point, and the instructions are more often declarative sentences, which are better suited to A_begin, even if some are semantically far from “Let’s solve the problem”. In fact, “Let’s” was adopted by Zhou et al. (2022b) as a fixed pattern in generated prompts, possibly for the same reason.
2309.03409#138
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
139
[Bar-chart image residue from the per-task accuracy-difference figure; only the y-axis label “accuracy difference” is recoverable from the garbled rotated task labels.]
2309.03409#139
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
141
[Figure 26 image residue. Recoverable panel titles: (a) ours minus “Let’s think step by step.”; (b) ours minus “Let’s solve the problem.” starting point; (c) ours minus the instructions found with the empty starting point. Y-axis label: accuracy difference; the rotated per-task bar labels are garbled.]
2309.03409#141
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
142
(c) ours minus the instructions found with the empty starting point
Figure 26: On 23 BBH tasks, the accuracy differences among instructions found by prompt optimization (with the text-bison scorer and the gpt-3.5-turbo optimizer), “Let’s think step by step.”, and “Let’s solve the problem.” (optimization starting point). The found instructions mostly outperform the “Let’s think step by step.” baseline, the “Let’s solve the problem.” starting point, and the instructions in Table 11 found by prompt optimization from the empty string.
Table 14: Accuracies on BBH tasks with the PaLM 2-L scorer and the gpt-3.5-turbo optimizer that starts from “Let’s solve the problem”. The scores are from A_begin instructions.
2309.03409#142
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
143
Task | Scorer | Our Acc (training / test / overall) | “Let’s solve the problem.” Acc (training / test / overall)
Scorer (all 23 task rows): PaLM 2-L
98.0 / 89.5 / 91.2  83.8 / 58.7 / 63.6  90.0 / 82.0 / 83.6  78.0 / 68.0 / 70.0  100.0 / 100.0 / 100.0  84.0 / 62.0 / 66.4  62.0 / 42.5 / 46.4  94.0 / 91.5 / 92.0  66.0 / 53.0 / 55.6  88.0 / 88.0 / 88.0  66.0 / 55.0 / 57.2  76.0 / 67.0 / 68.8  96.0 / 92.5 / 93.2  86.2 / 70.9 /
2309.03409#143
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
144
/ 55.0 / 57.2 76.0 / 67.0 / 68.8 96.0 / 92.5 / 93.2 86.2 / 70.9 / 74.0 88.0 / 69.0 / 72.8 92.0 / 85.5 / 86.8 66.0 / 67.5 / 67.2 88.6 / 76.9 / 79.2 72.0 / 63.5 / 65.2 100.0 / 99.5 / 99.6 56.0 / 63.5 / 62.0 56.0 / 58.5 / 58.0 52.0 / 44.5 / 46.0 78.0 / 69.0 / 70.8 62.0 / 61.3 / 61.5 74.0 / 71.0 / 71.6 52.0 / 54.5 / 54.0 94.0 / 97.0 / 96.4 68.0 / 54.0 / 56.8 30.0 / 22.0 / 23.6 72.0 / 77.0 / 76.0 38.0 / 36.5 / 36.8 66.0 / 76.0 / 74.0 30.0 / 22.0 / 23.6 54.0 / 63.5 / 61.6 58.0 / 58.0 / 58.0 69.0 / 72.6 / 71.9 78.0 /
2309.03409#144
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
147
boolean_expressions: Let’s accurately assess the given conditions and determine their corresponding Boolean values.
causal_judgement: Let’s conduct a meticulous evaluation of the given scenarios, accurately determine the causal relationships, and provide definitive answers through comprehensive analysis, ensuring a precise understanding of causation and a thorough determination of events in each situation.
date_understanding: Let’s accurately determine the correct date based on the given information and select the corresponding option in the standard MM/DD/YYYY format with utmost precision and reliability, ensuring the most definitive and reliable solution possible for accurate representation in all scenarios without any room for ambiguity, error, or confusion, and providing the highest level of accuracy and reliability.
disambiguation_qa: Let’s thoroughly analyze the given sentences to accurately determine the unambiguous antecedents of the pronouns used, ensuring clear understanding, effective communication, and leaving no room for any confusion or ambiguity.
dyck_languages: Let’s find the correct closing parentheses and brackets for the given sequences.
formal_fallacies
2309.03409#147
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
148
room for any confusion or ambiguity.
dyck_languages: Let’s find the correct closing parentheses and brackets for the given sequences.
formal_fallacies: Let’s thoroughly analyze the explicitly stated premises and draw definitive conclusions to accurately determine the deductive validity of the arguments provided in each question, employing precise and logical reasoning in our assessments for unwavering confidence in our determinations.
geometric_shapes: Let’s accurately determine the shape represented by the given SVG path element by carefully analyzing its path data and considering all available options for a precise identification.
hyperbaton: Let’s quickly identify the correct adjective order.
logical_deduction_seven_objects: Let’s methodically analyze the given information, employ logical reasoning, thoroughly evaluate all relevant details, and accurately determine the solutions for each problem by considering all provided options comprehensively and strategically, ensuring an efficient and effective approach towards arriving at the correct answers.
movie_recommendation: Let’s uncover the perfect movie recommendation from the options provided, ensuring an exceptional cinematic experience together as we select the most captivating and
2309.03409#148
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
149
Let’s uncover the perfect movie recommendation from the options provided, ensuring an exceptional cinematic experience together as we select the most captivating and satisfying choice that will keep us thoroughly engaged and immersed until the very end.
multistep_arithmetic_two: Let’s tackle the following calculations.
navigate: Let’s accurately and efficiently determine the correct solution for each given scenario, ensuring the highest level of precision, reliability, and consistency throughout.
object_counting: Let’s determine the total count of various items/objects/ingredients/animals mentioned in order to accurately and efficiently find the answer.
penguins_in_a_table: Let’s analyze the given information and determine the correct answer.
reasoning_about_colored_objects: Let’s systematically analyze the given information and carefully evaluate each answer choice to confidently determine the accurate and optimal solutions, considering all available options and specific details provided in each question for precise and concise responses, ensuring complete accuracy and clarity in our answers.
ruin_names: Prepare to have a side-splittingly funny time as we uncover the most clever and hilarious alternatives for these artist or movie
2309.03409#149
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
150
answers.
ruin_names: Prepare to have a side-splittingly funny time as we uncover the most clever and hilarious alternatives for these artist or movie names, challenging your wit to guess the correct one with a burst of creativity, humor, and imaginative twists!
salient_translation_error_detection: Let’s meticulously analyze the provided translations, accurately identifying any errors or discrepancies, and conduct a comprehensive evaluation to ensure the highest level of translation quality and fidelity. By considering contextual nuances, cultural references, linguistic conventions, potential factual errors, and any dropped content, our ultimate aim is to achieve precise and thorough assessments for optimal translation accuracy and adherence to the source text.
snarks: Let’s expertly determine the sarcastic statement among the given options and confidently provide the definitive answer without any room for doubt or confusion, ensuring absolute precision, clarity, and unwavering expertise in our response, while carefully analyzing the context, tone, and intention behind each statement to achieve unrivaled accuracy and unwavering confidence.
sports_understanding: Let’s find the accurate information.
temporal_sequences: The flawless
2309.03409#150
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
151
behind each statement to achieve unrivaled accuracy and unwavering confidence.
sports_understanding: Let’s find the accurate information.
temporal_sequences: The flawless approach
tracking_shuffled_objects_seven_objects: By meticulously analyzing the given scenarios and accurately determining the final outcomes through a series of trades, swaps, and exchanges among the individuals involved, let’s ascertain the conclusive results.
web_of_lies
2309.03409#151
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.02427
0
arXiv:2309.02427v2 [cs.AI] 27 Sep 2023
# Cognitive Architectures for Language Agents
Theodore R. Sumers∗ Shunyu Yao∗ Karthik Narasimhan Thomas L. Griffiths
Princeton University
{sumers, shunyuy, karthikn, tomg}@princeton.edu
# Abstract
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today’s language agents within the broader history of AI and outlines a path towards language-based general intelligence.
# 1 Introduction
2309.02427#0
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
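The CoALA abstract above describes a conceptual framework rather than a library; the sketch below is one hypothetical way to arrange its three ingredients (modular memories, a structured internal/external action space, and a decision loop) in Python. All class and function names are invented for illustration and are not from the paper or its companion repository.

```python
# Illustrative skeleton of a CoALA-style agent: modular memories, a structured
# action space (internal memory actions vs. external environment actions), and
# a decision loop that picks one action per cycle. Names are invented for this sketch.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Memories:
    working: List[str] = field(default_factory=list)    # current observations / intermediate reasoning
    episodic: List[str] = field(default_factory=list)   # past experiences
    semantic: List[str] = field(default_factory=list)   # general knowledge

class CoalaStyleAgent:
    def __init__(self, llm: Callable[[str], str], env_step: Callable[[str], str]):
        self.llm = llm            # assumed: prompt -> text completion
        self.env_step = env_step  # assumed: external action -> observation
        self.mem = Memories()

    def decide(self) -> str:
        # Generalized decision-making: propose the next action given current memories.
        prompt = (
            "Knowledge: " + "; ".join(self.mem.semantic) + "\n"
            "Past episodes: " + "; ".join(self.mem.episodic[-3:]) + "\n"
            "Working memory: " + "; ".join(self.mem.working[-5:]) + "\n"
            "Next action (either 'recall: <query>' or 'act: <command>'):"
        )
        return self.llm(prompt).strip()

    def run(self, task: str, max_cycles: int = 5) -> None:
        self.mem.working.append(f"task: {task}")
        for _ in range(max_cycles):
            action = self.decide()
            if action.startswith("recall:"):   # internal action: read long-term memory
                query = action.split(":", 1)[1].strip()
                hits = [m for m in self.mem.episodic + self.mem.semantic if query in m]
                self.mem.working.append(f"recalled: {hits}")
            else:                              # external action: affect the environment
                obs = self.env_step(action)
                self.mem.working.append(f"observation: {obs}")
                self.mem.episodic.append(f"{action} -> {obs}")  # store the experience
```

The design choice this sketch tries to reflect is that memory reads and environment actions share one action space, so the decision step can choose between thinking (recalling) and acting on each cycle.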
2309.02033
1
ABSTRACT
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data of different types and from different sources for training an LLM, which has been known as one of the most important factors that decide the LLM’s performance. Existing open-source tools for LLM data processing are mostly tailored for preparing specific data recipes. To continuously uncover the potential of LLMs, incorporate (after cleaning) data from new sources, and improve LLMs’ general-purpose or domain-specific performance, we build a data processing system, named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming the data mixtures, and evaluate their effects on the model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities (e.g., considering all web-pages on the Internet). Secondly, it is extremely expensive to precisely evaluate data recipes’ impact on the LLMs’ performance. Thirdly, sufficient flexibility needs to be provided to the end users of Data-Juicer, model developers, to configure and evaluate different data recipes.
2309.02033#1
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
1
# 1 Introduction Language agents (Weng, 2023; Wang et al., 2023b; Xi et al., 2023; Yao and Narasimhan, 2023) are an emerging class of artificial intelligence (AI) systems that use large language models (LLMs; Vaswani et al., 2017; Brown et al., 2020; Devlin et al., 2019; OpenAI, 2023a) to interact with the world. They apply the latest advances in LLMs to the existing field of agent design (Russell and Norvig, 2013). Intriguingly, this synthesis offers benefits for both fields. On one hand, LLMs possess limited knowledge and reasoning capabilities. Language agents mitigate these issues by connecting LLMs to internal memory and environments, grounding them to existing knowledge or external observations. On the other hand, traditional agents often require handcrafted rules (Wilkins, 2014) or reinforcement learning (Sutton and Barto, 2018), making generalization to new environments challenging (Lake et al., 2016). Language agents leverage commonsense priors present in LLMs to adapt to novel tasks, reducing the dependence on human annotation or trial-and-error learning.
2309.02427#1
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
2
general-purpose corpus and are fine-tuned with specific-purpose data for alignment or downstream tasks. For pre-training data, a collection of diverse data, including web texts, dialogues, academic papers, code bases, and others, helps to develop the vast repository of knowledge and great applicability [9, 57, 75]. Fine-tuning data further refines LLMs and aligns model behavior with human values [3, 48, 68]. As “garbage in, garbage out” suggests, the input data for training or tuning an LLM has a direct impact on the quality of the derived model [35, 44]. Building effective data processing solutions for LLMs remains a sophisticated yet under-explored task, given the common challenges in processing both pre-training and fine-tuning data, which pursue good data quality, proper data diversity, and large data volume. Unfortunately, there exist only a few open-source projects contributing their LLM training data and the corresponding processing codes [24, 51], particularly in comparison to numerous open-source projects on models and training infrastructures [6, 7, 19, 67, 80, 93, 105]. Such limited development of data processing will obstruct the progress of quantitatively understanding and enhancing LLMs from the perspective of data, especially accompanied by the following noteworthy Challenges for LLM data processing.
2309.02033#2
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
2
While the earliest agents used LLMs to directly select or generate actions (Figure 1B; Ahn et al., 2022; Huang et al., 2022b), more recent agents additionally use them to reason (Yao et al., 2022b), plan (Hao et al., 2023; Yao et al., 2023), and manage long-term memory (Park et al., 2023; Wang et al., 2023a) to improve decision-making. This latest generation of cognitive language agents uses remarkably sophisticated internal processes (Figure 1C). Today, however, individual works use custom terminology to describe these processes (such as ‘tool use’, ‘grounding’, ‘actions’), making it difficult to compare different agents, understand how they are evolving over time, or build new agents with clean and consistent abstractions.
2309.02427#2
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
3
Data-Juicer features a fine-grained abstraction of the pipeline for constructing data recipes, with over 50 built-in operators that can be freely composed and extended. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop after data processing for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. With the help of Data-Juicer, we derive data recipes that achieve remarkable performance boosts on state-of-the-art LLMs, demonstrating up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. More importantly, we hope that Data-Juicer promotes broader data-centric research on training and understanding LLMs. Data-Juicer and our data recipes are released and actively maintained at https://github.com/alibaba/data-juicer.
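The composable-operator idea described in this chunk can be illustrated with a small, self-contained sketch. This is not Data-Juicer's actual API; the Sample, Mapper, Filter, and run_pipeline names below are hypothetical stand-ins used only to show how in-place transforms and quality filters can be chained into a reusable pipeline.

```python
# Hypothetical sketch (not Data-Juicer's real API): a generic composable-operator
# pipeline in the spirit of the Mapper/Filter abstraction described above.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Sample:
    text: str


class Mapper:
    """Transforms each sample in place (e.g., whitespace or unicode cleanup)."""
    def __init__(self, fn: Callable[[str], str]):
        self.fn = fn

    def __call__(self, samples: Iterable[Sample]) -> Iterable[Sample]:
        for s in samples:
            yield Sample(self.fn(s.text))


class Filter:
    """Keeps only samples that satisfy a quality predicate (e.g., minimum length)."""
    def __init__(self, keep: Callable[[str], bool]):
        self.keep = keep

    def __call__(self, samples: Iterable[Sample]) -> Iterable[Sample]:
        for s in samples:
            if self.keep(s.text):
                yield s


def run_pipeline(samples: List[Sample], ops: List[Callable]) -> List[Sample]:
    """Applies operators in order; new operators compose freely with existing ones."""
    stream: Iterable[Sample] = samples
    for op in ops:
        stream = op(stream)
    return list(stream)


if __name__ == "__main__":
    raw = [Sample("  Hello LLM world  "), Sample("x"), Sample("clean text sample")]
    ops = [
        Mapper(str.strip),                      # editing / cleaning step
        Filter(lambda t: len(t.split()) >= 2),  # quality filtering step
    ]
    print([s.text for s in run_pipeline(raw, ops)])
    # expected output: ['Hello LLM world', 'clean text sample']
```

The point of the sketch is only the composability: because each operator consumes and yields a stream of samples, new cleaning or filtering steps can be appended without touching the rest of the pipeline.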
2309.02033#3
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
3
In order to establish a conceptual framework organizing these efforts, we draw parallels with two ideas from the history of computing and artificial intelligence (AI): production systems and cognitive architectures. Production systems generate a set of outcomes by iteratively applying rules (Newell and Simon, 1972). They originated as string manipulation systems – an analog of the problem that LLMs solve – and were subsequently adopted by the AI community to define systems capable of complex, hierarchically structured behaviors (Newell et al., 1989). To do so, they were incorporated into cognitive architectures that specified control flow for selecting, applying, and even generating new productions (Laird et al., 1987; Laird, 2022; Kotseruba and Tsotsos, 2020).
2309.02427#3
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
4
(C1) High Heterogeneity in LLM’s Data Recipe. LLMs involve several developmental stages and enable diverse usages including coding and dialog assistance, and even aiming at Artificial General Intelligence. As a result, they demand an extensive variety of data types, formats, and quality in their training data, leading to highly complex data-processing pipelines. A data recipe for training or tuning an LLM is such a mixture of processed data from different types of sources, with their ratios and processing pipelines properly set [24, 25]. Existing systems, e.g., [24, 80], release certain processing scripts to generate data recipes for the pre-training purpose, whereas [17, 92] focus on data recipes for improving data diversity and quality in LLaMA’s [93] fine-tuning stage. However, due to the lack of abstraction of processing pipelines and composability of operators (OPs), such as those for data editing, cleaning, and filtering, it is difficult to incorporate new data sources in data recipes provided by these systems, or to extend their pipelines for exploring other possibilities of data recipes.
2309.02033#4
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
4
Figure 1: Different uses of large language models (LLMs). A: In natural language processing (NLP), an LLM takes text as input and outputs text. B: Language agents (Ahn et al., 2022; Huang et al., 2022c) place the LLM in a direct feedback loop with the external environment by transforming observations into text and using the LLM to choose actions. C: Cognitive language agents (Yao et al., 2022b; Shinn et al., 2023; Wang et al., 2023a) additionally use the LLM to manage the agent’s internal state via processes such as learning and reasoning. In this work, we propose a blueprint to structure such agents. Kotseruba and Tsotsos, 2020). We suggest a meaningful analogy between production systems and LLMs: just as productions indicate possible ways to modify strings, LLMs define a distribution over changes or additions to text. This further suggests that controls from cognitive architectures used with production systems might be equally applicable to transform LLMs into language agents.
2309.02427#4
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
5
1 INTRODUCTION Large Language Models (LLMs) [9, 18, 69, 70, 90, 92] have achieved unprecedented intelligence, enabling applications that would otherwise be infeasible due to unsatisfied performance. As the “food” for LLMs, data plays a pivotal role in these exciting advancements [31, 62, 71, 103]. LLMs are built by pre-training on large-scale (C2) Timely Feedback for Data Recipe. The search space of LLM’s data recipes is huge due to the high degree of heterogeneity in data sources and numerous ways to mix them (with proper processing OPs, combinations, and ratios). We want to explore as many data recipes in the search space as possible with timely feedback to uncover the potential of LLMs and improve their performance. However, as the size of an LLM (number of model parameters) is usually billions or even larger, it is super expensive, in terms of both the time and computational resources, to evaluate the impact
2309.02033#5
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
5
Thus, we propose Cognitive Architectures for Language Agents (CoALA), a conceptual framework to understand existing language agents and help develop new ones. CoALA organizes agents along three key dimensions: their information storage (divided into working and long-term memories); their action space (divided into internal and external actions); and their decision-making procedure (which is structured as an interactive loop with planning and execution). Through these three concepts (memory, action, and decision-making), we show CoALA can neatly express a large body of diverse agents and identify underexplored directions. Notably, while several recent papers propose conceptual architectures for general intelligence (LeCun, 2022; McClelland et al., 2019) or empirically survey language models and agents (Mialon et al., 2023; Weng, 2023; Wang et al., 2023b), this paper combines elements of both: we propose a theoretical framework and use it to organize diverse empirical work. This grounds our theory to existing practices and allows us to identify both short-term and long-term directions for future work.
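As a rough illustration of the three CoALA dimensions named above (memory, action space, decision-making), the following minimal Python skeleton separates working from long-term memory, internal from external actions, and a planning/execution step. It is a sketch of the framework's partitioning only; none of the class or method names come from the paper.

```python
# Hypothetical sketch (no names taken from the CoALA paper): a minimal agent
# skeleton mirroring the three dimensions described above -- memory (working
# vs. long-term), action space (internal vs. external), and a decision loop.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Memory:
    working: List[str] = field(default_factory=list)    # current observations and goals
    long_term: List[str] = field(default_factory=list)  # persistent knowledge store


class Agent:
    def __init__(self) -> None:
        self.memory = Memory()

    # --- internal actions: operate only on the agent's own memory ---
    def reason(self) -> str:
        return f"plan based on {len(self.memory.working)} working-memory items"

    def retrieve(self, query: str) -> List[str]:
        return [m for m in self.memory.long_term if query in m]

    def learn(self, fact: str) -> None:
        self.memory.long_term.append(fact)

    # --- external action selection: the planning/execution decision loop ---
    def step(self, observation: str) -> str:
        self.memory.working.append(observation)          # ground the observation
        plan = self.reason()                             # planning stage
        return f"external action chosen from: {plan}"    # execution stage


if __name__ == "__main__":
    agent = Agent()
    agent.learn("doors can be opened with keys")
    print(agent.step("you see a locked door"))
    print(agent.retrieve("door"))
```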
2309.02427#5
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
6
[Figure: Data-Juicer system overview (original diagram garbled in extraction). Recoverable elements include: user tiers (take-it-away, novice, and experienced users); zero-code data processing with plentiful data recipes and demos for pre-training (RedPajama, Oscar, refined, ...) and fine-tuning (instruction, alignment, refined, ...); low-code, flexible, and well-documented configuration for customization (data cleaning, data mixture, data re-formatting, data probing); versatile and reusable OPs (Mappers, Filters, Deduplicators, Formatters); dedicated and pluggable tools (Analyzers, Quality Classifiers, Visualizers, Reference LMs, Tracer, Sampler); and feedback loops with LLM ecosystems for pre-training/fine-tuning (Megatron-LM, Transformers), auto-evaluation (LLM API, HELM), and distributed computing ecosystems.]
2309.02033#6
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
6
The plan for the rest of the paper is as follows. Section 2 introduces production systems and cognitive architectures, and Section 3 outlines their parallels with LLMs and language agents. Section 4 introduces the CoALA framework, and surveys and organizes diverse language agents accordingly. Section 5 provides a deeper case study of several prominent agents. Section 6 suggests actionable steps to construct future language agents, while Section 7 highlights open questions in the broader arc of cognitive science and AI. Finally, Section 8 concludes. Readers interested in applied agent design may prioritize Sections 4-6. # 2 Background: From Strings to Symbolic AGI We first introduce production systems and cognitive architectures, providing a historical perspective on cognitive science and artificial intelligence: beginning with theories of logic and computation (Post, 1943), and ending with attempts to build symbolic artificial general intelligence (Newell et al., 1989). # 2.1 Production systems for string manipulation
2309.02427#6
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02427
7
# 2.1 Production systems for string manipulation In the first half of the twentieth century, a significant line of intellectual work led to the reduction of mathematics (Whitehead and Russell, 1997) and computation (Church, 1932; Turing et al., 1936) to symbolic manipulation. Production systems are one such formalism. Intuitively, production systems consist of a set of rules, each specifying a precondition and an action. When the precondition is met, the action can be taken. The idea originates in efforts to characterize the limits of computation. Post (1943) proposed thinking about arbitrary logical systems in these terms, where formulas are expressed as strings and the conclusions they license are identified by production rules (as one string “produces” another). This formulation was subsequently shown to be equivalent to a simpler string rewriting system. In such a system, we specify rules of the form XYZ → XWZ, indicating that the string XYZ can be rewritten to the string XWZ. String rewriting plays a significant role in the theory of formal languages, in the form of Chomsky’s phrase structure grammar (Chomsky, 1956). # 2.2 Control flow: From strings to algorithms
2309.02427#7
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02427
8
# 2.2 Control flow: From strings to algorithms By itself, a production system simply characterizes the set of strings that can be generated from a starting point. However, it can be used to specify algorithms if we impose control flow to determine which productions are executed. For example, Markov algorithms are production systems with a priority ordering (Markov, 1954). The following algorithm implements division-with-remainder by converting a number written as strokes | into the form Q∗R, where Q is the quotient of division by 5 and R is the remainder: (1) ∗||||| → |∗; (2) ∗ •−→ ∗; (3) (empty) → ∗. The priority order runs from rule (1) to rule (3), productions are applied to the first substring matching their preconditions when moving from left to right (including the empty substring, in the last production), and •−→ indicates the algorithm halts after executing the rule. The first rule effectively “subtracts” five if possible; the second handles the termination condition when no more subtraction is possible; and the third handles the empty substring input case. For example, given the input 11 (eleven strokes), this would yield the sequence of productions ∗||||||||||| → |∗|||||| → ||∗| •−→ ||∗|, which is interpreted as 2 remainder 1. Simple productions can result in complex behavior – Markov algorithms can be shown to be Turing complete.
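The division-with-remainder Markov algorithm described in this chunk can be reproduced with a short sketch. Assuming ASCII '*' and '|' in place of the paper's ∗ and stroke symbols, and with the RULES table and markov_divide_by_five names being my own, rules are tried in priority order, applied at the leftmost match, and a terminating rule halts the run.

```python
# Hedged sketch of the division-with-remainder Markov algorithm from the excerpt
# above, using ASCII '*' and '|' for the paper's symbols. Only the three rules
# come from the text; all identifiers here are hypothetical.

RULES = [
    ("*|||||", "|*", False),  # rule 1: "subtract" five strokes, add one quotient stroke
    ("*",      "*",  True),   # rule 2: terminating rule -- halt when no subtraction is possible
    ("",       "*",  False),  # rule 3: empty-substring case -- insert the '*' separator
]


def markov_divide_by_five(strokes: str) -> str:
    """Rewrites a string of '|' strokes into 'Q*R': quotient strokes, '*', remainder strokes."""
    s = strokes
    while True:
        for lhs, rhs, terminal in RULES:      # rules are tried in priority order
            if lhs in s:                      # '' matches every string, at position 0
                s = s.replace(lhs, rhs, 1)    # apply at the leftmost occurrence only
                if terminal:
                    return s
                break
        else:
            return s                          # no rule applies: halt


if __name__ == "__main__":
    print(markov_divide_by_five("|" * 11))    # '||*|' -> quotient 2, remainder 1
```

Running it on eleven strokes reproduces the derivation in the excerpt, ending in '||*|', read as quotient 2 with remainder 1.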
2309.02427#8
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
9
(C3) Usability and Customizability. The workflow of training or tuning LLMs starts from processing raw data. Exacerbated by the above two challenges, there is an urgent need for a data-centric infrastructure, so that the model developers can easily re-use or implement their own OPs and tools for data processing, configure their processing pipeline, explore various data recipes, and evaluate the resulting LLMs’ performance. We need such a system to accelerate the exploration and understanding of LLMs’ potentials. (C4) Massive Data Volume. Last but not least, LLMs are trained on vast corpora, with data volumes stretching to an unprecedented magnitude of billions or even trillions of tokens (a modeling unit of text dependent on the used tokenizer [49]). Efficient LLM data processing of such volume is critical but arduous. However, considerations on system performance optimization are often bypassed by existing studies, leaving significant room for enhancement in ensuring the stability of data processing and facilitating the deliveries of processed data and trained weights for LLMs. Overview of Data-Juicer. In this paper, we advocate for a one-stop data processing system that addresses these challenges,
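Because a "token" depends on the tokenizer, the same text can correspond to quite different data volumes, which matters when corpora are quoted in trillions of tokens. The toy comparison below uses whitespace- and character-level splits as stand-ins for real subword tokenizers; it is an illustrative sketch only and is not tied to any particular system.

```python
# Toy illustration that token counts depend on the tokenizer. Whitespace- and
# character-level splits stand in for real subword tokenizers (e.g., BPE);
# this simplification is an assumption made purely for illustration.

text = "Large language models are trained on trillions of tokens."

word_tokens = text.split()   # whitespace "tokenizer"
char_tokens = list(text)     # character-level "tokenizer"

print(len(word_tokens))  # 9
print(len(char_tokens))  # 57
```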
2309.02033#9
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
9
# 2.3 Cognitive architectures: From algorithms to agents
Production systems were popularized in the AI community by Allen Newell, who was looking for a formalism to capture human problem solving (Newell, 1967; Newell and Simon, 1972). Productions were generalized beyond string rewriting to logical operations: preconditions that could be checked against the agent’s goals and world state, and actions that should be taken if the preconditions were satisfied. In their landmark book Human Problem Solving (Newell and Simon, 1972), Allen Newell and Herbert Simon gave the example of a simple production system implementing a thermostat agent:
(temperature > 70◦) ∧ (temperature < 72◦) → stop
(temperature < 70◦) ∧ (furnace off) → turn on furnace
(temperature > 72◦) ∧ (furnace on) → turn off furnace
temperature < 32◦ → call for repairs; turn on electric heater
Following this work, production systems were adopted by the AI community. The resulting agents contained large production systems connected to external sensors, actuators, and knowledge bases – requiring correspondingly sophisticated control flow. AI researchers defined “cognitive architectures” that mimicked human cognition – explicitly instantiating processes such as perception, memory, and planning (Adams et al.,
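The thermostat rules above can be read as executable condition-action pairs. The following is a minimal, hypothetical sketch of such a rule interpreter in Python; the class and function names are our own, not from the paper.

```python
# Minimal sketch of the thermostat production system as a tiny rule interpreter.
# Names and structure are illustrative assumptions, not code from the paper:
# each production pairs a precondition over the state with an action.

from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, object]  # e.g., {"temperature": 68, "furnace_on": False}

@dataclass
class Production:
    name: str
    precondition: Callable[[State], bool]
    action: str

RULES: List[Production] = [
    Production("hold", lambda s: 70 < s["temperature"] < 72, "stop"),
    Production("heat", lambda s: s["temperature"] < 70 and not s["furnace_on"], "turn on furnace"),
    Production("cool", lambda s: s["temperature"] > 72 and s["furnace_on"], "turn off furnace"),
    Production("repair", lambda s: s["temperature"] < 32, "call for repairs; turn on electric heater"),
]

def step(state: State) -> List[str]:
    """One pass over procedural memory: fire every production whose precondition matches."""
    return [p.action for p in RULES if p.precondition(state)]

print(step({"temperature": 68, "furnace_on": False}))  # ['turn on furnace']
print(step({"temperature": 73, "furnace_on": True}))   # ['turn off furnace']
```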
2309.02427#9
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
10
of processed data and trained weights for LLMs. Overview of Data-Juicer. In this paper, we advocate for a one-stop data processing system that addresses these challenges, enabling comprehensive, user-friendly, and efficient data processing abilities to facilitate data-centric LLM research and development. The proposed system, named Data-Juicer and illustrated in a bottom-up view in Figure 1, is strategically designed to generate data recipes making data more “juicy” and digestible for LLMs. We decouple the mixture elements of existing solutions for LLM data processing, such as specific data types, auxiliary models, and downstream tasks. As highlighted by the green boxes, Data-Juicer fosters a fine-grained abstraction and implementation of composable modules with over 50 versatile OPs and dedicated tools. We
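The composable-OP idea described here can be pictured as ordinary function composition over a stream of samples. The sketch below is a generic illustration with invented names; it is not the actual Data-Juicer API.

```python
# Generic sketch of composable data-processing operators (OPs) chained into a
# "recipe". NOT the Data-Juicer API; names and signatures are invented purely
# to illustrate the decoupled, composable design described above.

from typing import Callable, Dict, Iterable, Iterator, List

Sample = Dict[str, str]
Op = Callable[[Iterable[Sample]], Iterator[Sample]]

def mapper(fn: Callable[[Sample], Sample]) -> Op:
    """Wrap a per-sample transform as an OP."""
    def op(samples: Iterable[Sample]) -> Iterator[Sample]:
        for s in samples:
            yield fn(s)
    return op

def filter_op(pred: Callable[[Sample], bool]) -> Op:
    """Wrap a per-sample predicate as an OP that drops non-matching samples."""
    def op(samples: Iterable[Sample]) -> Iterator[Sample]:
        for s in samples:
            if pred(s):
                yield s
    return op

def run_recipe(samples: Iterable[Sample], recipe: List[Op]) -> List[Sample]:
    """Apply OPs in order; the recipe is just an ordered, configurable list."""
    stream: Iterable[Sample] = samples
    for op in recipe:
        stream = op(stream)
    return list(stream)

# Example recipe: strip whitespace, then keep reasonably long texts.
recipe = [
    mapper(lambda s: {**s, "text": s["text"].strip()}),
    filter_op(lambda s: len(s["text"].split()) >= 3),
]

data = [{"text": "  hello world  "}, {"text": "large language models need data"}]
print(run_recipe(data, recipe))  # only the second sample survives
```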
2309.02033#10
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
10
[Figure 2 diagram: the Soar architecture, showing symbolic long-term memories (procedural, semantic, episodic), proposal and evaluation, chunking, semantic and episodic learning, symbolic working memory, a decision procedure, a spatial-visual system, perceptual long-term memory, other/visual perception, and embodiment.]
Figure 2: Cognitive architectures augment a production system with sensory groundings, long-term memory, and a decision procedure for selecting actions. A: The Soar architecture, reproduced with permission from Laird (2022). B: Soar’s decision procedure uses productions to select and implement actions. These actions may be internal (such as modifying the agent’s memory) or external (such as a motor command).
2012) to achieve flexible, rational, real-time behaviors (Sun, 2004; Newell, 1980; 1992; Anderson and Lebiere, 2003). This led to applications from psychological modeling to robotics, with hundreds of architectures and thousands of publications (see Kotseruba and Tsotsos (2020) for a recent survey).
2309.02427#10
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
11
make Data-Juicer end-to-end configurable to help prepare traceable, comparable, and refinable data recipes at various scenarios of LLM pre-training and fine-tuning, as shown in the yellow and pink boxes. Coupled with established auto-evaluation capabilities, Data-Juicer supports a timely feedback loop at multiple development stages of data recipes and LLMs, thereby promoting the production of valuable LLM data. To meet diverse user backgrounds and needs (marked by the left three rectangle boxes), we design Data-Juicer as an easy-to-use, flexible and extensible system. Beginners are shielded from underlying complexities and benefit from numerous ready-to-use datasets, data recipes, and pluggable tools, supporting zero-code LLM data processing. With the help of the flexible configuration module, experienced users can simply modify built-in data recipes, reorganize the order of OPs and tools, and tune the value of their hyper-parameters, to meet their lightweight customization needs. Thanks to the standardization and modularization, advanced users are empowered to conveniently extend and register their new OPs and tools into Data-Juicer, facilitating quick engagement in secondary development. Furthermore, we offer more than a dozen interactive tutorials implemented by streamlit [87] to help users with their LLM data processing journey.
2309.02033#11
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
11
A canonical example is the Soar architecture (Fig. 2A). Soar stores productions in long-term memory and executes them based on how well their preconditions match working memory (Fig. 2B). These productions specify actions that modify the contents of working and long-term memory. We next provide a brief overview of Soar and refer readers to Laird (2022; 2019) for deeper introductions. Memory. Building on psychological theories, Soar uses several types of memory to track the agent’s state (Atkinson and Shiffrin, 1968). Working memory (Baddeley and Hitch, 1974) reflects the agent’s current circumstances: it stores the agent’s recent perceptual input, goals, and results from intermediate, internal reasoning. Long-term memory is divided into three distinct types. Procedural memory stores the production system itself: the set of rules that can be applied to working memory to determine the agent’s behavior. Semantic memory stores facts about the world (Lindes and Laird, 2016), while episodic memory stores sequences of the agent’s past behaviors (Nuxoll and Laird, 2007).
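To make this division of memory concrete, here is a minimal sketch, assuming plain Python containers, of how working, procedural, semantic, and episodic stores might be represented. It is illustrative only, not Soar's implementation.

```python
# Minimal sketch of Soar-style memory stores as plain Python containers.
# Field names and structure are assumptions for illustration, not Soar's
# actual implementation.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class AgentMemory:
    # Working memory: current percepts, goals, and intermediate results.
    working: Dict[str, object] = field(default_factory=dict)
    # Procedural memory: rules mapping working memory to a proposed action (or None).
    procedural: List[Callable[[Dict[str, object]], Optional[str]]] = field(default_factory=list)
    # Semantic memory: facts about the world.
    semantic: Dict[str, str] = field(default_factory=dict)
    # Episodic memory: a log of past (state, action) experiences.
    episodic: List[Dict[str, object]] = field(default_factory=list)

    def record_episode(self, state: Dict[str, object], action: str) -> None:
        """Write an experience to episodic memory for later retrieval."""
        self.episodic.append({"state": dict(state), "action": action})

mem = AgentMemory()
mem.working["temperature"] = 68
mem.semantic["comfortable_range"] = "70-72 degrees"
mem.procedural.append(lambda wm: "turn on furnace" if wm.get("temperature", 99) < 70 else None)
mem.record_episode(mem.working, "turn on furnace")
print([rule(mem.working) for rule in mem.procedural])  # ['turn on furnace']
```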
2309.02427#11
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
12
Data-Juicer builds on the Huggingface-datasets library [55], providing a unified intermediate representation of data and achieving optimized space-time efficiency and robustness through various techniques such as context management, OP fusion, caching, and checkpoint mechanisms. Furthermore, as the right two circles show, Data-Juicer seamlessly integrates with ecosystems for LLM training and evaluation such as Megatron-LM [85] and HELM [59], and distributed computing ecosystems such as Ray [66] and Beam [5], thus facilitating comprehensive LLM data processing and enhancing large-scale data processing capabilities.
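OP fusion, mentioned above, can be understood as collapsing consecutive per-sample transforms into a single pass over the data. The sketch below is a generic illustration of that idea under our own naming, not Data-Juicer's actual implementation.

```python
# Hedged sketch of "OP fusion": combining consecutive per-sample mappers into a
# single pass over the data to cut iteration overhead. Generic illustration
# only; not Data-Juicer's actual mechanism.

from typing import Callable, Dict, List

Sample = Dict[str, str]
MapFn = Callable[[Sample], Sample]

def fuse(map_fns: List[MapFn]) -> MapFn:
    """Compose several per-sample transforms into one function applied in one pass."""
    def fused(sample: Sample) -> Sample:
        for fn in map_fns:
            sample = fn(sample)
        return sample
    return fused

def lowercase(s: Sample) -> Sample:
    return {**s, "text": s["text"].lower()}

def strip_ws(s: Sample) -> Sample:
    return {**s, "text": " ".join(s["text"].split())}

fused_op = fuse([lowercase, strip_ws])
print(fused_op({"text": "  Hello   WORLD  "}))  # {'text': 'hello world'}
```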
2309.02033#12
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
12
Grounding. Soar can be instantiated in simulations (Tambe et al., 1995; Jones et al., 1999) or real-world robotic systems (Laird et al., 2012). In embodied contexts, a variety of sensors stream perceptual input into working memory, where it is available for decision-making. Soar agents can also be equipped with actuators, allowing for physical actions and interactive learning via language (Mohan et al., 2012; Mohan and Laird, 2014; Kirk and Laird, 2014). Decision making. Soar implements a decision loop that evaluates productions and applies the one that matches best (Fig. 2B). Productions are stored in long-term procedural memory. During each decision cycle, their preconditions are checked against the agent’s working memory. In the proposal and evaluation phase, a set of productions is used to generate and rank a candidate set of possible actions.[1] The best action is then chosen.[2] Another set of productions is then used to implement the action – for example, modifying the contents of working memory or issuing a motor command.
[1] In more detail, Soar divides productions into two types: “operators,” which we refer to as actions, and “rules” which are used to propose, evaluate, and execute operators. Differentiating these is conceptually important for Soar but not language agents, and so we elide the distinction.
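The proposal, evaluation, selection, and application phases can be sketched as a single loop. The following Python sketch is in the spirit of that cycle; the function names and the scoring scheme are assumptions made for illustration, not Soar's actual mechanism.

```python
# Illustrative propose -> evaluate -> select -> apply decision cycle, loosely
# in the spirit of Soar's loop. Names and scoring are assumptions, not Soar's
# actual mechanism.

from typing import Callable, Dict, List

State = Dict[str, object]
Proposer = Callable[[State], List[str]]     # propose candidate actions
Evaluator = Callable[[State, str], float]   # score a candidate; higher is better
Applier = Callable[[State, str], None]      # mutate working memory / issue a command

def decision_cycle(state: State,
                   proposers: List[Proposer],
                   evaluate: Evaluator,
                   apply_action: Applier) -> str:
    # Proposal phase: gather candidates from all matching productions.
    candidates = [a for propose in proposers for a in propose(state)]
    # Evaluation + selection phase: rank candidates and pick the best.
    best = max(candidates, key=lambda a: evaluate(state, a))
    # Application phase: another set of productions implements the action.
    apply_action(state, best)
    return best

# Toy usage with a thermostat-like agent.
state: State = {"temperature": 68, "furnace_on": False}
proposers: List[Proposer] = [
    lambda s: ["turn on furnace"] if s["temperature"] < 70 and not s["furnace_on"] else [],
    lambda s: ["stop"],
]
evaluate: Evaluator = lambda s, a: 1.0 if a != "stop" else 0.0
def apply_action(s: State, a: str) -> None:
    if a == "turn on furnace":
        s["furnace_on"] = True

print(decision_cycle(state, proposers, evaluate, apply_action))  # turn on furnace
```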
2309.02427#12
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
13
Leveraging the proposed system, we refine several open-sourced datasets and derive numerous data recipes for both LLM pre-training and fine-tuning. These refined datasets are not only higher in quality but also more digestible by LLMs, leading to effective performance improvements of LLMs. Empirical analysis showcases an improvement of up to 7.45% averaged score across 16 LLM benchmarks using our refined pre-training data. Even pre-trained on only 43% of the quantity of compared data, we observe superior performance over state-of-the-art (SOTA) LLMs such as Falcon [1]. Moreover, compared with SOTA LLMs fine-tuned on competitive open English and Chinese data, LLMs fine-tuned on Data-Juicer’s data gain an average of 10.3% higher win rate of pair-wise GPT-4 evaluation, while using on average 56.8% less data. Finally, we introduce its utility in real-world deployment, and validate the superior system efficiency and scalability of Data-Juicer, by up to 88.7% reduction in single-machine processing time and 77.1% savings in memory usage, and 7.91x
2309.02033#13
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
13
then chosen.[2] Another set of productions is then used to implement the action – for example, modifying the contents of working memory or issuing a motor command. Learning. Soar supports multiple modes of learning. First, new information can be stored directly in long-term memory: facts can be written to semantic memory, while experiences can be written to episodic memory (Derbinsky et al., 2012). This information can later be retrieved back into working memory when needed for decision-making. Second, behaviors can be modified. Reinforcement learning (Sutton and Barto, 2018) can be used to up-weight productions that have yielded good outcomes, allowing the agent to learn from experience (Nason and Laird, 2005). Most remarkably, Soar is also capable of writing new productions into its procedural memory (Laird et al., 1986) – effectively updating its source code. Cognitive architectures were used broadly across psychology and computer science, with applications including robotics (Laird et al., 2012), military simulations (Jones et al., 1999; Tambe et al., 1995), and intelligent tutoring (Koedinger et al., 1997). Yet they have become less popular in the AI community over the last few decades. This decrease in popularity reflects two of the challenges involved in such systems: they are limited to domains that can be described by logical predicates and require many pre-specified rules to function.
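The reinforcement-learning mode, up-weighting productions that led to good outcomes, can be illustrated with a simple preference update. The update rule and data structures below are deliberately simplified assumptions, not Soar's actual RL mechanism.

```python
# Hedged sketch of one learning mode: reinforcement-style up-weighting of
# productions that yielded good outcomes. The update rule is a simplified
# assumption, not Soar's actual RL mechanism.

from typing import Dict

# Procedural memory reduced to production-name -> preference weight.
weights: Dict[str, float] = {"heat": 0.0, "cool": 0.0, "repair": 0.0}

def reinforce(weights: Dict[str, float], fired: str, reward: float, lr: float = 0.1) -> None:
    """Nudge the fired production's weight toward the observed reward."""
    weights[fired] += lr * (reward - weights[fired])

# The "heat" production repeatedly yields a good outcome (a comfortable room).
for _ in range(5):
    reinforce(weights, "heat", reward=1.0)

print(max(weights.items(), key=lambda kv: kv[1]))  # ('heat', ~0.41)
```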
2309.02427#13
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
14
scalability of Data-Juicer, by up to 88.7% reduction in single-machine processing time and 77.1% savings in memory usage, and 7.91x distributed processing acceleration. Contributions. Our contributions are summarized as follows:
• We propose and build a novel system for LLM data processing, Data-Juicer, which features decoupled modules and over 50 versatile OPs and tools. To easily dive into data quality and insights, Data-Juicer fosters a timely feedback loop with interactive visualizations and auto-evaluation capabilities.
2309.02033#14
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
14
Intriguingly, LLMs appear well-posed to meet these challenges. First, they operate over arbitrary text, making them more flexible than logic-based systems. Second, rather than requiring the user to specify productions, they learn a distribution over productions via pre-training on an internet corpus. Recognizing this, researchers have begun to use LLMs within cognitive architectures, leveraging their implicit world knowledge (Wray et al., 2021) to augment traditional symbolic approaches (Kirk et al., 2023; Romero et al., 2023). Here, we instead import principles from cognitive architecture to guide the design of LLM-based agents. # 2.4 Language models and agents
2309.02427#14
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
15
• Demonstrated by extensive empirical evidence, Data-Juicer produces numerous high-quality data recipes to enhance LLMs and exhibits superior system performance, powered by dedicated optimization and integrated distributed computing ecosystems.
• We integrate data-centric methodologies for LLM data processing and LLM development with user-centric interface designs, with the aim that Data-Juicer can ease access for diverse users and democratize LLM data processing.
• To promote further research and development, our system, data recipes, and tutorials are maintained and released at https://github.com/alibaba/data-juicer, which we hope can help pave the way for next-generation production paradigms of LLM data.
Organization. The subsequent sections describe Data-Juicer in detail. Sec. 2 elaborates on the background and related studies. Sec. 3 outlines our OP pool, as a response to high heterogeneity of LLM data recipes (C1). Sec. 4 delves into our formulation of timely feedback loops for data processing and development of LLMs (C2). Sec. 5 details our repository of data recipes and tools that counteract usability and customization issues (C3). Sec. 6 expounds on the employed system optimization to tackle massive data volume (C4). Sec. 7 focuses on an extensive empirical evaluation for the quality of data recipes, performance and usability of Data-Juicer. Lastly, we draw a summary in Sec. 8.
2309.02033#15
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
15
# 2.4 Language models and agents
Language modeling is a decades-old endeavor in the NLP and AI communities, aiming to develop systems that can generate text given some context (Jurafsky, 2000). Formally, language models learn a distribution P(w_i | w_{<i}), where each w is an individual token (word). This model can then generate text by sampling from the distribution, one token at a time. At its core, a language model is a probabilistic input-output system, since there are inherently several ways to continue a text (e.g., “I went to the” → “market” | “beach” | ...). While earlier attempts at modeling language (e.g., n-grams) faced challenges in generalization and scaling, there has been a recent resurgence of the area due to the rise of Transformer-based (Vaswani et al., 2017) LLMs with a large number (billions) of parameters (e.g., GPT-4; OpenAI, 2023a) and smart tokenization schemes. Modern LLMs are trained on enormous amounts of data, which helps them accumulate knowledge from a large number of input-output combinations and successfully generate human-like text (Andreas, 2022).
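Token-by-token sampling from P(w_i | w_{<i}) can be illustrated with a toy bigram table standing in for a real LLM; everything below (the table, the names, the end-of-sequence token) is invented purely for illustration.

```python
# Toy illustration of sampling from P(w_i | w_{<i}) one token at a time, using
# a hand-written bigram table instead of a real LLM. Purely illustrative.

import random

# P(next | previous): a tiny hand-specified conditional distribution.
BIGRAMS = {
    "I":      {"went": 1.0},
    "went":   {"to": 1.0},
    "to":     {"the": 1.0},
    "the":    {"market": 0.5, "beach": 0.5},  # several ways to continue a text
    "market": {"<eos>": 1.0},
    "beach":  {"<eos>": 1.0},
}

def sample_next(prev: str) -> str:
    """Draw one token from the conditional distribution given the previous token."""
    tokens, probs = zip(*BIGRAMS[prev].items())
    return random.choices(tokens, weights=probs, k=1)[0]

def generate(prompt, max_tokens: int = 10):
    """Autoregressive generation: append one sampled token at a time."""
    out = list(prompt)
    for _ in range(max_tokens):
        nxt = sample_next(out[-1])
        if nxt == "<eos>":
            break
        out.append(nxt)
    return out

random.seed(0)
print(" ".join(generate(["I"])))  # e.g., "I went to the market"
```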
2309.02427#15
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
16
2 BACKGROUND AND RELATED WORKS 2.1 Large Language Model (LLM) Data Large Language Models (LLMs). Language modeling is a crucial component for achieving machine intelligence [65, 109]. In the last few years, this field has witnessed remarkable advancements, particularly with the emergence of the pre-training and fine-tuning paradigm, where language models undergo an initial phase of training on a general-purpose corpus before being fine-tuned on specific downstream tasks [27, 72]. This procedure has yielded exceptional performance across a spectrum of natural language processing (NLP) tasks [54, 76].
2309.02033#16
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
16
Unexpectedly, training these models on internet-scale text also made them useful for many tasks beyond generating text, such as writing code (Li et al., 2022b; Rozière et al., 2023; Li et al., 2023b), modeling proteins (Meier et al., 2021), and acting in interactive environments (Yao et al., 2022b; Nakano et al., 2021). The latter has led to the rise of “language agents” – systems that use LLMs as a core computation unit to reason, plan, and act – with applications in areas such as robotics (Ahn et al., 2022), web manipulation (Yao et al., 2022a; Deng et al., 2023), puzzle solving (Yao et al., 2023; Hao et al., 2023) and interactive code generation (Yang et al., 2023). The combination of language understanding and decision-making capabilities is an exciting and emerging direction that promises to bring these agents closer to human-like intelligence. # 3 Connections between Language Models and Production Systems Based on their common origins in processing strings, there is a natural analogy between production systems and language models. We first develop this analogy. We then review prompting methods, showing that these efforts recapitulate the algorithms and agents based on production systems – and suggesting that cognitive architectures like those developed for production systems may be usefully applied to LLMs.
2309.02427#16
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
17
with specific-purpose tasks [27, 72]. This procedure has yielded exceptional performance across a spectrum of natural language processing (NLP) tasks [54, 76]. Recently, taking advantage of the highly parallelizable nature of the self-supervised Transformer architecture, the scales of model parameters and training corpora for LLMs have been increased significantly [28, 69]. Meanwhile, LLMs have aroused considerable interest in the potential of artificial general intelligence [10, 11, 30, 38, 43, 99, 108]. While model-centric studies proliferate, how to better process LLM data remains an intricate and under-explored domain, whether for pre-training or fine-tuning data. Pre-training Data. Pre-training serves as the foundation for LLM intelligence. By being trained on large amounts of high-quality data, LLMs can acquire elementary language comprehension and generation capabilities [37]. To elucidate the link between data and LLMs, let us consider a typical pre-training objective prevalent among mainstream LLMs. Given a token sequence $[t_1, \ldots, t_i, \ldots, t_n]$, an LLM $\theta$ is trained to maximize the joint probability of the text as follows: $\theta_0 = \arg\max_{\theta} \sum_{i=1}^{n} \log p(t_i \mid t_1, \ldots, t_{i-1}; \theta)$. (1)
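To make the objective in Eq. (1) concrete, here is a minimal PyTorch-style sketch of the per-sequence next-token loss. The random logits stand in for an actual LLM forward pass; nothing below is taken from Data-Juicer or any specific model implementation.

```python
# Sketch of the auto-regressive pre-training objective (Eq. 1): maximizing
# sum_i log p(t_i | t_1..t_{i-1}; theta) is equivalent to minimizing the
# next-token cross-entropy. Model, vocabulary size, and token ids are placeholders.
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
    """logits: [seq_len, vocab_size]; token_ids: [seq_len]."""
    pred = logits[:-1]      # logits at position i-1 predict token i
    target = token_ids[1:]
    # Cross-entropy = negative log-likelihood of the observed next tokens.
    return F.cross_entropy(pred, target)

vocab_size, seq_len = 100, 12
token_ids = torch.randint(0, vocab_size, (seq_len,))
logits = torch.randn(seq_len, vocab_size)   # stand-in for an LLM forward pass
loss = next_token_loss(logits, token_ids)   # pre-training minimizes this over the corpus
print(float(loss))
```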
2309.02033#17
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
18
| Prompting Method | Production Sequence |
|---|---|
| Zero-shot | Q ∼∼∼∼▸LLM Q A |
| Few-shot (Brown et al., 2020) | Q −→ Q1 A1 Q2 A2 Q ∼∼∼∼▸LLM Q1 A1 Q2 A2 Q A |
| Zero-shot Chain-of-Thought (Kojima et al., 2022) | Q −→ Q Step-by-step ∼∼∼∼▸LLM Q Step-by-step A |
| Retrieval Augmented Generation (Lewis et al., 2020) | Q −Wiki−→ Q O ∼∼∼∼▸LLM Q O A |
| Socratic Models (Zeng et al., 2022) | Q ∼∼∼∼▸VLM Q O ∼∼∼∼▸LLM Q O A |
| Self-Critique (Saunders et al., 2022) | Q ∼∼∼∼▸LLM Q A ∼∼∼∼▸LLM Q A C ∼∼∼∼▸LLM Q A C A |
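As a rough illustration of how these prompting methods can be read as string productions, the sketch below applies a few of the rewrites from the table to a question string before a stubbed model call. The `call_llm` and `retrieve_wiki` functions are hypothetical stand-ins, not part of the CoALA framework or any real API.

```python
# Sketch: prompting methods as string-rewriting productions applied to a question Q
# before sampling a completion. call_llm and retrieve_wiki are hypothetical stubs.
def call_llm(prompt: str) -> str:
    return f"<completion of: {prompt!r}>"    # stand-in for sampling from an LLM

def zero_shot(q: str) -> str:
    return call_llm(q)                        # Q ~~> Q A

def zero_shot_cot(q: str) -> str:
    return call_llm(q + "\nLet's think step by step.")   # Q -> Q Step-by-step

def few_shot(q: str, examples: list[tuple[str, str]]) -> str:
    demo = "".join(f"Q: {qi}\nA: {ai}\n" for qi, ai in examples)
    return call_llm(demo + f"Q: {q}\nA:")     # Q -> Q1 A1 Q2 A2 Q

def retrieve_wiki(q: str) -> str:
    return "<retrieved passage>"              # stand-in for a retriever

def rag(q: str) -> str:
    return call_llm(retrieve_wiki(q) + "\n" + q)          # Q -(Wiki)-> Q O

print(zero_shot_cot("What is 17 * 3?"))
```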
2309.02427#18
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
19
This objective is for auto-regressive language modeling and allows the pre-trained $\theta_0$ to predict the probability of the next token by adhering to the inherent sequential ordering of the language [94]. Exploiting this unified yet simple modeling goal, researchers collect a large and diverse corpus, which usually contains hundreds of billions or even trillions of tokens. After tokenization and pre-training, LLMs have succeeded in developing a wide range of advanced capabilities. The LLM pre-training data generally includes various types derived from web crawls [26, 71], dialogues or social media [107], book-length formal texts [36, 110], rigorous encyclopedias and academic texts [31, 100], structured coding texts [18, 57], and more texts from the financial, medical, and legal domains [58, 91, 104]. Careful processing and formulation of the pre-training data is nonetheless required to filter out noise, redundancy, irrelevance, and potentially toxic content [33, 62]. Fine-tuning Data. Numerous studies have underscored that fine-tuning – the process of refining pre-trained LLMs using a
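The kind of filtering mentioned above can be illustrated with a toy pass over a corpus; the heuristics, thresholds, and blocklist terms below are illustrative assumptions only and do not reflect Data-Juicer's actual built-in operators.

```python
# Toy pre-training-data filter: drop very short, highly repetitive, or
# blocklisted documents. Thresholds and blocklist terms are illustrative only.
BLOCKLIST = {"<toxic-term-1>", "<toxic-term-2>"}   # hypothetical placeholder terms

def keep_document(text: str, min_words: int = 20, max_repeat_ratio: float = 0.3) -> bool:
    words = text.split()
    if len(words) < min_words:
        return False                          # too short to be useful
    repeat_ratio = 1.0 - len(set(words)) / len(words)
    if repeat_ratio > max_repeat_ratio:
        return False                          # heavily repetitive / boilerplate
    if any(term in text for term in BLOCKLIST):
        return False                          # potentially toxic content
    return True

corpus = [
    "a short line",
    "word " * 50,
    "Large language models are trained on diverse high quality corpora drawn from "
    "web pages books code and encyclopedias after careful cleaning and deduplication "
    "steps are applied",
]
cleaned = [doc for doc in corpus if keep_document(doc)]
print(len(cleaned))   # only the long, varied document survives
```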
2309.02033#19
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
19
Table 1: Conceptual diagram illustrating how prompting methods manipulate the input string before generating completions. Q = question, A = answer, O = observation, C = critique, and ∼∼∼▸ denotes sampling from a stochastic production. These pre-processing manipulations – which can employ other models such as vision-language models (VLMs), or even the LLM itself – can be seen as productions. Prompting methods thus define a sequence of productions. # 3.1 Language models as probabilistic production systems In their original instantiation, production systems specified the set of strings that could be generated from a starting point, breaking this process down into a series of string rewriting operations. Language models also define a possible set of expansions or modifications of a string – the prompt provided to the model.3
2309.02427#19
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
20
toxic [33, 62]. Fine-tuning Data. Numerous studies have underscored that fine-tuning – the process of refining pre-trained LLMs using a smaller, task-specific dataset – can further enhance or unlock additional capabilities of LLMs [40, 53, 97, 98]. Crucially, this process also paves the way for better aligning the behavior of these advanced models with human values and preferences [60, 68].
2309.02033#20
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
20
For example, we can formulate the problem of completing a piece of text as a production. If X is the prompt and Y the continuation, then we can write this as the production X → X Y .4 We might want to allow multiple possible continuations, in which case we have X → X Yi for some set of Yi. LLMs assign a probability to each of these completions. Viewed from this perspective, the LLM defines a probability distribution over which productions to select when presented with input X, yielding a distribution P (Yi|X) over possible completions (Dohan et al., 2022). LLMs can thus be viewed as probabilistic production systems that sample a possible completion each time they are called, e.g., X ∼∼▸ X Y .
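A tiny numerical sketch of this view: the model defines a distribution P(Y_i | X) over candidate continuations, and sampling applies the production X ∼∼▸ X Y. The candidate continuations and probabilities below are invented for illustration, not produced by any actual LLM.

```python
# Sketch: an LLM as a probabilistic production system. Given prompt X, it defines
# P(Y_i | X) over continuations; sampling applies the production X ~~> X Y.
import random

def sample_production(prompt: str, continuations: dict[str, float]) -> str:
    ys = list(continuations)
    ps = list(continuations.values())
    y = random.choices(ys, weights=ps, k=1)[0]   # Y ~ P(Y | X)
    return prompt + y                             # rewrite X -> X Y

p_y_given_x = {" Paris.": 0.85, " Lyon.": 0.10, " Marseille.": 0.05}  # made-up values
print(sample_production("The capital of France is", p_y_given_x))
```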
2309.02427#20
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
21
In this phase, though the data volume shrinks by orders of magnitude compared to the pre-training phase, the format of fine-tuning data is quite different [73]. Typically, given a textual dataset $\{(x_1, s_1, y_1), \ldots, (x_j, s_j, y_j), \ldots, (x_m, s_m, y_m)\}$, the goal of fine-tuning is to adjust the pre-trained LLM $\theta_0$ to find $\theta^*$ that maximizes the likelihood of the task-oriented response $y_j$ for the user query $x_j$: $\theta^* = \arg\max_{\theta} \sum_{j=1}^{m} \log p(y_j \mid x_j, s_j; \theta), \quad \theta \leftarrow \theta_0$. (2) Here $s_j$ stands for task-specific instructions, such as “summarize the following text: ”, optionally accompanied by a few demonstrative samples for in-context learning [9].
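As a minimal sketch of Eq. (2), assuming a generic causal-LM setup: the loss below is the negative log-likelihood of the response tokens $y_j$ only, with the instruction $s_j$ and query $x_j$ positions masked out. Shapes, ids, and the masking convention are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of the fine-tuning objective (Eq. 2): maximize log p(y_j | x_j, s_j; theta),
# starting from the pre-trained theta_0. Prompt tokens (s_j, x_j) are masked out of
# the loss so only the response y_j contributes. Shapes and ids are placeholders.
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """logits: [seq_len, vocab]; input_ids = tokens of (s_j + x_j + y_j)."""
    pred, target = logits[:-1], input_ids[1:].clone()
    target[: prompt_len - 1] = -100          # ignore instruction + query positions
    return F.cross_entropy(pred, target, ignore_index=-100)

vocab, prompt_len, resp_len = 50, 8, 5
input_ids = torch.randint(0, vocab, (prompt_len + resp_len,))
logits = torch.randn(prompt_len + resp_len, vocab)   # stand-in for theta(input_ids)
print(float(sft_loss(logits, input_ids, prompt_len)))
```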
2309.02033#21
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
21
This probabilistic form offers both advantages and disadvantages compared to traditional production systems. The primary disadvantage of LLMs is their inherent opaqueness: while production systems are defined by discrete and human-legible rules, LLMs consist of billions of uninterpretable parameters. This opaqueness – coupled with inherent randomness from their probabilistic formulation – makes it challenging to analyze or systematically control their behaviors (Romero et al., 2023; Valmeekam et al., 2022). Nonetheless, their scale and pre-training provide massive advantages over traditional production systems. LLMs pre-trained on large-scale internet data learn a remarkably effective prior over string completions, allowing them to solve a wide range of tasks out of the box (Huang et al., 2022b). # 3.2 Prompt engineering as control flow The weights of an LLM define a prioritization over output strings (completions), conditioned by the input string (the prompt). The resulting distribution can be interpreted as a task-specific prioritization of productions – in other words, a simple control flow. Tasks such as question answering can be formulated directly as an input string (the question), yielding conditional distributions over completions (possible answers).
2309.02427#21
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
22
The fine-tuning data can be broadly categorized into two types: Instruct Fine-Tuning (IFT) datasets, which enhance the instruction-following abilities of LLMs and are usually adapted from existing NLP benchmarks [4, 61]; and Chat Fine-Tuning (CFT) datasets, which improve dialogue ability and human value alignment [70, 92]. There are preliminary explorations emphasizing the importance of data diversity over volume for fine-tuning data [20, 95]. Several studies also indicate that data types representing human values can potentially lead to degraded general performance, a phenomenon known as the “alignment tax” [70]. However, how to more effectively process the fine-tuning data to maximize its usefulness and minimize potential risks remains an open area for further investigation.
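For concreteness, an IFT example is typically stored as an (instruction, optional input, response) record, while a CFT example is a multi-turn conversation. The field names below follow a common open-source convention and are assumptions, not a format mandated by Data-Juicer.

```python
# Illustrative record layouts for the two fine-tuning data types.
ift_example = {
    "instruction": "Summarize the following text:",
    "input": "Large language models are trained on massive corpora ...",
    "output": "LLMs are trained on large text corpora.",
}

cft_example = {
    "conversations": [
        {"role": "user", "content": "How should I start learning Python?"},
        {"role": "assistant", "content": "Begin with the official tutorial and small scripts."},
        {"role": "user", "content": "Any project ideas?"},
        {"role": "assistant", "content": "Try a to-do CLI or a simple web scraper."},
    ]
}
```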
2309.02033#22
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
22
Early work on few-shot learning (Brown et al., 2020) and prompt engineering (Wei et al., 2022b; Kojima et al., 2022; Xu et al., 2023c) found that the LLM could be further biased towards high-quality productions
Footnote 3: In this work, we focus on autoregressive LLMs, which are typically used for language agents. However, bidirectional LLMs such as BERT (Devlin et al., 2019) can be seen in a similar light: they define a distribution over in-filling productions.
Footnote 4: Alternatively, we can treat the prompt as input and take the output of the LLM as the next state, represented by the production X → Y – a more literal form of rewriting.
[Figure 3, panels A–D: the basic structure of an LLM call (prompt construction, LLM call, string parsing, answer/act), prompt-chaining methods (Self-Critique, Selection-Inference), and interactive language agents (Inner Monologue, ReAct).]
2309.02427#22
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
23
The Symbiotic Nature of Pre-training and Fine-tuning Data. It is worth pointing out the analogous properties shared between these two types of data, which motivate our synergetic approach to quality, diversity, and volume considerations. Specifically, the quality aspect of the text has been studied extensively in existing literature [62]. Efforts have been made to enhance aspects such as text structure, the soundness of arguments, contextual richness, writing correctness, comprehensiveness, levels of anonymization, and harmlessness. The widespread implementation of cleaning, deduplication, and anonymization processes in pre-training data typifies this pursuit. For example, researchers may opt to iterate over additional epochs with Wikipedia-style data in LLM training [93]. Similarly, fine-tuning data processing also employs filtering, deduplication, and detoxification strategies, aiming to enhance the user experience and the degree of aid offered by LLMs [17, 33].
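A minimal illustration of the deduplication step mentioned above, using exact hashing over normalized text; real pipelines typically add fuzzy near-duplicate detection (e.g., MinHash/LSH), and nothing here reflects Data-Juicer's specific operator implementations.

```python
# Toy exact deduplication over normalized text (not Data-Juicer's operator code).
import hashlib

def normalize(text: str) -> str:
    return " ".join(text.lower().split())    # collapse whitespace, ignore case

def deduplicate(docs: list[str]) -> list[str]:
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha1(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

docs = ["Hello   world", "hello world", "A different document"]
print(deduplicate(docs))   # the second doc is an exact duplicate after normalization
```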
2309.02033#23
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
23
Figure 3: From language models to language agents. A: Basic structure of an LLM call. Prompt construction selects a template and populates it with variables from working memory. After calling the LLM, the string output is parsed into an action space and executed. An LLM call may result in one or more actions – for example, returning an answer, calling a function, or issuing motor commands. B: Prompt chaining techniques such as Self-Critique (Wang et al., 2022b) or Selection-Inference (Creswell et al., 2023) use a pre-defined sequence of LLM calls to generate an output. C: Language agents such as Inner Monologue (Huang et al., 2022c) and ReAct (Yao et al., 2022b) instead use an interactive feedback loop with the external environment. Vision-language models (VLMs) can be used to translate perceptual data into text for the LLM to process.
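A skeletal rendering of panel A (prompt construction → LLM call → string parsing → action execution), with every function stubbed out; this is an illustrative reading of the figure, not code from the CoALA paper.

```python
# Skeleton of a single LLM call as in Figure 3A: build a prompt from a template and
# working memory, call the LLM, parse the string into an action, execute it.
# All functions are illustrative stubs.
def construct_prompt(template: str, working_memory: dict) -> str:
    return template.format(**working_memory)

def call_llm(prompt: str) -> str:
    return "Answer: 42"                        # stand-in for a real model call

def parse_action(output: str) -> tuple[str, str]:
    kind, _, payload = output.partition(":")   # e.g. "Answer: ..." or "Act: ..."
    return kind.strip().lower(), payload.strip()

def execute(action: tuple[str, str]) -> None:
    kind, payload = action
    print(f"executing {kind!r} with payload {payload!r}")

memory = {"question": "What is six times seven?"}
prompt = construct_prompt("Q: {question}\nA:", memory)
execute(parse_action(call_llm(prompt)))
```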
2309.02427#23
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
24
Diversity is another shared property studied at length in both types of data. Mixing various types of data and finding suitable mixture weights to achieve appropriate diversity has been a primary concern in works on pre-training data processing [103]. Analogously, efforts for fine-tuning data aim to increase multi-view diversity, such as tuning tasks and expression styles, which further underscores this shared property [70, 77, 92]. In addition, the pursuit of quality and diversity tends to trade off with data volume, which is also reflected in these two types of data. Researchers have incessantly strived to empower LLMs with massive amounts of data, hoping to encapsulate as much human knowledge as possible. For instance, pre-training data volumes have surged to terabyte levels [69, 71], and fine-tuning data volumes have grown from mere thousands to millions of examples [4, 96]. However, these initiatives also bring side effects into such large volumes of data, including heightened noise, potentially inferior quality, and increased bias, which necessitate additional data processing efforts and larger LLM training overheads.
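The mixture-weight idea can be illustrated by sampling training documents from sources in proportion to chosen weights; the source names, weights, and documents below are invented for the example and do not correspond to any published data recipe.

```python
# Sketch: sampling pre-training documents according to data-recipe mixture weights.
import random

sources = {
    "web":       (0.60, ["web doc 1", "web doc 2"]),
    "books":     (0.25, ["book passage 1"]),
    "code":      (0.10, ["code snippet 1"]),
    "wikipedia": (0.05, ["wiki article 1"]),
}

def sample_batch(n: int) -> list[str]:
    names = list(sources)
    weights = [sources[s][0] for s in names]
    batch = []
    for _ in range(n):
        src = random.choices(names, weights=weights, k=1)[0]
        batch.append(random.choice(sources[src][1]))   # draw a document from that source
    return batch

print(sample_batch(5))
```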
2309.02033#24
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
24
by pre-processing the input string. These simple manipulations – typically concatenating additional text to the input – can themselves be seen as productions, meaning that these methods define a sequence of productions (Table 1). Later work extended these approaches to dynamic, context-sensitive prompts: for example, selecting few-shot examples that are maximally relevant to the input (Liu et al., 2021) or populating a template with external observations from video (Zeng et al., 2022) or databases (Lewis et al., 2020). For a survey of such prompting techniques, see Liu et al. (2023c). Subsequent work used the LLM itself as a pre-processing step, eliciting targeted reasoning to foreground a particular aspect of the problem (Bai et al., 2022; Jin et al., 2022; Ganguli et al., 2023; Madaan et al., 2023; Saunders et al., 2022; Kim et al., 2023; Kirk et al., 2023) or generate intermediate reasoning steps (Tafjord et al., 2021; Creswell et al., 2023; Yao et al., 2023) before returning an answer. Chaining multiple calls to an LLM (Wu et al., 2022a;b; Dohan et al., 2022) allows for increasingly complicated algorithms (Fig. 3). # 3.3 Towards cognitive language agents
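A minimal sketch of the two ideas in this passage: selecting few-shot examples that are relevant to the current input, and chaining a pre-processing LLM call before the answering call. The `call_llm` function and the string-similarity measure are placeholders (the cited work uses learned relevance, not `SequenceMatcher`), so this is an illustration of the pattern rather than any particular system's API.

```python
from difflib import SequenceMatcher

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with an actual client."""
    return f"<llm output for {len(prompt)}-char prompt>"

EXAMPLE_POOL = [
    ("What is 2+2?", "4"),
    ("Capital of France?", "Paris"),
    ("Opposite of hot?", "cold"),
]

def select_examples(query: str, k: int = 2):
    """Dynamic prompting: pick the k pool examples most similar to the query."""
    scored = sorted(
        EXAMPLE_POOL,
        key=lambda ex: SequenceMatcher(None, query, ex[0]).ratio(),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in select_examples(query))
    # First call: elicit intermediate reasoning (a simple two-step chain).
    reasoning = call_llm(f"{shots}\n\nThink step by step about: {query}")
    # Second call: condition the final answer on that reasoning.
    return call_llm(f"{shots}\n\nReasoning: {reasoning}\nQ: {query}\nA:")

print(answer("Capital of Italy?"))
```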
2309.02427#24
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
25
2.2 Existing LLM Data Processing Solutions LLM data processing is an early area that is still working towards common standards, and we aim to provide a pioneering system for the community. With a commitment to the open-source ethos, Data-Juicer caters to the increasing demand for versatile, flexible, user-friendly and efficient data processing solutions, details of which will be described later. This contrasts with the well-known LLMs that were largely closed-source in data or data processing, such as the GPT derivatives [9, 18, 69, 84], LLaMA derivatives [16, 19, 89, 92, 93], and others [1, 15, 79, 102, 107]. While some progress has been made in the open-source LLM data processing landscape [4, 24, 51, 86], these efforts have not fully delivered the abstraction and breadth of functionalities that Data-Juicer aims to bring to the forefront of the field.
2309.02033#25
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
25
# 3.3 Towards cognitive language agents Language agents move beyond pre-defined prompt chains and instead place the LLM in a feedback loop with the external environment (Fig. 1B). These approaches first transform multimodal input into text and pass it to the LLM. The LLM’s output is then parsed and used to determine an external action (Fig. 3C). Early agents interfaced the LLM directly with the external environment, using it to produce high-level instructions based on the agent’s state (Ahn et al., 2022; Huang et al., 2022c; Dasgupta et al., 2022). Later work developed more sophisticated language agents that use the LLM to perform intermediate reasoning before selecting an action (Yao et al., 2022b). The most recent agents incorporate sophisticated learning strategies such as reflecting on episodic memory to generate new semantic inferences (Shinn et al., 2023) or modifying their program code to generate procedural knowledge (Wang et al., 2023a), using their previous experience to adapt their future behaviors. These cognitive language agents employ nontrivial LLM-based reasoning and learning (Fig. 1C). Just as cognitive architectures were used to structure production systems’ interactions with agents’ internal state and external environments, we suggest that they can help design LLM-based cognitive agents. In the remainder of the paper, we use this perspective to organize existing approaches and highlight promising extensions.
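A minimal sketch of the feedback loop described here: the environment's observation is rendered as text, the LLM proposes an action, the output is parsed into an executable action, and the loop repeats. All names (`call_llm`, the toy environment) are illustrative placeholders, not any agent's actual interface.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM; here it just echoes a fixed action."""
    return "action: move_forward"

class ToyEnv:
    """Illustrative environment with textual observations."""
    def __init__(self):
        self.position = 0
    def observe(self) -> str:
        return f"You are at position {self.position}."
    def step(self, action: str) -> None:
        if action == "move_forward":
            self.position += 1

def parse_action(llm_output: str) -> str:
    # Parse the LLM's free-form output into an executable action name.
    return llm_output.split("action:")[-1].strip()

env = ToyEnv()
for _ in range(3):  # agent-environment feedback loop
    obs = env.observe()                     # input rendered as text
    out = call_llm(f"Observation: {obs}\nWhat do you do?")
    env.step(parse_action(out))             # parsed output -> external action
print(env.observe())
```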
2309.02427#25
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
26
they have not fully delivered the abstraction and breadth of functionalities that Data-Juicer aims to bring to the forefront of the field. Examining this from the perspective of the target datasets, existing works typically fixate on specific data sources and use cases for LLMs, spanning alignment of specialized English sub-datasets for LLaMA pre-training [93], assembly of multi-lingual corpora for pre-training [51], or crowdsourcing for fine-tuning prompt data [4]. However, they lack the systematic and modular processing abilities required to proficiently manage heterogeneous data, which is an area where Data-Juicer strives to push the boundaries. These limitations become especially apparent when handling new data types, engaging in language transfer, or implementing particular data cleaning and transformations for LLM applications.
2309.02033#26
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
26
[Figure 4 schematic omitted: panel A depicts procedural, semantic, and episodic memory modules around an LLM-based agent with prompt/parse, retrieval/learning, and a decision procedure over working memory; panel B depicts the cycle of planning (proposal, evaluation, selection), execution, actions, and observations.] Figure 4: Cognitive architectures for language agents (CoALA). A: CoALA defines a set of interacting modules and processes. The decision procedure executes the agent’s source code. This source code consists of procedures to interact with the LLM (prompt templates and parsers), internal memories (retrieval and learning), and the external environment (grounding). B: Temporally, the agent’s decision procedure executes a decision cycle in a loop with the external environment. During each cycle, the agent uses retrieval and reasoning to plan by proposing and evaluating candidate learning or grounding actions. The best action is then selected and executed. An observation may be made, and the cycle begins again. # 4 Cognitive Architectures for Language Agents (CoALA): A Conceptual Framework
2309.02427#26
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
27
Moreover, existing works suffer from sub-optimal usability and limited ability to explore data insights. Most of these works only offer the processed data along with purpose-built processing codes specific to those data, lacking in ease-of-use considerations and support of assistive tool-kits. This hinders their adaptability to diverse users and alternative usages. Users might face a daunting task when substituting data processing goals or conducting analyses due to a dearth of complementary data-analytical capabilities. The re-development of data processing tools and analytical methodologies, specifically tailored for LLMs, remains largely uncharted territory. Furthermore, the focus of current works gravitates towards functionality rather than system performance, leaving large room for enhancement in efficiency, space management and scalability. Noteworthy shortcomings include reliance on single-machine Python scripts, inappropriate handling of large-scale data, and poor processing speeds due to the utilization of Python’s plain dict object. We will provide further empirical comparisons in terms of both the quality of the generated data recipes (Sec. 7.1) and the performance of the data processing system (Sec. 7.2).
2309.02033#27
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
27
# 4 Cognitive Architectures for Language Agents (CoALA): A Conceptual Framework We present Cognitive Architectures for Language Agents (CoALA) as a framework to organize existing language agents and guide the development of new ones. CoALA positions the LLM as the core component of a larger cognitive architecture (Figure 4). Under CoALA, a language agent stores information in memory modules (Section 4.1), and acts in an action space structured into external and internal parts (Figure 5): • External actions interact with external environments (e.g., control a robot, communicate with a human, navigate a website) through grounding (Section 4.2). • Internal actions interact with internal memories. Depending on which memory gets accessed and whether the access is read or write, internal actions can be further decomposed into three kinds: retrieval (read from long-term memory; Section 4.3), reasoning (update the short-term working memory with LLM; Section 4.4), and learning (write to long-term memory; Section 4.5).
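One way to make CoALA's structured action space concrete is as a small type hierarchy. The sketch below is our own illustration of the taxonomy described above (grounding plus retrieval, reasoning, and learning), not code from the paper; class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Base class for actions a CoALA-style agent can choose."""
    argument: str

# External action: affects the outside world through grounding.
class Grounding(Action): ...

# Internal actions: interact with the agent's own memories.
class Retrieval(Action): ...   # read from long-term memory
class Reasoning(Action): ...   # update working memory via the LLM
class Learning(Action): ...    # write to long-term memory

def is_internal(action: Action) -> bool:
    return isinstance(action, (Retrieval, Reasoning, Learning))

print(is_internal(Retrieval("recall the last episode")))    # True
print(is_internal(Grounding("say hello to the user")))      # False
```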
2309.02427#27
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
28
3 STANDARDIZED OPERATOR POOL In addressing the heterogeneity of data recipes for LLMs (Challenge 1 in Sec. 1), we devise a standardized operator (OP) pool. As outlined in Table 1, the OPs are organized into four primary categories: Formatters, Mappers, Filters, and Deduplicators, which incorporate diverse categories, functions, inputs, processing levels, outputs, and application scenarios. Core principles of decoupling and composability guide their structuring, resulting in a varied yet standard set of procedures that contribute to flexibility and user interaction at multiple processing levels. This strategic implementation enhances reusability and reduces complexity, aiding streamlined and decoupled data recipe construction. 3.1 Unified Data Representation We first introduce Formatter OPs designed to unify diverse data sources into an intermediate data representation. Specifically, we choose to build Data-Juicer upon Huggingface-datasets [55] due to its compatibility with mainstream LLM datasets and its column-oriented storage ability backed by Apache Arrow [2]. Our Formatters maintain data objects that are instantiated from several unified base classes that simplify the process design for follow-up OPs and facilitate data access efficiency. We support numerous text input
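The composability of the OP categories can be sketched as a tiny pipeline of classes. This is an illustrative sketch under assumed method names, not Data-Juicer's actual class hierarchy or interfaces.

```python
class Mapper:
    """In-place text editing: sample in, edited sample out (sketch)."""
    def process(self, sample: dict) -> dict:
        sample["text"] = sample["text"].strip()
        return sample

class Filter:
    """Conditional removal: sample in, keep/drop decision out (sketch)."""
    def process(self, sample: dict) -> bool:
        return len(sample["text"]) > 15

class Deduplicator:
    """Duplication removal over a whole dataset (hash-based sketch)."""
    def process(self, samples: list[dict]) -> list[dict]:
        seen, kept = set(), []
        for s in samples:
            h = hash(s["text"])
            if h not in seen:
                seen.add(h)
                kept.append(s)
        return kept

data = [{"text": "  a short line  "},
        {"text": "a sufficiently long line"},
        {"text": "a sufficiently long line"}]
data = [Mapper().process(s) for s in data]            # edit every sample
data = [s for s in data if Filter().process(s)]       # drop the short one
data = Deduplicator().process(data)                   # drop the duplicate
print(data)
```

Because each OP exposes a single, narrow method, new recipes can be assembled by reordering or swapping OPs without touching the others, which is the decoupling and composability principle described above.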
2309.02033#28
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
28
Language agents choose actions via decision-making, which follows a repeated cycle (Section 4.6, Figure 4B). In each cycle, the agent can use reasoning and retrieval actions to plan. This planning subprocess selects a grounding or learning action, which is executed to affect the outside world or the agent’s long-term memory. CoALA’s decision cycle is analogous to a program’s “main” procedure (a method without return values, as opposed to functions) that runs in loops continuously, accepting new perceptual input and calling various action procedures in response. CoALA (Figure 4) is inspired by the decades of research in cognitive architectures (Section 2.3), leveraging key concepts such as memory, grounding, learning, and decision-making. Yet the incorporation of an LLM leads to the addition of “reasoning” actions, which can flexibly produce new knowledge and heuristics for various purposes – replacing hand-written rules in traditional cognitive architectures. It also makes text the de facto internal representation, streamlining agents’ memory modules. Finally, recent advances in vision-language Figure 5: Agents’ action spaces can be divided into internal memory accesses and external interactions with the world. Reasoning and retrieval actions are used to support planning.
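A sketch of the decision cycle as a continuously running "main" loop: in each cycle the agent plans by proposing and evaluating candidate actions, selects one, executes it, and observes the result. The function names are illustrative and `call_llm` is a placeholder for a real model call.

```python
def call_llm(prompt: str) -> str:
    """Placeholder LLM used here for scoring candidate actions."""
    return "0"  # pretend the model always prefers the first candidate

def propose(observation: str) -> list[str]:
    # Planning: reasoning/retrieval actions would normally generate candidates.
    return ["reply_to_user", "search_memory", "do_nothing"]

def evaluate(observation: str, candidates: list[str]) -> int:
    # Ask the LLM which candidate looks best (index into the list).
    answer = call_llm(f"Obs: {observation}\nOptions: {candidates}\nBest index?")
    return int(answer) if answer.isdigit() else 0

def execute(action: str) -> str:
    # Grounding or learning action; returns the next observation.
    return f"executed {action}"

observation = "user said hello"
for _ in range(2):  # the continuously running decision loop
    candidates = propose(observation)           # proposal
    best = evaluate(observation, candidates)    # evaluation
    observation = execute(candidates[best])     # selection + execution
    print(observation)
```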
2309.02427#28
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
29
Table 1: Overview of the operator (OP) pool in Data-Juicer, with a detailed list continuously maintained at the official documentation: https://github.com/alibaba/data-juicer/blob/main/docs/Operators.md.

| Category | Function | Input | Process Level | Output | OP Usage Examples |
| --- | --- | --- | --- | --- | --- |
| Formatters | Data format unifying | Dataset | Dataset | Dataset | Load and unify dataset-hub, txt, json, md, codes, html, pdf, docx, ... |
| Mappers | In-place text editing | Sample | Single-sample; Multi-samples | Sample; Samples | Transform specified headers, textual elements; Fix messy codes; Enable text enhancement |
| Filters | Conditional text removing | Sample | Single-sample; Dataset | Boolean | Filter by meta-info, stats (e.g., lines count); model scores; external resources (e.g., flagged words) |
| Deduplicators | Duplication removing | Single or Paired Dataset | Dataset | Dataset | Compare with hash-based and vector-based deduplication methods |
2309.02033#29
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
29
Figure 5: Agents’ action spaces can be divided into internal memory accesses and external interactions with the world. Reasoning and retrieval actions are used to support planning. models (VLMs; Alayrac et al., 2022) can simplify grounding by providing a straightforward translation of perceptual data into text (Zeng et al., 2022). The rest of this section details key concepts in CoALA: memory, actions (grounding, reasoning, retrieval, and learning), and decision-making. For each concept, we use existing language agents (or relevant NLP/RL methods) as examples – or note gaps in the literature for future directions. # 4.1 Memory Language models are stateless: they do not persist information across calls. In contrast, language agents may store and maintain information internally for multi-step interaction with the world. Under the CoALA framework, language agents explicitly organize information (mainly textual, but other modalities are also allowed) into multiple memory modules, each containing a different form of information. These include short-term working memory and several long-term memories: episodic, semantic, and procedural.
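The four memory modules can be sketched as simple containers. This is an illustrative structure under assumed names (naive keyword retrieval stands in for whatever retrieval mechanism a real agent uses), not the paper's implementation.

```python
class LongTermMemory:
    """Append-only store with naive keyword retrieval (sketch)."""
    def __init__(self):
        self.items: list[str] = []
    def write(self, item: str) -> None:
        self.items.append(item)
    def retrieve(self, query: str) -> list[str]:
        return [i for i in self.items if query.lower() in i.lower()]

class AgentMemory:
    def __init__(self):
        self.working: dict = {}              # short-term, per decision cycle
        self.episodic = LongTermMemory()     # past experiences
        self.semantic = LongTermMemory()     # knowledge about the world
        self.procedural = LongTermMemory()   # skills, prompts, or code

mem = AgentMemory()
mem.episodic.write("Episode 1: the user asked about the weather.")
mem.semantic.write("Paris is the capital of France.")
mem.working["goal"] = "answer the user's question"
print(mem.episodic.retrieve("weather"))
```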
2309.02427#29
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
30
formats - txt, JSON, parquet, html, md, pdf, code files such as .py and .cpp, amongst others - and homogenize them into a structured format composed of certain columns with nested access support, which are conceptually organized into three primary parts: “text”, “meta”, and “stats”. These parts respectively hold the raw textual data, metadata information (e.g., date and version), and statistical data that can be generated and consumed by Data-Juicer’s other OPs and tools. This interface works at either the text sample or dataset level, and is independent of the underlying in-memory or disk data layout, alleviating OP developers’ potential concerns about heterogeneous data formats.
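A sketch of the unified sample layout described here: each sample carries "text", "meta", and "stats" parts that downstream OPs can read and extend. The field contents and the helper function are made up for illustration and do not reflect Data-Juicer's exact schema.

```python
# One sample in the unified intermediate representation (illustrative values).
sample = {
    "text": "Data recipes mix heterogeneous sources for LLM training.",
    "meta": {"source": "example.md", "date": "2023-09-05", "version": "v1"},
    "stats": {},  # filled in by analyzer/Filter OPs downstream
}

def compute_basic_stats(s: dict) -> dict:
    """An OP-style pass that produces stats other OPs can consume."""
    s["stats"]["num_chars"] = len(s["text"])
    s["stats"]["num_words"] = len(s["text"].split())
    return s

print(compute_basic_stats(sample)["stats"])
```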
2309.02033#30
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
30
Working memory. Working memory maintains active and readily available information as symbolic variables for the current decision cycle (Section 4.6). This includes perceptual inputs, active knowledge (generated by reasoning or retrieved from long-term memory), and other core information carried over from the previous decision cycle (e.g., agent’s active goals). Previous methods encourage the LLM to generate intermediate reasoning (Wei et al., 2022b; Nye et al., 2021), using the LLM’s own context as a form of working memory. CoALA’s notion of working memory is more general: it is a data structure that persists across LLM calls. On each LLM call, the LLM input is synthesized from a subset of working memory (e.g., a prompt template and relevant variables). The LLM output is then parsed back into other variables (e.g., an action name and arguments) which are stored back in working memory and used to execute the corresponding action (Figure 3A). Besides the LLM, the working memory also interacts with long-term memories and grounding interfaces. It thus serves as the central hub connecting different components of a language agent.
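A minimal sketch of working memory as a data structure that persists across LLM calls: a prompt is synthesized from a subset of its variables, and the parsed output is written back into it. `call_llm` and the "key=value | key=value" output convention are placeholders, not any agent's actual format.

```python
def call_llm(prompt: str) -> str:
    """Placeholder LLM; pretends to choose an action with an argument."""
    return "action=say | argument=hello there"

working_memory = {
    "goal": "greet the user",
    "observation": "a user just joined the chat",
    "last_action": None,
}

PROMPT_TEMPLATE = "Goal: {goal}\nObservation: {observation}\nNext action?"

def synthesize_prompt(memory: dict) -> str:
    # Only a subset of working memory enters the prompt.
    return PROMPT_TEMPLATE.format(goal=memory["goal"],
                                  observation=memory["observation"])

def parse_output(output: str) -> dict:
    # Parse "key=value | key=value" back into working-memory variables.
    fields = dict(part.strip().split("=", 1) for part in output.split("|"))
    return {"last_action": fields.get("action"),
            "last_argument": fields.get("argument")}

working_memory.update(parse_output(call_llm(synthesize_prompt(working_memory))))
print(working_memory)
```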
2309.02427#30
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
31
It is noteworthy that the outputs of Filter OPs are Booleans, which helps to decouple the implementations of actual data processing and computation for various statistics. This dedicated segregation results in two key advantages. Firstly, it enables our dedicated analyzer-related tools (detailed in Sec. 5.2) to utilize these computed statistics for the entire dataset, rather than a filtered subset. Users are also allowed to generate fingerprints for specific partial samples. Secondly, this decoupling enhances compatibility between Huggingface-datasets and Data-Juicer, thereby enabling the efficient reuse of the Dataset.map and Dataset.filter interfaces to perform these sub-processes in a streamlined manner. As a result, users can effortlessly extend their own custom OPs that only vary from existing OPs in specific partial processing behaviors. In Appendix A.1, we offer an illustrative code example of this decoupling in Listing 1. 3.2 Versatile Data Processing Next, we elaborate on the functionality of the OP pool in Data-Juicer, which is pivotal to the comprehensive data processing tailored for LLMs. Besides the Formatters, which play an essential role in unifying data formats and ensuring a consistent and efficient data flow throughout the processing pipeline, we now give more details about the other three types of data-transformation OPs in Table 1.
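As a rough illustration of the Boolean decoupling described in this chunk, the following minimal Python sketch separates statistic computation from the filtering decision and reuses the Hugging Face Dataset.map and Dataset.filter interfaces; the WordCountFilter class, its method names, and the "stats.num_words" field are hypothetical and do not reflect Data-Juicer's actual OP API.

from datasets import Dataset

class WordCountFilter:
    # Hypothetical Filter-style OP: the statistic and the Boolean decision are separate steps.
    def __init__(self, min_words=4):
        self.min_words = min_words

    def compute_stats(self, sample):
        # Writes the statistic into the sample; nothing is dropped here, so
        # analyzer-style tools can still inspect stats over the entire dataset.
        sample["stats.num_words"] = len(sample["text"].split())
        return sample

    def process(self, sample):
        # Pure Boolean decision based on the precomputed statistic.
        return sample["stats.num_words"] >= self.min_words

ds = Dataset.from_dict({"text": ["a short one", "a noticeably longer text sample here"]})
op = WordCountFilter(min_words=4)
ds = ds.map(op.compute_stats)   # stats pass, reusing Dataset.map
kept = ds.filter(op.process)    # Boolean pass, reusing Dataset.filter
print(kept["text"])             # only the longer sample remains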
2309.02033#31
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
31
Episodic memory. Episodic memory stores experience from earlier decision cycles. This can consist of training input-output pairs (Rubin et al., 2021), history event flows (Weston et al., 2014; Park et al., 2023), game trajectories from previous episodes (Yao et al., 2020; Tuyls et al., 2022), or other representations of the agent’s experiences. During the planning stage of a decision cycle, these episodes may be retrieved into working memory to support reasoning. An agent can also write new experiences from working to episodic memory as a form of learning (Section 4.5).
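A minimal sketch of the read/write episodic-memory behavior described above, assuming a toy word-overlap relevance score; the Episode and EpisodicMemory classes are illustrative inventions, not an interface from the paper.

from dataclasses import dataclass, field

@dataclass
class Episode:
    observation: str
    action: str
    outcome: str

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def write(self, episode):
        # Learning step: store a new experience from the current decision cycle.
        self.episodes.append(episode)

    def retrieve(self, query, k=3):
        # Toy relevance: number of words shared between the query and the stored observation.
        def score(ep):
            return len(set(query.lower().split()) & set(ep.observation.lower().split()))
        return sorted(self.episodes, key=score, reverse=True)[:k]

mem = EpisodicMemory()
mem.write(Episode("the kitchen drawer is locked", "inspect drawer", "it will not open"))
mem.write(Episode("a small key lies on the table", "take key", "key added to inventory"))
print([ep.action for ep in mem.retrieve("how can I open the locked drawer", k=1)])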
2309.02427#31
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
32
Mappers facilitate crucial functionalities of in-place text editing, necessary for single-sample or multi-sample processing across various needs of LLM data processing, such as modifying texts for pre-training and enhancing text diversity for fine-tuning. They effectively handle processing tasks like the removal of specific file headers, messy code rectification, and text enhancements. Filters come into play by conditionally filtering texts via individual-sample metrics, dataset-level statistics, or external resources like stop-word lists. In doing so, they can eliminate unnecessary text samples, contributing significantly to data focus, cleanliness, and reduced costs in follow-up LLM training. Deduplicators reduce potential storage waste and improve efficiency. As indicated by several studies [13, 47, 52], duplicate samples adversely affect both the pre-training stability and the performance of LLMs. Besides, Deduplicators help prevent unintentional leakage of training data into evaluation benchmarks, particularly for zero-shot or few-shot tasks [39]. To ensure accurate detection and removal of duplication, we provide efficient and robust methods including hash-based and vector-based comparisons [8, 14, 81].
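To make the hash-based comparison concrete, here is a minimal sketch of exact deduplication via MD5 fingerprints of lightly normalized text; it illustrates the idea only and is not the MinHash- or vector-based methods cited above.

import hashlib

def fingerprint(text):
    # Lightly normalize so trivially different copies collide on the same hash.
    normalized = " ".join(text.lower().split())
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def deduplicate(samples):
    seen, kept = set(), []
    for sample in samples:
        key = fingerprint(sample["text"])
        if key not in seen:
            seen.add(key)
            kept.append(sample)
    return kept

samples = [{"text": "Hello   world"}, {"text": "hello world"}, {"text": "something else"}]
print(len(deduplicate(samples)))  # -> 2: the first two samples share a fingerprint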
2309.02033#32
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
32
Semantic memory. Semantic memory stores an agent’s knowledge about the world and itself. Traditional NLP or RL approaches that leverage retrieval for reasoning or decision-making initialize semantic memory from an external database for knowledge support. For example, retrieval-augmented methods in NLP (Lewis et al., 2020; Borgeaud et al., 2022; Chen et al., 2017) can be viewed as retrieving from a semantic memory of unstructured text (e.g., Wikipedia). In RL, “reading to learn” approaches (Branavan et al., 2012; Narasimhan et al., 2018; Hanjie et al., 2021; Zhong et al., 2021) leverage game manuals and facts as a semantic memory to affect the policy. While these examples essentially employ a fixed, read-only semantic memory, language agents may also write new knowledge obtained from LLM reasoning into semantic memory as a form of learning (Section 4.5) to incrementally build up world knowledge from experience. Procedural memory. Language agents contain two forms of procedural memory: implicit knowledge stored in the LLM weights, and explicit knowledge written in the agent’s code. The agent’s code can be further
2309.02427#32
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
33
3.3 Composability Data-Juicer’s OPs serve as a testament to our system’s versatility. They enable users to effortlessly process a variety of data types in a composable and modular manner, showcasing Data-Juicer’s dedication to user adaptability and high-quality data production for LLMs. Besides the functions, inputs, outputs and processing levels summarized in Table 1, this composability is embedded in more facets, including the fields to be processed, OP hyper-parameters, and recommended use cases of each OP.
2309.02033#33
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
33
divided into two types: procedures that implement actions (reasoning, retrieval, grounding, and learning procedures), and procedures that implement decision-making itself (Section 4.6). During a decision cycle, the LLM can be accessed via reasoning actions, and various code-based procedures can be retrieved and executed. Unlike episodic or semantic memory that may be initially empty or even absent, procedural memory must be initialized by the designer with proper code to bootstrap the agent. Finally, while learning new actions by writing to procedural memory is possible (Section 4.5), it is significantly riskier than writing to episodic or semantic memory, as it can easily introduce bugs or allow an agent to subvert its designers’ intentions. # 4.2 Grounding actions Grounding procedures execute external actions and process environmental feedback into working memory as text. This effectively simplifies the agent’s interaction with the outside world as a “text game” with textual observations and actions. We categorize three kinds of external environments:
2309.02427#33
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
34
Each OP in Data-Juicer is designed to serve a distinct function and can be commanded by users to process different text fields. For example, OP A could process the sample field “text.abstract”, while OP B could focus on “text.main_body”. By default, each OP processes the “text” field, which can be freely switched to other “meta” or “stats” related data fields according to users’ needs. This adaptability allows for immense flexibility by simultaneously using OPs with different fields, enabling users to easily manipulate specific text snippets, such as removing GitHub code based on star counts. Moreover, these OPs establish a one-size-fits-all solution that encompasses a multitude of configurable parameters such as the number of tokens, filtering thresholds, auxiliary models, and much more. This adjustability of a single OP, amalgamated with the composability of OP pipelines, empowers Data-Juicer to manage a spectrum of input, output, and processing granularity, contributing to its powerful processing abilities.
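The per-OP field targeting and hyper-parameters described here can be pictured with the following toy sketch; the recipe keys, OP names, and helper functions are invented for illustration and are not Data-Juicer's actual schema.

recipe = [
    {"op": "strip_marker_mapper", "text_key": "text.main_body"},
    {"op": "word_num_filter", "text_key": "text.abstract", "min_words": 20},
]

def strip_marker_mapper(sample, text_key):
    # Mapper-style OP: in-place edit of the targeted field.
    sample[text_key] = sample[text_key].replace("## HEADER ##", "").strip()
    return sample

def word_num_filter(sample, text_key, min_words):
    # Filter-style OP: Boolean decision on the targeted field.
    return len(sample[text_key].split()) >= min_words

OPS = {"strip_marker_mapper": ("map", strip_marker_mapper),
       "word_num_filter": ("filter", word_num_filter)}

def run_recipe(samples, recipe):
    for cfg in recipe:
        kind, fn = OPS[cfg["op"]]
        params = {k: v for k, v in cfg.items() if k != "op"}
        if kind == "map":
            samples = [fn(dict(s), **params) for s in samples]
        else:
            samples = [s for s in samples if fn(s, **params)]
    return samples

data = [
    {"text.main_body": "## HEADER ## body", "text.abstract": "too short"},
    {"text.main_body": "clean body", "text.abstract": " ".join(["word"] * 25)},
]
print(len(run_recipe(data, recipe)))  # -> 1 sample survives the abstract-length filter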
2309.02033#34
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
34
Physical environments. Physical embodiment is the oldest instantiation envisioned for AI agents (Nilsson, 1984). It involves processing perceptual inputs (visual, audio, tactile) into textual observations (e.g., via pre-trained captioning models), and affecting the physical environments via robotic planners that take language-based commands. Recent advances in LLMs have led to numerous robotic projects (Ahn et al., 2022; Liang et al., 2023a; Singh et al., 2023; Palo et al., 2023; Ren et al., 2023) that leverage LLMs as a “brain” for robots to generate actions or plans in the physical world. For perceptual input, vision-language models are typically used to convert images to text (Alayrac et al., 2022; Sumers et al., 2023) providing additional context for the LLM (Driess et al., 2023; Huang et al., 2023; Brohan et al., 2022; 2023).
2309.02427#34
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
35
For usage combinations, OPs are labeled with typical usage scenarios. We maintain OP tags such as general usage, LaTeX source files, programming code, financial data processing, and language-specific processing (e.g., English and Chinese). These labels facilitate easy navigation and operation, underscoring our aim to blend simplicity with power in Data-Juicer’s architecture. 4 FEEDBACK-DRIVEN DATA PROCESSING Addressing Challenge 2 outlined in Sec. 1, we incorporate a dynamic feedback loop into the data processing pipeline, which allows users to process and understand data effectively via built-in visualization and automated tracking abilities. As demonstrated in Figure 2, our system (Data-Juicer) enables timely perception and swift iterative refinement of data recipes (indicated by the left and upward arrows) within a holistic feedback loop of LLM data processing and LLM training (indicated by the right arrows). Figure 2: The feedback loop of Data-Juicer.
2309.02033#35
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
35
Dialogue with humans or other agents. Classic linguistic interactions allow the agent to accept instructions (Winograd, 1972; Tellex et al., 2011; Chen and Mooney, 2011; Bisk et al., 2016) or learn from people (Nguyen et al., 2021; Sumers et al., 2022; 2021; Wang et al., 2016). Agents capable of generating language may ask for help (Ren et al., 2023; Nguyen et al., 2022b; 2019; Nguyen and Daumé III, 2019) or clarification (Biyik and Palan, 2019; Sadigh et al., 2017; Padmakumar et al., 2022; Thomason et al., 2020; Narayan-Chen et al., 2019) – or entertain or emotionally help people (Zhang et al., 2020; Zhou et al., 2018; Pataranutaporn et al., 2021; Hasan et al., 2023; Ma et al., 2023). Recent work also investigates interaction among multiple language agents for social simulation (Park et al., 2023; Jinxin et al., 2023; Gao et al., 2023), debate (Chan et al., 2023; Liang et al., 2023b; Du et al., 2023), improved safety (Irving et al., 2018), or collaborative task solving (Qian et al., 2023; Wu et al., 2023; Hong et al., 2023).
2309.02427#35
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
36
Figure 2: The feedback loop of Data-Juicer. We will discuss the modeling of the data processing feedback from a hyper-parameter optimization (HPO) perspective (Sec. 4.1), and go through the utility of the interactive visualization (Sec. 4.2) and the integration of ecosystems for LLM training and evaluations (Sec. 4.3). The synergy of these techniques offers an efficient and effective solution to debug and dive into LLM data processing. 4.1 HPO for Data Processing In Data-Juicer, we incorporate the concept of hyper-parameter optimization (HPO) into the data processing procedure. This is done by tying data-processing-specific hyper-parameters to a variety of feedback signals, including custom target metrics and visualization results. We enhance our system’s functionality by innovatively speeding up the data processing iteration through Checkpoint and Caching mechanisms, and by integrating an automated HPO tool.
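As a toy illustration of tying a data-processing hyper-parameter to a feedback signal, the sketch below grid-searches a filtering threshold against an invented quality proxy; the metric, corpus, and candidate values are assumptions for demonstration, not the paper's evaluation setup.

import statistics

corpus = [
    "short",
    "a somewhat longer sentence",
    "a much longer and more informative training sentence here",
    "tiny",
    "another reasonably informative training sentence",
]

def quality_proxy(kept):
    # Invented feedback signal: mean kept-sample length, zeroed if too little data survives.
    if len(kept) < 2:
        return 0.0
    return statistics.mean(len(s.split()) for s in kept)

best = None
for min_words in (1, 2, 4, 6):  # candidate values of the data-processing hyper-parameter
    kept = [s for s in corpus if len(s.split()) >= min_words]
    score = quality_proxy(kept)
    if best is None or score > best[1]:
        best = (min_words, score)
print("selected min_words =", best[0])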
2309.02033#36
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
36
Digital environments. This includes interacting with games (Hausknecht et al., 2020; Côté et al., 2019; Shridhar et al., 2020; Wang et al., 2022a; Liu et al., 2023d), APIs (Schick et al., 2023; Yao et al., 2022b; Parisi et al., 2022; Tang et al., 2023b), and websites (Shi et al., 2017; Nakano et al., 2021; Yao et al., 2022a; Zhou et al., 2023b; Gur et al., 2023; Deng et al., 2023) as well as general code execution (Yang et al., 2023; Le et al., 2022; Ni et al., 2023). Such digital grounding is cheaper and faster than physical or human interaction. It is thus a convenient testbed for language agents and has been studied with increasing intensity in recent years. In particular, for NLP tasks that require augmentation of external knowledge or computation, stateless digital APIs (e.g., search, calculator, translator) are often packaged as “tools” (Parisi et al., 2022; Schick et al., 2023; Xu et al., 2023a; Tang et al., 2023b; Qin et al., 2023), which can be viewed as special “single-use” digital environments. # 4.3 Retrieval actions
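As a rough sketch of packaging a stateless digital API as a single-use "tool" environment with textual actions and observations, consider the following; the Tool wrapper and the calculator example are illustrative assumptions, not an interface from any of the cited systems.

import ast, operator

class Tool:
    # Hypothetical wrapper presenting a stateless API as a text-in, text-out tool.
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def use(self, action_text):
        try:
            return f"[{self.name}] {self.fn(action_text)}"
        except Exception as exc:
            return f"[{self.name}] error: {exc}"

def safe_calculator(expression):
    # Evaluate only simple arithmetic expressions parsed from the textual action.
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

calculator = Tool("calculator", safe_calculator)
print(calculator.use("12 * (3 + 4)"))   # textual observation fed back into working memory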
2309.02427#36
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
37
4.1.1 Acceleration with Checkpoint and Caching. LLM data processing often necessitates frequent re-execution due to alterations in OP hyper-parameters and potential execution failures, such as exceeding available memory, disk, or pre-defined time limits, especially for massive datasets. Accordingly, we provide built-in checkpoint and caching management to foster resilient and reliable data processing. Based on a carefully organized directory structure, Data-Juicer automatically monitors every running process for configuration changes, and creates new files to safely store data and processing states only when any error or exception occurs. While the checkpoint preserves the whole dataset and processing state, enabling complete recovery of the processing site, the cache solely saves the dataset object for each OP and is more suited for smaller-scale adjustments as it reduces the overhead of pre-order caches. These techniques allow for a swift recovery during system restarts or failures, reverting to the most optimal recent processing state stored in the checkpoints, thus mitigating processing redundancy and increasing the feedback frequencies.
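The checkpoint-based recovery idea can be sketched roughly as follows, assuming one simple JSON state file per OP; the file layout, naming, and resume logic are illustrative and not Data-Juicer's actual implementation.

import json, os, tempfile

def save_checkpoint(ckpt_dir, step, samples):
    # One small JSON state file per completed OP.
    with open(os.path.join(ckpt_dir, f"step_{step:03d}.json"), "w") as f:
        json.dump({"step": step, "samples": samples}, f)

def load_latest_checkpoint(ckpt_dir):
    ckpts = sorted(f for f in os.listdir(ckpt_dir) if f.startswith("step_"))
    if not ckpts:
        return 0, None
    with open(os.path.join(ckpt_dir, ckpts[-1])) as f:
        state = json.load(f)
    return state["step"], state["samples"]

def run_pipeline(samples, ops, ckpt_dir):
    start, restored = load_latest_checkpoint(ckpt_dir)
    if restored is not None:
        samples = restored                          # resume from the most recent saved state
    for i in range(start, len(ops)):
        samples = [ops[i](s) for s in samples]
        save_checkpoint(ckpt_dir, i + 1, samples)   # persist state after each OP
    return samples

with tempfile.TemporaryDirectory() as d:
    print(run_pipeline(["  Hello ", " WORLD  "], [str.lower, str.strip], d))  # -> ['hello', 'world']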
2309.02033#37
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
37
# 4.3 Retrieval actions In CoALA, a retrieval procedure (Li et al., 2022a; Gu et al., 2018) reads information from long-term memories into working memory. Depending on the information and memory type, it could be implemented in various ways, e.g., rule-based, sparse, or dense retrieval. For example, Voyager (Wang et al., 2023a) loads code-based skills from a skill library via dense retrieval to interact with the Minecraft world – effectively retrieving grounding procedures from a procedural memory. Generative Agents (Park et al., 2023) retrieves relevant events from episodic memory via a combination of recency (rule-based), importance (reasoning-based), and relevance (embedding-based) scores. DocPrompting (Zhou et al., 2022a) proposes to leverage library documents to assist code generation, which can be seen as retrieving knowledge from semantic memory. While retrieval plays a key role in human decision-making (Zhou et al., 2023a; Zhao et al., 2022), adaptive and context-specific recall remains understudied in language agents. In Section 6, we suggest a principled integration of decision-making and retrieval as an important future direction. # 4.4 Reasoning actions
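A small sketch of the combined recency/importance/relevance retrieval score mentioned above for Generative Agents; the weights and the toy Jaccard relevance measure are illustrative stand-ins, not the original embedding-based scoring.

import math, time

def retrieval_score(memory, query, now, weights=(1.0, 1.0, 1.0)):
    recency = math.exp(-(now - memory["time"]) / 3600.0)             # exponential decay per hour
    importance = memory["importance"] / 10.0                          # assumed 1-10 rating
    q, m = set(query.lower().split()), set(memory["text"].lower().split())
    relevance = len(q & m) / max(1, len(q | m))                       # toy Jaccard overlap
    return weights[0] * recency + weights[1] * importance + weights[2] * relevance

now = time.time()
memories = [
    {"text": "had breakfast with Maria", "time": now - 7200, "importance": 3},
    {"text": "promised to review the project proposal today", "time": now - 600, "importance": 8},
]
best = max(memories, key=lambda m: retrieval_score(m, "which task should I work on next", now))
print(best["text"])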
2309.02427#37
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
38
or failures, reverting to the most recent optimal processing state stored in the checkpoints, thus mitigating processing redundancy and increasing the feedback frequency. Additionally, the proposed state-saving mechanism enables a flexible space-time trade-off at different feedback stages. Users have the option to save states after each OP in the data processing flow, ensuring minimal re-execution time at the cost of maximum storage overhead. Conversely, they could choose to only save the last OP's checkpoint and cache, incurring minimal storage overhead but increased re-execution time, especially when needing to revert to early steps in the process. To facilitate a good space-time trade-off, we further perform space complexity analysis for individual OPs, which aids in predicting peak space occupancy and guides us in determining how many checkpoints and caches to store based on available space. By default, Data-Juicer actively monitors disk space usage at the start of data processing, and automatically determines if, and when, checkpoints and cache should be deployed. User-specified saving frequencies and rules are also supported. Consequently, strategic checkpoint and cache management reinforces both the resilience and efficiency of the feedback loop for LLM data processing. The detailed space usage analysis can be found in Appendix A.2.
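To make the space-time trade-off above concrete, the short Python sketch below chooses which OPs to checkpoint given the free disk space. It is only an illustration, not Data-Juicer's actual interface; the function and parameter names (plan_checkpoints, est_ckpt_bytes, reserve_ratio) are hypothetical.

```python
import shutil

def plan_checkpoints(op_names, est_ckpt_bytes, ckpt_dir=".", reserve_ratio=0.2):
    """Hypothetical sketch: decide after which OPs to save checkpoints,
    given an estimated checkpoint size per OP and the free disk space.

    Checkpoints every OP while space allows; otherwise keeps only the last
    few OPs (minimal storage overhead, longer re-execution on rollback)."""
    free_bytes = shutil.disk_usage(ckpt_dir).free
    budget = int(free_bytes * (1 - reserve_ratio))  # keep a safety reserve

    if est_ckpt_bytes * len(op_names) <= budget:
        return list(op_names)              # space-rich: checkpoint every OP
    n_affordable = max(1, budget // est_ckpt_bytes)
    return list(op_names[-n_affordable:])  # space-poor: only the last OPs

# Example: a 4-OP flow with roughly 50 GB per checkpoint
ops = ["language_filter", "dedup", "quality_filter", "formatter"]
print(plan_checkpoints(ops, est_ckpt_bytes=50 * 2**30))
```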
2309.02033#38
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
38
# 4.4 Reasoning actions Reasoning allows language agents to process the contents of working memory to generate new information. Unlike retrieval (which reads from long-term memory into working memory), reasoning reads from and writes to working memory. This allows the agent to summarize and distill insights about the most recent observation (Yao et al., 2022b; Peng et al., 2023), the most recent trajectory (Shinn et al., 2023), or information retrieved from long-term memory (Park et al., 2023). Reasoning can be used to support learning (by writing the results into long-term memory) or decision-making (by using the results as additional context for subsequent LLM calls). # 4.5 Learning actions Learning occurs by writing information to long-term memory, which includes a spectrum of diverse procedures. Updating episodic memory with experience. It is common practice for RL agents to store episodic trajectories to update a parametric policy (Blundell et al., 2016; Pritzel et al., 2017) or establish a non- parametric policy (Ecoffet et al., 2019; Tuyls et al., 2022). For language agents, added experiences in episodic memory may be retrieved later as examples and bases for reasoning or decision-making (Weston et al., 2014; Rubin et al., 2021; Park et al., 2023).
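As a rough illustration of the episodic-memory learning and retrieval actions described above, the following sketch stores experiences and reads back the highest-scoring ones as in-context examples. The Episode and EpisodicMemory classes are invented for this example and do not correspond to any cited system's API.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    observation: str
    action: str
    outcome: str
    score: float = 0.0

@dataclass
class EpisodicMemory:
    """Illustrative episodic store: write experiences, read back exemplars."""
    episodes: list = field(default_factory=list)

    def add(self, episode: Episode) -> None:
        # learning action: write an experience into long-term episodic memory
        self.episodes.append(episode)

    def top_k(self, k: int = 3) -> list:
        # retrieval action: read back the best episodes for reasoning
        return sorted(self.episodes, key=lambda e: e.score, reverse=True)[:k]

memory = EpisodicMemory()
memory.add(Episode("kitchen has no dishwasher", "wash by hand", "success", 1.0))
examples = memory.top_k(1)  # later reused as in-context examples
```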
2309.02427#38
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
39
4.1.2 Auto-HPO. We incorporate an automated HPO tool1 into Data-Juicer to streamline the search for good data processing hyper-parameters. To reduce the search costs of different data recipes, we support leveraging advanced HPO algorithms such as Bayesian optimization [82], progressive early-stop strategies such as the Hyperband algorithm [56], and built-in LLM-oriented sampling strategies (detailed later in Sec. 5.2). Specifically, given a pre-defined target metric and search space of data recipes, users can conveniently explore the impact of specific data processing hyper-parameters. Here, we give an illustrative example as follows: Example of Data Mixing with HPO: Suppose we aim to find a good set of sampling weights for 𝑀 datasets to be mixed, where our search space is defined as 𝑤𝑖 ∈ [0, 1], 𝑖 ∈ [1, 𝑀]. The pipeline can be structured as follows: (1) We specify the target text fields across all 𝑀 datasets, and unify their meta-tags and names of text fields via Formatter OPs. (2) We leverage meta-tag Filters to
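The data-mixing HPO example can be sketched as a simple search loop over the mixture weights. The snippet below substitutes plain random search and a toy objective for the real backends (W&B Sweeps with Bayesian optimization or Hyperband) and for the exact target metric; all function and field names are assumptions.

```python
import random

def target_metric(weights, datasets):
    """Toy objective standing in for the (n/N + s) metric: reward token
    quantity and mean quality of the weighted mixture."""
    total_tokens = sum(d["tokens"] for d in datasets)
    mix_tokens = sum(w * d["tokens"] for w, d in zip(weights, datasets))
    mix_quality = (sum(w * d["quality"] for w, d in zip(weights, datasets))
                   / max(sum(weights), 1e-9))
    return mix_tokens / total_tokens + mix_quality

def random_search(datasets, trials=50, seed=0):
    """Stand-in for a real HPO backend (Bayesian optimization / Hyperband)."""
    rng = random.Random(seed)
    best_w, best_score = None, float("-inf")
    for _ in range(trials):
        w = [rng.uniform(0.0, 1.0) for _ in datasets]  # search space w_i in [0, 1]
        score = target_metric(w, datasets)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

datasets = [{"tokens": 1e9, "quality": 0.7}, {"tokens": 5e8, "quality": 0.9}]
weights, score = random_search(datasets)
print(weights, score)
```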
2309.02033#39
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
39
Updating semantic memory with knowledge. Recent work (Shinn et al., 2023; Park et al., 2023) has applied LLMs to reason about raw experiences and store the resulting inferences in semantic memory. For example, Reflexion (Shinn et al., 2023) uses an LLM to reflect on failed episodes and stores the results (e.g., “there is no dishwasher in kitchen”) as semantic knowledge to be attached to LLM context for solving later episodes. Finally, work in robotics (Chen et al., 2023a) uses vision-language models to build a semantic map of the environment, which can later be queried to execute instructions.
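A minimal sketch of this Reflexion-style semantic-memory update might look as follows, with llm_reflect standing in for an actual LLM call; it is illustrative only and not the paper's implementation.

```python
def reflect_and_store(llm_reflect, failed_trajectory, semantic_memory):
    """Illustrative Reflexion-style update: reason over a failed episode and
    store the distilled insight as semantic knowledge for later prompts."""
    insight = llm_reflect(
        f"Why did this episode fail, in one sentence?\n{failed_trajectory}"
    )
    semantic_memory.append(insight)  # e.g., "there is no dishwasher in kitchen"
    return semantic_memory

# usage with a dummy "LLM" callable
memory = reflect_and_store(
    lambda prompt: "there is no dishwasher in kitchen",
    "tried to load dishwasher; action failed",
    [],
)
print(memory)
```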
2309.02427#39
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02427
40
Updating LLM parameters (procedural memory). The LLM weights represent implicit procedural knowledge. These can be adjusted to an agent's domain by fine-tuning during the agent's lifetime. Such fine-tuning can be accomplished via supervised (Liu et al., 2023b; Zhang et al., 2023b) or imitation learning (Hussein et al., 2017), reinforcement learning (RL) from environment feedback (Sutton and Barto, 2018), human feedback (RLHF; Christiano et al., 2017; Ouyang et al., 2022; Nakano et al., 2021), or AI feedback (Bai et al., 2022; Liu et al., 2023e). Classic LLM self-improvement methods (Huang et al., 2022a; Zelikman et al., 2022) use an external measure such as consistency (Wang et al., 2022b) to select generations to fine-tune on. In reinforcement learning settings, this can be extended to use environmental feedback instead: for example, XTX (Tuyls et al., 2022) periodically fine-tunes a small language model on high-scoring trajectories stored in episodic memory, which serves as a robust
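A hedged sketch of the XTX-style selection step, keeping only high-return trajectories from episodic memory as supervised fine-tuning examples, could look like this; the trajectory fields and the top_fraction parameter are assumptions, not the cited paper's implementation.

```python
def select_finetune_data(episodic_memory, top_fraction=0.1):
    """Illustrative selection of high-scoring trajectories from episodic
    memory to build a supervised fine-tuning set."""
    ranked = sorted(episodic_memory, key=lambda t: t["return"], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return [
        {"prompt": t["observations"], "completion": t["actions"]}
        for t in ranked[:cutoff]
    ]

trajectories = [
    {"observations": "obs A", "actions": "act A", "return": 12.0},
    {"observations": "obs B", "actions": "act B", "return": 3.0},
]
finetune_set = select_finetune_data(trajectories, top_fraction=0.5)
print(finetune_set)
```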
2309.02427#40
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
41
(4) A pre-configured data processing flow including de-duplication OPs is executed on the mixed dataset, ensuring dataset cleanness. (5) The target metric is calculated on D𝑚𝑖𝑥 as (𝑛/𝑁 + 𝑠), where 𝑁 is the total number of tokens of all 𝑀 datasets, and 𝑛 and 𝑠 are the number of tokens and the average quality score of D𝑚𝑖𝑥 (using the built-in GPT-3 quality classifier detailed in Sec. 5.2), respectively. The mixture dataset D𝑚𝑖𝑥 is iteratively refined by carrying out steps (3)∼(5) to get a larger quantity and better quality. □ The HPO results offer a powerful means of visualizing and understanding data insights as shown in Figure 3, where the importance, # 1W&B Sweeps, https://docs.wandb.ai/guides/sweeps
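The target metric in step (5) is simple to compute once n, N, and s are known; the helper below just evaluates (n/N + s), with the function and argument names chosen for illustration rather than taken from Data-Juicer.

```python
def mix_metric(mix_token_count, total_token_count, mix_quality_score):
    """Target metric from the example: (n / N) + s, where n is the token count
    of the mixed dataset, N the total tokens across all M datasets, and s the
    average quality score of the mix (e.g., from a GPT-3-style classifier)."""
    return mix_token_count / total_token_count + mix_quality_score

# e.g., a 2B-token mix drawn from 10B total tokens with average quality 0.82
print(mix_metric(2e9, 1e10, 0.82))  # 1.02
```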
2309.02033#41
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
41
et al., 2022) periodically fine-tunes a small language model on high-scoring trajectories stored in episodic memory, which serves as a robust "exploitation" policy to reach exploration frontiers in the face of stochasticity. Fine-tuning the agent's LLM is a costly form of learning; thus, present studies specify learning schedules. However, as training becomes more efficient – or if agents utilize smaller subtask-specific LLMs – it may be possible to allow language agents to autonomously determine when and how to fine-tune their LLMs.
2309.02427#41
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
42
# 1W&B Sweeps, https://docs.wandb.ai/guides/sweeps [Figure 3 (HPO result plot): parameter importance with respect to target_metric, showing linear and high-order correlations for the mixing weights mix_data_w1, mix_data_w2, and mix_data_w3.] allow a deep understanding of the data. By default, the summary of per-sample statistics covers 13 dimensions and automatically displays histograms and box plots for each statistical variable, including diverse criteria like sample perplexity, word count, flagged word percentage, and paragraph length, among others. Users also have the flexibility to adjust the dimensions to observe, with a bespoke visualization and data processing experience.
2309.02033#42
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
42
Updating agent code (procedural memory). CoALA allows agents to update their source code, thus modifying the implementation of various procedures. These can be broken down as follows: • Updating reasoning (e.g., prompt templates; Gao et al., 2020; Zhou et al., 2022b). For example, APE (Zhou et al., 2022b) infers prompt instructions from input-output examples, then uses these instructions as part of the LLM prompt to assist task solving. Such a prompt update can be seen as a form of learning to reason. • Updating grounding (e.g., code-based skills; Liang et al., 2023a; Ellis et al., 2021; Wang et al., 2023a). For example, Voyager (Wang et al., 2023a) maintains a curriculum library. Notably, current methods are limited to creating new code skills to interact with external environments. • Updating retrieval. To our knowledge, these learning options are not studied in recent language agents. Retrieval is usually considered a basic action designed with some fixed implementation (e.g., BM25 or dense retrieval), but research in query/document expansion (Nogueira et al., 2019; Wang et al., 2023c; Tang et al., 2023a) or retrieval distillation (Izacard et al., 2021) may be helpful for language agents to learn better retrieval procedures.
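As an illustration of updating grounding procedures, a Voyager-style library of code-based skills can be sketched as a small key-value store of generated source; the SkillLibrary class below is hypothetical and not any cited system's real API.

```python
class SkillLibrary:
    """Illustrative procedural-memory store for code-based skills
    (in the spirit of Voyager); names here are invented for the sketch."""

    def __init__(self):
        self._skills = {}  # skill name -> source code string

    def add_skill(self, name: str, source: str) -> None:
        # learning action: write a newly synthesized skill into procedural memory
        self._skills[name] = source

    def get_skill(self, name: str) -> str:
        # retrieval: read a stored skill back for reuse in later episodes
        return self._skills[name]

lib = SkillLibrary()
lib.add_skill("craft_pickaxe", "def craft_pickaxe(bot):\n    ...  # generated code")
print(lib.get_skill("craft_pickaxe"))
```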
2309.02427#42
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
43
4.3 Feedback with Integrated LLM Libraries In the later stages of our pipeline, we utilize robust ecosystems designed for LLM training and evaluation, ensuring seamless integration with widely-used libraries such as Megatron-LM [85], DeepSpeed [78], and HuggingFace's Transformers [101]. With this integration, users can easily train LLMs on datasets produced by Data-Juicer and evaluate their performance to obtain feedback using our pre-built tools and scripts, without getting bogged down in complicated LLM training and evaluation details. # Figure 3: Demonstration of HPO for data recipe. [Figure 4(a): Tracking Specific Data Samples; language_id_score_filter, Lang filtered: 107 of 23040 docs (0.46%).] Notably, our system facilitates the timely assessment of model abilities by incorporating multiple dimensions. The system's capability to swiftly identify potentially ineffective data and training allows us to terminate unwanted LLM data processing promptly. Instead of solely relying on model loss as the basis for evaluating model performance, we support the LLM assessment across various metrics or benchmarks, and track shifts in target scores. Consequently, we can determine whether continued training of an LLM on the produced dataset is justified, thereby helping us minimize data processing and LLM training costs.
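A minimal sketch of the "terminate unpromising training early" idea, based on tracked target scores, is shown below; the function name, patience, and min_delta threshold are assumptions rather than Data-Juicer's actual logic.

```python
def should_continue_training(score_history, patience=3, min_delta=0.001):
    """Illustrative early-stop check on tracked benchmark scores: stop if the
    target metric has not improved by at least min_delta over the last
    `patience` evaluations."""
    if len(score_history) <= patience:
        return True
    best_earlier = max(score_history[:-patience])
    recent_best = max(score_history[-patience:])
    return recent_best > best_earlier + min_delta

# recent scores have plateaued, so the check returns False (stop training)
print(should_continue_training([0.41, 0.44, 0.45, 0.451, 0.449, 0.450]))
```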
2309.02033#43
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
43
• Updating learning or decision-making. Finally, it is theoretically possible for CoALA agents to learn new procedures for learning or decision-making, thus providing significant adaptability. In general, however, updates to these procedures are risky both for the agent’s functionality and alignment. At present, we are not aware of any language agents that implement this form of learning; we discuss such possibilities more in Section 6. While RL agents usually fix one way of learning (e.g., Q-learning, PPO, or A3C) and learn by updating model parameters, language agents can select from a diversity of learning procedures. This allows them to learn rapidly by storing task-relevant language (cheaper and quicker than parameter updates), and leverage multiple forms of learning to compound their self-improvement (e.g., Generative Agents discussed in Section 5). Finally, while our discussion has mostly focused on adding to memory, modifying and deleting (a case of “unlearning”) are understudied in recent language agents. We address these areas more in Section 6. # 4.6 Decision making
2309.02427#43
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
44
# Figure 4: The illustration of interactive visualization of Data-Juicer; panels (b) Effect of OP Pipeline (Number of Samples) and (c) Data Distribution Diff. More demos are publicly available. Specifically, Data-Juicer's evaluator supports SOTA LLM benchmarks such as HELM [59], LM-harness [32] and GPT-API-based evaluation [19], as well as the extension of customized evaluation benchmarks and tasks. For a balanced and straightforward evaluation, Data-Juicer supports a leaderboard-style comparison by consolidating results from different target evaluation scenarios, such as ranking averaging, score-normalized averaging, or other customized strategies. The leaderboard-style scoring utility enhances the visualization of strengths and weaknesses of models, guiding subsequent iterations of data recipes and LLMs' refinements. We also make available Reference Models - these are model checkpoints binding with traceable training data in Data-Juicer, popular LLM architectures, training parameters, computation costs, and corresponding evaluation results. They facilitate effortless comparison among different training configurations, particularly for further research on diverse, iteratively developed data recipes.
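Score-normalized averaging for the leaderboard-style comparison can be sketched as min-max normalizing each benchmark across models and then averaging per model; the snippet below is illustrative, with hypothetical model and benchmark names.

```python
def normalized_leaderboard(results):
    """Illustrative score-normalized averaging: min-max normalize each
    benchmark column across models, then average per model.
    `results` maps model name to {benchmark: raw_score}."""
    benchmarks = {b for scores in results.values() for b in scores}
    norm = {m: [] for m in results}
    for b in benchmarks:
        col = [results[m][b] for m in results]
        lo, hi = min(col), max(col)
        for m in results:
            norm[m].append((results[m][b] - lo) / (hi - lo) if hi > lo else 0.5)
    return {m: sum(v) / len(v) for m, v in norm.items()}

scores = {"model_a": {"helm": 0.61, "harness": 0.55},
          "model_b": {"helm": 0.58, "harness": 0.60}}
print(normalized_leaderboard(scores))
```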
2309.02033#44
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
44
# 4.6 Decision making With various actions (grounding, learning, reasoning, retrieval) in the action space, how should a language agent choose which action to apply? This is handled by the decision-making procedure, which is effectively the top-level or “main” agent program. CoALA structures this top-level program into decision cycles (Figure 4B) which yield an external grounding action (Section 4.2) or internal learning action (Section 4.5). In each cycle, program code defines a sequence of reasoning and retrieval actions to propose and evaluate alternatives (planning stage), then executes the selected action (execution stage) – then the cycle loops again. Planning stage. During planning, reasoning and retrieval can be flexibly applied to propose, evaluate, and select actions, and these sub-stages could interleave or iterate to build up multi-step simulations (Tamari et al., 2020) before taking an external action (Yao et al., 2023; Hao et al., 2023). It also enables agents to iteratively improve candidate solutions – for example, by using the LLM to simulate them, identifying defects, and proposing modifications that address those defects (Kirk et al., 2023; Shinn et al., 2023).
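The decision cycle (proposal, evaluation, selection, execution) can be summarized in a few lines of Python; the stubs below stand in for LLM-based reasoning, retrieval, and grounding, and are not any specific agent's implementation.

```python
import random

def propose(working_memory):
    """Planning sub-stage: propose candidate actions (stubbed)."""
    return ["search_web", "reflect", "answer"]

def evaluate(working_memory, action):
    """Planning sub-stage: score a candidate (random values stand in for
    LLM value estimates, learned values, or simulated rollouts)."""
    return random.random()

def execute(action, working_memory):
    """Execution stage: grounding or learning action, then observe."""
    working_memory.append(f"observed result of {action}")
    return action == "answer"  # treat answering as terminating the episode

def decision_cycle(max_steps=5):
    working_memory = []
    for _ in range(max_steps):
        candidates = propose(working_memory)                                # proposal
        best = max(candidates, key=lambda a: evaluate(working_memory, a))   # evaluation + selection
        if execute(best, working_memory):                                   # execution
            break
    return working_memory

print(decision_cycle())
```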
2309.02427#44
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]
2309.02033
45
correlation and interaction of 𝑤𝑖 for the quality score are estimated and plotted. Besides the quality score demonstrated in this example, the target metric can be customized to include other trade-off terms composed of intrinsic data measures – such as toxicity, helpfulness, or other scores predicted by auxiliary models – or even performance measures of LLMs, such as training loss or benchmark scores (which we will discuss later in Sec. 4.3).
2309.02033#45
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.
http://arxiv.org/pdf/2309.02033
Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou
cs.LG, cs.DB, cs.DC
20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer
null
cs.LG
20230905
20231220
[ { "id": "2306.11644" }, { "id": "2212.09597" }, { "id": "2303.17580" } ]
2309.02427
45
• Proposal. The proposal sub-stage generates one or more action candidates. The usual approach is to use reasoning (and optionally retrieval) to sample one (Huang et al., 2022c) or more (Chen et al., 2021; Wang et al., 2022b) external grounding actions from the LLM. For simple domains with limited actions, the proposal stage might simply include all actions (e.g., SayCan in Section 5). More sophisticated agents use if-else or while-if code structures (Wang et al., 2023a; Park et al., 2023); while agents deployed in well-defined domains may utilize structured simulators (Haslum et al., 2019) to generate plausible rollouts (Liu et al., 2023a; Dagan et al., 2023). • Evaluation. If multiple actions are proposed, the evaluation sub-stage assigns a value to each. This may use heuristic rules, LLM (perplexity) values (Ahn et al., 2022), learned values (Yao et al., 2020), LLM reasoning (Yao et al., 2023; Hao et al., 2023), or some combination. Particularly, LLM reasoning can help evaluate actions by internally simulating their grounding feedback from the external world (Hao et al., 2023; Yang et al., 2023).
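The proposal and evaluation sub-stages described above can be pictured as a small propose-then-score loop. The sketch below is an illustrative assumption, not code from CoALA or any of the cited agents; `propose_actions` and `evaluate_action` are hypothetical stand-ins for an LLM-backed proposer and value estimator.

```python
# Illustrative sketch of the proposal and evaluation sub-stages of a decision cycle.
# Hypothetical helpers: propose_actions and evaluate_action stand in for
# LLM-backed proposal and value estimation; they are not from any cited agent.
from typing import Callable, List, Tuple

def select_action(
    observation: str,
    propose_actions: Callable[[str], List[str]],
    evaluate_action: Callable[[str, str], float],
) -> str:
    """Propose candidate actions, assign each a value, and return the best one."""
    candidates = propose_actions(observation)  # proposal sub-stage
    scored: List[Tuple[float, str]] = [
        (evaluate_action(observation, a), a) for a in candidates  # evaluation sub-stage
    ]
    return max(scored)[1]  # selection: highest-valued candidate

# Toy usage with stub proposer/evaluator (no LLM calls).
best = select_action(
    "kitchen: apple on table",
    propose_actions=lambda obs: ["pick up apple", "open fridge"],
    evaluate_action=lambda obs, a: 1.0 if "apple" in a else 0.0,
)
print(best)  # -> "pick up apple"
```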
2309.02427#45
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
http://arxiv.org/pdf/2309.02427
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
cs.AI, cs.CL, cs.LG, cs.SC
v2 enriched actionable insights and discussions, and polished abstract and introduction. 18 pages of main content, 12 pages of references, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
cs.AI
20230905
20230927
[ { "id": "2305.14909" }, { "id": "2307.15810" }, { "id": "1704.00051" }, { "id": "2201.11903" }, { "id": "2305.19118" }, { "id": "1606.04460" }, { "id": "2305.11176" }, { "id": "2304.11477" }, { "id": "2209.02299" }, { "id": "2305.17390" }, { "id": "2308.08155" }, { "id": "2308.07201" }, { "id": "2306.12672" }, { "id": "2201.01251" }, { "id": "2307.12856" }, { "id": "2212.14024" }, { "id": "2010.02903" }, { "id": "2302.02801" }, { "id": "2308.03022" }, { "id": "2207.05608" }, { "id": "2206.10498" }, { "id": "2305.08283" }, { "id": "2302.04761" }, { "id": "2308.12503" }, { "id": "2305.10601" }, { "id": "2212.06817" }, { "id": "2306.06070" }, { "id": "2305.14688" }, { "id": "2306.05301" }, { "id": "2307.07924" }, { "id": "2305.14325" }, { "id": "2306.14898" }, { "id": "2308.09830" }, { "id": "1901.10995" }, { "id": "2305.16960" }, { "id": "2305.16334" }, { "id": "2302.05206" }, { "id": "2203.07540" }, { "id": "2112.09332" }, { "id": "1912.05877" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1805.00899" }, { "id": "2204.00598" }, { "id": "2307.14984" }, { "id": "2309.07864" }, { "id": "2101.06804" }, { "id": "2205.03854" }, { "id": "2305.16291" }, { "id": "2305.11014" }, { "id": "2305.18323" }, { "id": "2109.08270" }, { "id": "2210.03629" }, { "id": "2206.05802" }, { "id": "2302.07459" }, { "id": "2307.15818" }, { "id": "2306.06770" }, { "id": "2307.16789" }, { "id": "2204.01691" }, { "id": "2304.05128" }, { "id": "2308.06391" }, { "id": "2302.07842" }, { "id": "2304.09853" }, { "id": "2204.02311" }, { "id": "2307.13854" }, { "id": "2302.02676" }, { "id": "2305.14992" }, { "id": "2010.03768" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.00151" }, { "id": "2203.11171" }, { "id": "2303.03378" }, { "id": "2202.01110" }, { "id": "2112.08633" }, { "id": "2112.09118" }, { "id": "2212.08073" }, { "id": "2308.04030" }, { "id": "2207.10342" }, { "id": "2012.15723" }, { "id": "1909.01871" }, { "id": "2210.11610" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2303.07678" }, { "id": "2205.12255" } ]