id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
2309.03409#74 | Large Language Models as Optimizers | Luis Miguel Rios and Nikolaos V Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56:1247–1293, 2013. Daniel J Rosenkrantz, Richard E Stearns, and Philip M Lewis, II. An analysis of several heuristics for the traveling salesman problem. SIAM Journal on Computing, 6(3):563–581, 1977. Subhro Roy and Dan Roth. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413, 2016. | 2309.03409#73 | 2309.03409#75 | 2309.03409 | [
"2205.12548"
] |
2309.03409#75 | Large Language Models as Optimizers | Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980, 2020. | 2309.03409#74 | 2309.03409#76 | 2309.03409 | [
"2205.12548"
] |
2309.03409#76 | Large Language Models as Optimizers | Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. | 2309.03409#75 | 2309.03409#77 | 2309.03409 | [
"2205.12548"
] |
2309.03409#77 | Large Language Models as Optimizers | Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdh- ery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022. Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, et al. Larger language models do in-context learning differently. arXiv preprint arXiv:2303.03846, 2023. Yuxin Wen, Neel Jain, John Kirchenbauer, Micah Goldblum, Jonas Geiping, and Tom Goldstein. | 2309.03409#76 | 2309.03409#78 | 2309.03409 | [
"2205.12548"
] |
2309.03409#78 | Large Language Models as Optimizers | Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery. arXiv preprint arXiv:2302.03668, 2023. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023. Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, and Zhilin Yang. | 2309.03409#77 | 2309.03409#79 | 2309.03409 | [
"2205.12548"
] |
2309.03409#79 | Large Language Models as Optimizers | Gps: Genetic prompt search for efficient few-shot learning. arXiv preprint arXiv:2210.17041, 2022. Weizhe Yuan, Kyunghyun Cho, and Jason Weston. System-level natural language feedback. arXiv preprint arXiv:2306.13588, 2023. Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, and Joseph E Gonzalez. Tempera: Test-time prompt editing via reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. | 2309.03409#78 | 2309.03409#80 | 2309.03409 | [
"2205.12548"
] |
2309.03409#80 | Large Language Models as Optimizers | Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pp. 12697–12706. PMLR, 2021. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022a. Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910, 2022b. | 2309.03409#79 | 2309.03409#81 | 2309.03409 | [
"2205.12548"
] |
2309.03409#81 | Large Language Models as Optimizers | A SOME FAILURE CASES Although LLMs show the power of optimizing basic math problems (Section 3) and prompts (Section 4), we see some limitations across all optimizer LLMs that may impede their ability to solve more challenging problems. These limitations include: • Hallucinating the values that need to come from math calculation: The optimizer LLMs often output contents like "the function value at (5, 3) is 15" even though the true value is not 15. The model will get it right if external tools that can reliably calculate the value are triggered. When and how to trigger such tool use cases remains an interesting topic (see e.g., (Schick et al., 2023; Cai et al., 2023)). | 2309.03409#80 | 2309.03409#82 | 2309.03409 | [
"2205.12548"
] |
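The failure mode above (a claimed objective value that does not match the true calculation) is exactly the kind of error an external tool could catch before the claim enters the optimization loop. Below is a minimal illustrative sketch of such a check; the objective function, the regex parsing, and the tolerance are assumptions for illustration and not part of the paper's pipeline.

```python
import re

def true_objective(w, b):
    # Hypothetical black-box objective; stands in for whatever function
    # the optimizer LLM is asked to minimize.
    return (w - 2) ** 2 + (b + 1) ** 2

def verify_claim(llm_output, tol=1e-6):
    """Parse a claim like 'the function value at (5, 3) is 15' and
    recompute it with ordinary code instead of trusting the LLM."""
    match = re.search(r"\((-?\d+\.?\d*),\s*(-?\d+\.?\d*)\)\s*is\s*(-?\d+\.?\d*)", llm_output)
    if match is None:
        return None  # nothing to verify
    w, b, claimed = map(float, match.groups())
    actual = true_objective(w, b)
    return abs(actual - claimed) < tol, actual

# The hallucinated "15" is rejected because the recomputed value is 25.
ok, actual = verify_claim("the function value at (5, 3) is 15")
print(ok, actual)  # False 25.0
```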
2309.03409#82 | Large Language Models as Optimizers | • Generating solutions that already appeared in context even when we tell it to "Give me a new (w, b) pair that is different from all pairs above": the optimizer LLMs do not 100% reliably follow this instruction even though their own outputs often include sentences like "I will provide a new pair that is different", making the output self-contradictory. The output is almost guaranteed to be different from the in-context old solutions when the model output contains a comparison of the new pair and all old pairs, though. Thus (implicitly) triggering such comparison behaviors may be a solution. How to implement this feature without harming the instruction-following performance of other parts remains an interesting topic to study. • In black-box math optimization, getting stuck at a point that is neither a global nor a local optimum: This often occurs in two linear regression cases: (a) the in-context exemplars all share the same w or b that is different from w_true or b_true. This case is more likely to be avoided when a larger number of past solutions are included in the meta-prompt; (b) one or several of the best previous solutions in the meta-prompt have w's and b's in quantitatively opposite directions from the global optima w_true and b_true: for example, the w's are all smaller than w_true while the b's are all larger than b_true. Since the optimizer model often proposes to only increase w or decrease b when the past solutions in the meta-prompt share w or b, the optimization will get stuck if either increasing w or decreasing b would increase the objective value. This issue is mitigated by sampling multiple new solutions (thus more exploration) at each step. | 2309.03409#81 | 2309.03409#83 | 2309.03409 | [
"2205.12548"
] |
2309.03409#83 | Large Language Models as Optimizers | • Hard to navigate a bumpy loss landscape: Like other optimizers, the optimizer LLM has a harder time optimizing black-box functions when the loss landscape gets more complicated. For example, when minimizing the Rosenbrock function f(x, y) = (a - x)^2 + b(y - x^2)^2 with a = 20 (whose global optimum is at x = 20, y = 400) with 5 starting points in [10, 20] × [10, 20], the optimization often gets stuck at around (0, 0). This is because the optimizer LLM sees a decrease in objective value when it drastically decreases both x and y to 0. Then, starting from (0, 0), it is hard for the optimizer LLM to further navigate x and y along the narrow valley in the loss landscape towards (20, 400) (Figure 13). Figure 13: A visualization of the landscape of the Rosenbrock function f(x, y) = (a - x)^2 + b(y - x^2)^2 with a = 20 and b = 1. The global optimum is at x = 20, y = 400 with function value 0. The function value at x = 0, y = 0 is 400. The landscape has a narrow valley between (0, 0) and (20, 400). | 2309.03409#82 | 2309.03409#84 | 2309.03409 | [
"2205.12548"
] |
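For concreteness, a small sketch that evaluates the Rosenbrock objective defined above and reproduces the two landmark values mentioned in the Figure 13 caption (0 at the global optimum and 400 at the origin). The parameter defaults a = 20, b = 1 follow the caption; everything else is illustrative.

```python
def rosenbrock(x, y, a=20.0, b=1.0):
    # f(x, y) = (a - x)^2 + b * (y - x^2)^2, as defined in Appendix A.
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

print(rosenbrock(20, 400))  # 0.0   -> global optimum at (20, 400)
print(rosenbrock(0, 0))     # 400.0 -> the point where the optimization tends to get stuck
```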
2309.03409#84 | Large Language Models as Optimizers | B PROMPTING FORMATS FOR SCORER LLM Figure 14, 15, and 16 show examples of the Q_begin, Q_end, and A_begin prompting formats when the "QA" pattern is present. The "QA" pattern is eliminated when prompting instruction-tuned scorer models like text-bison with the Q_begin and Q_end formats (Figure 17 and 18). Q: {instruction} Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? A: Figure 14: The Q_begin prompting format on a GSM8K test exemplar with the "QA" pattern. Q: Janet' | 2309.03409#83 | 2309.03409#85 | 2309.03409 | [
"2205.12548"
] |
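The Q_begin, Q_end, and A_begin formats illustrated in Figures 14-18 differ only in where the optimized instruction is placed relative to the question and in whether the "QA" scaffolding is kept. A hedged sketch of that string assembly follows; the helper name and the use_qa_pattern flag are illustrative assumptions, not the paper's code.

```python
def build_scorer_prompt(question, instruction, fmt="Q_begin", use_qa_pattern=True):
    """Place the optimized instruction before the question (Q_begin),
    after the question (Q_end), or at the start of the answer (A_begin)."""
    if fmt == "Q_begin":
        body = f"{instruction}\n{question}"
    elif fmt == "Q_end":
        body = f"{question}\n{instruction}"
    elif fmt == "A_begin":
        # A_begin only applies with the "QA" pattern: the instruction
        # begins the model's answer.
        return f"Q: {question}\nA: {instruction}"
    else:
        raise ValueError(fmt)
    # Instruction-tuned scorers (e.g. text-bison) drop the "QA" scaffolding.
    return f"Q: {body}\nA:" if use_qa_pattern else body

print(build_scorer_prompt("Janet's ducks lay 16 eggs per day...", "Let's think step by step.", "Q_end"))
```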
2309.03409#85 | Large Language Models as Optimizers | s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? {instruction} A: Figure 15: The Q_end prompting format on a GSM8K test exemplar with the "QA" pattern. Q: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? | 2309.03409#84 | 2309.03409#86 | 2309.03409 | [
"2205.12548"
] |
2309.03409#86 | Large Language Models as Optimizers | # A: {instruction} Figure 16: The A_begin prompting format on a GSM8K test exemplar. {instruction} Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? Figure 17: The Q_begin prompting format on a GSM8K test exemplar without the "QA" pattern. Janet' | 2309.03409#85 | 2309.03409#87 | 2309.03409 | [
"2205.12548"
] |
2309.03409#87 | Large Language Models as Optimizers | s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? {instruction} Figure 18: The Q_end prompting format on a GSM8K test exemplar without the "QA" pattern. # C META-PROMPTS C.1 META-PROMPT FOR MATH OPTIMIZATION Now you will help me minimize a function with two input variables w, b. I have some (w, b) pairs and the function values at those points. The pairs are arranged in descending order based on their function values, where lower values are better. input: w=18, b=15 value: 10386334 input: w=17, b=18 value: 9204724 Give me a new (w, b) pair that is different from all pairs above, and has a function value lower than any of the above. Do not write code. The output must end with a pair [w, b], where w and b are numerical values. Figure 19: An example of the meta-prompt for linear regression. The blue text contains solution-score pairs; the orange text are meta-instructions. | 2309.03409#86 | 2309.03409#88 | 2309.03409 | [
"2205.12548"
] |
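The "value" attached to each (w, b) pair in Figure 19 is a black-box objective that the optimizer LLM only ever sees as a scalar in the meta-prompt. A plausible sketch of such an objective for the linear regression task is below; the synthetic data, the hidden ground-truth parameters, and the noise level are assumptions for illustration.

```python
import random

random.seed(0)
w_true, b_true = 15, 14          # hidden ground truth (illustrative values)
xs = [random.uniform(-10, 10) for _ in range(50)]
ys = [w_true * x + b_true + random.gauss(0, 1) for x in xs]

def objective(w, b):
    # Sum of squared errors of the candidate line against the hidden data;
    # only this scalar is fed back into the meta-prompt as "value:".
    return sum((y - (w * x + b)) ** 2 for x, y in zip(xs, ys))

print(round(objective(18, 15)))  # a scalar score, analogous to the "value:" lines in Figure 19
```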
2309.03409#88 | Large Language Models as Optimizers | The traces are arranged in descending order based on their lengths, where lower values are better. <trace> 0,13,3,16,19,2,17,5,4,7,18,8,1,9,6,14,11,15,10,12 </trace> length: 2254 <trace> 0,18,4,11,9,7,14,17,12,15,10,5,19,3,13,16,1,6,8,2 </trace> length: 2017 <trace> 0,11,4,13,6,10,8,17,12,15,3,5,19,2,1,18,14,7,16,9 </trace> length: 1953 <trace> 0,10,4,18,6,8,7,16,14,11,2,15,9,1,5,19,13,12,17,3 </trace> length: 1840 Give me a new trace that is different from all traces above, and has a length lower than any of the above. The trace should traverse all points exactly once. The trace should start with <trace> and end with </trace>. Figure 20: An example of the meta-prompt for Traveling Salesman Problems with problem size n = 20. The blue text contains solution-score pairs; the orange text are meta-instructions. | 2309.03409#87 | 2309.03409#89 | 2309.03409 | [
"2205.12548"
] |
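Similarly, the "length" attached to each trace in Figure 20 can be computed as the Euclidean length of the closed tour over the listed coordinates. A small sketch follows, assuming the tour wraps back to its starting point and lengths are rounded to integers (both conventions are assumptions inferred from the figure).

```python
import math

# First few of the 20 points listed in Figure 20 (index: (x, y)).
points = {0: (-4, 5), 1: (17, 76), 2: (-9, 0), 3: (-31, -86), 4: (53, -35)}

def trace_length(trace, points):
    """Total length of a closed tour visiting every listed point exactly once."""
    total = 0.0
    for a, b in zip(trace, trace[1:] + trace[:1]):  # wrap around to the start
        (x1, y1), (x2, y2) = points[a], points[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

print(round(trace_length([0, 2, 3, 4, 1], points)))  # the score used to rank candidate traces
```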
2309.03409#89 | Large Language Models as Optimizers | C.2 META-PROMPT FOR PROMPT OPTIMIZATION Different optimizer models work best with different styles of meta-prompts. Figure 3 in the main paper shows the meta-prompt for PaLM 2-L-IT; Figure 21 shows that for pre-trained PaLM 2-L; Figure 22 shows that for GPT models. Create a piece of text at the beginning of the answer to enhance the precision in solving diverse grade school math problems. Precision: 4 <TEXT>A dime</TEXT> Precision: 17 <TEXT>The answer is a function. It is</TEXT> Precision: 19 <TEXT>So how can we find out what this equation means?</TEXT> Precision: 20 <TEXT>Solutions:</TEXT> Figure 21: An example of the meta-prompt for prompt optimization with pre-trained PaLM 2-L on GSM8K, where the generated instruction will be prepended to the beginning of the scorer LLM output (A_begin in Section 4.1). Your task is to generate the instruction <INS>. Below are some previous instructions with their scores. The score ranges from 0 to 100. | 2309.03409#88 | 2309.03409#90 | 2309.03409 | [
"2205.12548"
] |
2309.03409#90 | Large Language Models as Optimizers | text: Let's figure it out! score: 61 text: Let's solve the problem. score: 63 (. . . more instructions and scores . . . ) Below are some problems. Problem: Q: Alannah, Beatrix, and Queen are preparing for the new school year and have been given books by their parents. Alannah has 20 more books than Beatrix. Queen has 1/5 times more books than Alannah. If Beatrix has 30 books, how many books do the three have together? | 2309.03409#89 | 2309.03409#91 | 2309.03409 | [
"2205.12548"
] |
2309.03409#91 | Large Language Models as Optimizers | A: <INS> # Ground truth answer: 140 (. . . more exemplars . . . ) Generate an instruction that is different from all the instructions <INS> above, and has a higher score than all the instructions <INS> above. The instruction should begin with <INS> and end with </INS>. The instruction should be concise, effective, and generally applicable to all problems above. Figure 22: An example of the meta-prompt for prompt optimization with GPT models (gpt-3.5-turbo or gpt-4) on GSM8K, where the generated instruction will be prepended to the beginning of the scorer LLM output (A_begin in Section 4.1). The blue text contains solution- score pairs; the purple text describes the optimization task and output format; the orange text are meta-instructions. | 2309.03409#90 | 2309.03409#92 | 2309.03409 | [
"2205.12548"
] |
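A sketch of how a meta-prompt in the style of Figure 22 can be assembled programmatically from the current instruction-score pairs and a few training exemplars. The wording follows the figure, but the helper itself and its parameters (e.g. max_pairs) are illustrative assumptions rather than the released implementation.

```python
def build_meta_prompt(instruction_scores, exemplars, max_pairs=20):
    """instruction_scores: list of (instruction, score); exemplars: list of (question, answer)."""
    # Keep the highest-scoring instructions and present them in ascending score order,
    # so the best ones appear closest to the generation request.
    pairs = sorted(instruction_scores, key=lambda p: p[1])[-max_pairs:]
    lines = ["Your task is to generate the instruction <INS>. Below are some previous "
             "instructions with their scores. The score ranges from 0 to 100.\n"]
    for text, score in pairs:
        lines.append(f"text:\n{text}\nscore:\n{score}\n")
    lines.append("Below are some problems.\n")
    for question, answer in exemplars:
        lines.append(f"Problem:\nQ: {question}\nA: <INS>\nGround truth answer:\n{answer}\n")
    lines.append("Generate an instruction that is different from all the instructions <INS> above, "
                 "and has a higher score than all the instructions <INS> above. The instruction "
                 "should begin with <INS> and end with </INS>.")
    return "\n".join(lines)

print(build_meta_prompt(
    [("Let's figure it out!", 61), ("Let's solve the problem.", 63)],
    [("Alannah, Beatrix, and Queen ... how many books do the three have together?", 140)]))
```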
2309.03409#92 | Large Language Models as Optimizers | # D PROMPT OPTIMIZATION CURVES ON THE REMAINING BBH TASKS [Accuracy-versus-step training curves; recoverable panel labels in this chunk: BBH boolean_expressions, causal_judgement, and date_understanding; the remaining panels and the Figure 23 caption follow in the next chunk.] | 2309.03409#91 | 2309.03409#93 | 2309.03409 | [
"2205.12548"
] |
2309.03409#93 | Large Language Models as Optimizers | (a) BBH boolean_expressions (b) BBH causal_judgement (c) BBH date_understanding (d) BBH disambiguation_qa (e) BBH dyck_languages (f) BBH formal_fallacies (g) BBH geometric_shapes (h) BBH hyperbaton (i) BBH logical_deduction_seven_objects (j) BBH movie_recommendation (k) BBH multistep_arithmetic_two (l) BBH navigate (m) BBH object_counting (n) BBH penguins_in_a_table (o) BBH reasoning_about_colored_objects Figure 23: Prompt optimization on 21 BBH tasks (except ruin_names and temporal_sequences already shown in Figure 6) with the text-bison scorer and the PaLM 2-L-IT optimizer, Part I. Most curves have upward trends. | 2309.03409#92 | 2309.03409#94 | 2309.03409 | [
"2205.12548"
] |
2309.03409#94 | Large Language Models as Optimizers | (a) BBH salient_translation_error_detection (b) BBH snarks (c) BBH sports_understanding (d) BBH tracking_shuffled_objects_seven_objects (e) BBH web_of_lies (f) BBH word_sorting Figure 24: Prompt optimization on 21 BBH tasks (except ruin_names and temporal_sequences in Figure 6) with the text-bison scorer and the PaLM 2-L-IT optimizer, Part II. All curves have upward trends. E PROMPT OPTIMIZATION ON BBH TASKS – TABULATED ACCURACIES AND FOUND INSTRUCTIONS # E.1 PALM 2-L-IT AS OPTIMIZER, OPTIMIZATION STARTING FROM THE EMPTY STRING Table 8 and 9 show the instructions found by prompt optimization. A comparison of their accuracies with baselines " | 2309.03409#93 | 2309.03409#95 | 2309.03409 | [
"2205.12548"
] |
2309.03409#95 | Large Language Models as Optimizers | Let's think step by step." (Kojima et al., 2022), "Let's work this out in a step by step way to be sure we have the right answer." (Zhou et al., 2022b), and the empty string is in Table 7; a visualization is in Section 5.2 Figure 5. Table 7: Accuracies on BBH tasks: our found instructions with the PaLM 2-L-IT optimizer vs baseline. The optimization starts from the empty string. Because of the 20-80 train-test split, we show accuracies with the format "training / test / overall (training + test)". The PaLM 2-L scores are from A_begin instructions; the text-bison scores are from Q_begin instructions. Bold numbers indicate the best for the corresponding task. | 2309.03409#94 | 2309.03409#96 | 2309.03409 | [
"2205.12548"
] |
2309.03409#96 | Large Language Models as Optimizers | Table 7 columns: Task, Scorer, Our Acc, "Let's think step by step." Acc, "Let's work this out in a step by step way to be sure we have the right answer." Acc, and empty string Acc, each reported as training / test / overall. Extracted accuracy triples (task labels lost in extraction): 90.0 / 83.5 / 84.8 84.8 / 58.0 / 63.1 86.0 / 84.5 / 84.8 80.0 / 69.0 / 71.2 100.0 / 100.0 / 100.0 84.0 / 64.0 / 68.4 76.0 / 57.0 / 60.8 100.0 / 96.0 / 96.8 74.0 / 57.0 / 60.4 92.0 / 90.5 / 90.8 72.0 / 55.5 / 58.8 92.0 / 75.0 / 78.4 84.0 / 86.5 / 86.0 86.2 / 71.8 / 74.7 98.0 / 85.5 / 88.0 88.0 / 88.0 / 88.0 62.0 / 67.0 / 66.0 85.7 / 83.2 / 83.7 98.0 / 88.0 / 90.0 100.0 / 100.0 / 100.0 32.0 / 16.5 / 19.6 62.0 / 52.0 / 54.0 54.0 / 54.5 / 54.4 98.0 / 87.0 / 89.2 78.4 / 58.0 / 62.0 60.0 / 50.0 / 52.0 68.0 / 73.0 / 72.0 Task Our Instruction boolean_expressions A Boolean expression is a well-formed expression consisting of variables, values, and logical operators. The expression must evaluate to a single True or False value. The order of precedence of the logical operators is as follows: | 2309.03409#95 | 2309.03409#97 | 2309.03409 | [
"2205.12548"
] |
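Since the split is 20% training / 80% test, the "overall" number in each triple is consistent with a size-weighted average of the two splits; this is an assumption about the reporting convention, but it checks out on the first extracted triple.

```python
def overall_accuracy(train_acc, test_acc, train_frac=0.2):
    # Size-weighted average over the 20-80 train-test split (assumed convention).
    return train_frac * train_acc + (1 - train_frac) * test_acc

print(round(overall_accuracy(90.0, 83.5), 1))  # 84.8, matching "90.0 / 83.5 / 84.8"
```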
2309.03409#97 | Large Language Models as Optimizers | NOT, AND, OR, XOR, IMP. Parentheses can be used to group subexpressions and to control the order of evaluation. causal_judgement When considering questions about causation, a typical person would consider the following factors: whether the action or event was a necessary condition for the outcome to occur, a sufficient condition, a proximate cause, or a foreseeable cause. date_understanding To find the date X time ago from today, first find today's date. Then subtract X time from today's date. If the current date is the last day of a month, then the date a month ago is the last day of the previous month. If the current date is not the last day of a month, then the date a month ago is the same day of the previous month. For example, if today is March 31, 2023, then the date a month ago is February 28, 2023. If today is April 1, 2023, then the date a month ago is March 1, 2023. disambiguation_qa Identifying Antecedents of Pronouns: | 2309.03409#96 | 2309.03409#98 | 2309.03409 | [
"2205.12548"
] |
2309.03409#98 | Large Language Models as Optimizers | A Comprehensive Guide dyck_languages First, look for the opening parentheses. Then, count the number of opening parentheses. Finally, close the parentheses in the reverse order that they were opened. formal_fallacies A deductive argument is one where the conclusion follows necessarily from the premises. If the premises are true, then the conclusion must also be true. An invalid argument is one where it is possible for the premises to be true and the conclusion to be false. geometric_shapes A closed polygonal chain is a series of connected line segments. The line segments can be straight or curved. The first and last line segments are connected. The line segments do not intersect each other except at their endpoints. A closed polygon can be described by an SVG path element, which starts at a given point, goes to one or more additional points, and then ends at the starting point. The path element can consist of straight line segments, curved segments, or a mixture of both. hyperbaton The correct adjective order in English is opinion, size, shape, age, color, origin, material, and purpose. If you have more than one adjective of the same type, they are usually placed in order of importance. For example, you would say "a large, old, Pakistani ship" rather than "an old, large, Pakistani ship." There are a few exceptions to these rules, but they are generally followed in most cases. logical_deduction _seven_objects The following questions will test your ability to use deductive reasoning. You will be given a set of statements about a group of objects. You will then be asked to answer questions about the objects based on the statements. The statements in the questions are logically consistent, so you can use them to deduce the order of the objects. For each question, you must choose the option that is logically consistent with the information in the questions. movie_recommendation Based on your input, I have analyzed the given movies in terms of genre, plot, tone, audience rating, year of release, director, cast, and reviews. I have also taken into account the given options. The movie that is most similar to the given movies in terms of all these factors is: multistep_arithmetic _two The order of operations in mathematics is PEMDAS, which stands for Parentheses, Exponents, Multiplication, Division, Addition, and Subtraction. When there are multiple operations of the same precedence, they must be performed from left to right. | 2309.03409#97 | 2309.03409#99 | 2309.03409 | [
"2205.12548"
] |
2309.03409#99 | Large Language Models as Optimizers | Note that multiplication and division have the same precedence, as do addition and subtraction. navigate You will return to the starting point if and only if (1) the total number of steps you take forward is equal to the total number of steps you take back, and (2) the total number of turns you make is a multiple of 180 degrees. object_counting Here is a list of the objects you mentioned and their corresponding counts: penguins_in_a_table Here is my new text: reasoning_about _colored_objects Starting from the leftmost object in the row, I observe the following objects arranged in this order: ruin_names Which is the funniest pun on the artist or movie name? salient_translation _error_detection Instructions: Read the German sentence and its English translation carefully, then identify the type of error in the translation and select the correct option. There are six possible types of errors: Named Entities, Numerical Values, Modifiers or Adjectives, Negation or Antonyms, Facts, and Dropped Content. # snarks Identify the sarcastic statement by considering the following factors: incongruity, exaggeration, understatement, context, speakerâ s intent, and audienceâ s reaction. I will also consider the speakerâ s tone of voice, facial expressions, and body language. # sports_understanding I will determine if a sentence about an athlete is plausible by first checking if it is grammatically correct. If it is, I will then check if it is consistent with the athleteâ s sport, position, and real-world statistics. I will also check if it is consistent with the rules of the athleteâ s sport. If the sentence is consistent with all of these things, I will answer "yes", otherwise I will answer "no". | 2309.03409#98 | 2309.03409#100 | 2309.03409 | [
"2205.12548"
] |
2309.03409#100 | Large Language Models as Optimizers | # temporal_sequences The answer is the time that is not mentioned in the given statements. # tracking_shuffled_objects _seven_objects Claire has the blue ball, Gertrude has the black ball, and Dave has the green ball. They are all happy with their new balls. # web_of_lies The answer to a question is yes if there are an odd number of liars before the current speaker, and no if there are an even number of liars before the current speaker. If the current speaker is a truth-teller, they will say the opposite of what the previous person said, while a liar will say the same thing as the previous person said. | 2309.03409#99 | 2309.03409#101 | 2309.03409 | [
"2205.12548"
] |
2309.03409#101 | Large Language Models as Optimizers | # word_sorting Alphabetical order of given words: 33 # Large Language Models as Optimizers Table 9: BBH task-wise instructions found by prompt optimization with the text-bison scorer and the PaLM 2-L-IT optimizer. The optimization starts from the empty string. # Task Our Instruction boolean_expressions Not (not False) and not not False is False causal_judgement A typical person would likely answer the questions about causation as follows: date_understanding Today is February 28, 2023. It is a Tuesday. Yesterday was Monday, February 27, 2023. Tomorrow will be Wednesday, March 1, 2023. A week ago, it was February 21, 2023, and a month ago, it was January 28, 2023. A year from now, it will be February 28, 2024. The day of the week is important to note because it will help us to correctly answer the questions below. Not all years are leap years that contain February 29. disambiguation_qa A pronoun is a word that stands in for a noun. The noun that a pronoun refers to is called its antecedent. To identify the antecedent of a pronoun, look for the noun that the pronoun could be referring to. If there is only one possible noun, then that is the antecedent. If there are two or more possible nouns, then the antecedent is ambiguous. Use the context of the sentence to help you determine the correct antecedent. dyck_languages { } formal_fallacies How to Evaluate Deductive Validity of an Argument geometric_shapes What shape is this SVG code drawing, and how many sides does it have? hyperbaton In English, adjectives are typically placed before nouns in a specific order. The order is: opinion, size, shape, age, color, origin, material, purpose, noun. For example, the sentence "the big, old, red barn" would be considered grammatically correct, while the sentence "the old, big, red barn" would not. Adjectives that come before nouns are called attributive adjectives, while adjectives that come after nouns are called predicative adjectives. logical_deduction _seven_objects In this logical reasoning task, you will be given a series of paragraphs, each of which describes a set of objects arranged in a fixed order. | 2309.03409#100 | 2309.03409#102 | 2309.03409 | [
"2205.12548"
] |
2309.03409#102 | Large Language Models as Optimizers | The statements in each paragraph are logically consistent. You must read each paragraph carefully and use the information given to determine the logical relationships between the objects. You will then be asked a question about the order of the objects. Read each question carefully and choose the option that answers the question correctly. movie_recommendation What is the highest-rated movie similar to the given movies, with a similar IMDb rating and released in the same year? multistep_arithmetic_two Let's solve these equations using PEMDAS order of operations. Remember that PEMDAS stands for parentheses, exponents, multiplication and division, and addition and subtraction. navigate Starting at the origin, facing north, follow the instructions. If your displacement from the origin is zero and your direction is unchanged, then your answer is Yes. Otherwise, your answer is No. object_counting Let me help you count the items you have. | 2309.03409#101 | 2309.03409#103 | 2309.03409 | [
"2205.12548"
] |
2309.03409#103 | Large Language Models as Optimizers | Just list them one by one, separated by commas. I will then count each item and tell you how many items there are in total. penguins_in_a_table This table shows information about penguins. The columns show the penguinâ s name, age, height (in cm), and weight (in kg). The penguins are listed in order of their age, from youngest to oldest. reasoning_about _colored_objects First, read the input carefully. Then, identify all the objects mentioned, their colors, and their positions. Next, visualize the objects and their positions in your mind. Finally, answer the questions accurately based on the information given. Make sure to pay attention to the order of the objects. ruin_names A humorous edit of an artist or movie name can be created by replacing one or more letters to form a new word or phrase that sounds similar but has a different meaning. The new word or phrase should be relevant to the original word, but it should also be a surprise, which makes the edit funny. For example, the artist or movie name "Rocky" can be changed to "Ricky," and "Schindlerâ s List" can be changed to "Schindlerâ s Lift." Be creative and have fun! salient_translation _error_detection The following translations from German to English contain a particular error. The error may be one of the following types: Named Entities, Numerical Values, Modifiers or Adjectives, Negation or Antonyms, Facts, or Dropped Content. Please identify the error. snarks The statement sports_understanding To determine the plausibility of a sports sentence, I will first identify the sport, athletes, teams, and events mentioned in the sentence. Then, I will use my knowledge of the rules of the sport, the context of the sentence, common sense, and my knowledge of the world to determine whether the sentence is plausible. I will also consider the time period and location, as well as any other relevant information. Finally, I will return a score of 1 for plausible sentences and 0 for implausible ones. temporal_sequences To determine the time period when a person went to a place, first identify all the time periods when the personâ s whereabouts are unknown. Then, rule out any time periods during which the person was seen doing something else or the place was closed. The remaining time periods are the possible times when the person could have gone to the place. | 2309.03409#102 | 2309.03409#104 | 2309.03409 | [
"2205.12548"
] |
2309.03409#104 | Large Language Models as Optimizers | # tracking_shuffled_objects_seven_objects At the start of the game, Claire has a blue ball. Throughout the game, pairs of people swap balls. Claire ends up with the yellow ball. # web_of_lies People in a group either tell the truth or lie. The truthfulness of a person's statement is determined by the statement of the previous person. If the previous person told the truth, then the current person who says the opposite is lying. If the previous person lied, then the current person who says the opposite is telling the truth. This rule applies to all subsequent statements. | 2309.03409#103 | 2309.03409#105 | 2309.03409 | [
"2205.12548"
] |
2309.03409#105 | Large Language Models as Optimizers | # word_sorting Sort the following words alphabetically, ignoring case and punctuation. Print the sorted list. E.2 GPT-3.5-TURBO AS OPTIMIZER, OPTIMIZATION STARTING FROM THE EMPTY STRING Table 11, 12 and 13 show the instructions found by prompt optimization. Their accuracies are listed in Table 10. Figure 25 visualizes the difference between their accuracies and those of the baselines " | 2309.03409#104 | 2309.03409#106 | 2309.03409 | [
"2205.12548"
] |
2309.03409#106 | Large Language Models as Optimizers | Let's think step by step." and the empty string. The optimizations find instructions better than the empty starting point, and most of the found instructions are better than "Let's think step by step.". One caveat in the A_begin instructions (Table 11) is that many of the found instructions are imperative or interrogative sentences that are more suitable to be put into "Q:" rather than "A:", like "Solve the sequence by properly closing the parentheses." for dyck_languages and "Which movie option from the given choices ...?" for movie_recommendation. Such styles appear more often here than in the PaLM 2-L-IT optimizer results (Table 8), showing that PaLM 2-L-IT understands the needed style better. In Section E.3, we show the A_begin optimization results with the non-empty starting point "Let's solve the problem.". Most results there are declarative sentences, which are more suitable for A_begin. (a) PaLM 2-L, ours minus "Let's think step by step." (b) PaLM 2-L, ours minus empty starting point (c) text-bison, ours minus "Let's think step by step." (d) text-bison, ours minus empty starting point Figure 25: On 23 BBH tasks, the accuracy differences among instructions found by prompt optimization (with the gpt-3.5-turbo optimizer), "Let's think step by step.", and the empty string (optimization starting point). | 2309.03409#105 | 2309.03409#107 | 2309.03409 | [
"2205.12548"
] |
2309.03409#107 | Large Language Models as Optimizers | , and the empty string (optimization starting point). 35 # Large Language Models as Optimizers Table 10: Accuracies on BBH tasks with the gpt-3.5-turbo optimizer that starts from the empty string. The PaLM 2-L scores are from A_begin (left) instructions; the text-bison scores include Q_begin (left) and Q_end (right) instructions. Task Scorer training / test / overall training / test / overall 36 # Large Language Models as Optimizers Table 11: BBH task-wise instructions found by prompt optimization with the PaLM 2-L scorer and the gpt-3.5-turbo optimizer. The optimizations start from the empty string. Task Our Instruction boolean_expressions An accurate evaluation of logical expressions involves correctly applying Boolean operators, considering the order of operations, and analyzing the truth values of the operands in accordance with Boolean logic principles. causal_judgement Understanding causality is critical for accurately assessing cause and effect relationships in various scenarios, leading to well-informed judgments, precise conclusions, and definitive answers to questions about the outcomes involved. date_understanding What is the specific date mentioned or required in each given problem or question, taking into account all relevant information, available options, and the provided context? | 2309.03409#106 | 2309.03409#108 | 2309.03409 | [
"2205.12548"
] |
2309.03409#108 | Large Language Models as Optimizers | Please provide the accurate answer in the format MM/DD/YYYY. disambiguation_qa Accurately analyze and clarify the pronoun-antecedent relationship in the given sentences, identifying the appropriate referent to eliminate any potential confusion or ambiguity and ensure a precise understanding of the intended meaning. dyck_languages Solve the sequence by properly closing the parentheses. formal_fallacies In determining the deductive validity of arguments based on explicit premises, a meticulous analysis of the logical relationships and implications is essential for definitively establishing their soundness, confirming their validity or invalidity, and ensuring a reliable and robust assessment of the arguments at hand. geometric_shapes The SVG path element with the "d" attribute plays a crucial role in web development, allowing for the precise definition and rendering of various shapes on a webpage. hyperbaton Understanding the correct order of adjectives is crucial for constructing grammatically accurate and coherent sentences that effectively convey the intended meaning in diverse contexts while ensuring clarity, cohesion, and consistency throughout consistently and effortlessly. logical_deduction _seven_objects By conducting a meticulous analysis of the given information and ensuring logical consistency within each paragraph, we can accurately determine the precise order or ranking of the mentioned objects, allowing us to confidently and consistently identify the correct answer in every presented scenario with utmost precision and confidence. movie_recommendation Which movie option from the given choices closely matches the mentioned films in terms of themes, storylines, and characteristics, guaranteeing the highest possible similarity score among them all? multistep_arithmetic_two Evaluate the given mathematical expressions step by step to determine the correct solutions accurately. navigate Is it possible to determine, with absolute certainty, whether strictly adhering to the given instructions will unfailingly bring you back to the original starting point without any exceptions, errors, or deviations? object_counting Determine the total number of objects or entities mentioned in the given list, covering various categories and types, to accurately calculate the overall count. penguins_in_a_table From the given table, what information can we gather about the mentioned animals and their respective attributes, including names, ages, heights, and weights? reasoning_about _colored_objects By thoroughly examining the given information, accurately determine the answers for each question by considering the specific characteristics, colors, and positions of the mentioned objects. ruin_names Select the most amusing and clever alteration from the options provided for the given artist, movie, or title name, and accurately choose the correct answer to test your wit and creativity. salient_translation _error_detection Thoroughly examine the given translations from German to English and accurately identify any errors by carefully analyzing the text and selecting the appropriate option with meticulous attention to detail, precision, utmost accuracy, and comprehensive understanding of the language for precise evaluation and categorization. snarks Which option delivers the most devastatingly sarcastic response, brilliantly exposing the sheer absurdity and leaving absolutely no doubt whatsoever in all the given situations? 
sports_understanding Maintaining the accuracy, reliability, and integrity of sports event representation is essential for upholding the highest standards of credibility, trustworthiness, and overall quality in conveying information, without any compromise, misrepresentation, or distortion, thereby ensuring the factual accuracy of sports journalism. temporal_sequences Based on the provided timeline and observed activities, we can accurately determine the possible time range when each individual could have visited their intended destinations and answer questions about their visitation time. tracking_shuffled_objects _seven_objects An important point to note is that each person in the group starts with one specific book at the beginning of the semester. web_of_lies | 2309.03409#107 | 2309.03409#109 | 2309.03409 | [
"2205.12548"
] |
2309.03409#109 | Large Language Models as Optimizers | Analyzing the consistency and accuracy of statements provided by each person is crucial for determining the truthfulness of individuals in every scenario. # word_sorting Please sort the given words in alphabetical order: The list of words to be sorted contains - 37 # Large Language Models as Optimizers Table 12: BBH task-wise Q_begin instructions found by prompt optimization with the text-bison scorer and the gpt-3.5-turbo optimizer. The optimizations start from the empty string. # Task # Our Instruction boolean_expressions Group sub-expressions with parentheses to accurately evaluate logical operations: not, and, and finally or. Determine the resulting value as either True or False. causal_judgement Consider the intentions and actions of the individuals involved. date_understanding Determine the one-day difference in the given date and express it in the format MM/DD/YYYY. disambiguation_qa Determine the precise antecedent of the pronoun in the given sentence and select the correct option or state if it is ambiguous. dyck_languages Ensure that all opening brackets have a corresponding closing bracket, and that the closing brackets are in the correct order. formal_fallacies Thoroughly analyze the explicitly provided premises and determine the deductive validity of the argument based on all necessary conditions, implications, exclusions, and dependencies given. geometric_shapes Analyze the given SVG path element carefully and confidently select the correct option from the provided choices to accurately determine the corresponding shape. Pay close attention to the specific path details and confidently make the most suitable choice. hyperbaton Select the sentence that strictly adheres to the standard order of adjectives: opinion, size, age, shape, color, origin, material, and purpose. Ensure there are no deviations or alterations in the adjective order. Choose the option without any changes. logical_deduction _seven_objects Analyze the given information to accurately determine the precise order and ranking of the mentioned objects/people, considering their relationships, positions, and any provided comparisons, for a definitive and logical progression with maximum accuracy and efficiency. movie_recommendation Based on the movie list provided, carefully consider your preferences and make a well-informed decision. multistep_arithmetic_two First, simplify any expressions within parentheses following the correct order of operations to accurately evaluate the final answer with efficiency and precision. navigate Always face forward. Take 10 steps forward. Turn left. Take 5 steps forward. Take 3 steps backward. Finally, take 7 steps forward. | 2309.03409#108 | 2309.03409#110 | 2309.03409 | [
"2205.12548"
] |
2309.03409#110 | Large Language Models as Optimizers | Turn around and take 1 step forward. Repeat the previous sequence three times. Follow the given path precisely without any deviations. At the end, turn right and take 11 steps forward. If you follow these instructions, will you return to the starting point? Options: - Yes - No object_counting Determine the total count of mentioned vegetables accurately and state the final count as the answer. penguins_in_a_table Analyze the given table to accurately determine the required information based on the provided criteria and attributes of the penguins and giraffes. Utilize efficient problem-solving strategies to arrive at the correct answer. reasoning_about _colored_objects ruin_names State the color of the object mentioned in the given arrangement with utmost accuracy. Choose the option that offers the most clever and humorous alteration of the given artist or movie name. Let your creativity shine and select the answer that will undoubtedly bring a smile to your face! Make sure to think outside the box! salient_translation _error_detection Analyze the translation and accurately identify the specific error type based on the source text, providing the most appropriate corresponding option. snarks Choose the option that wickedly embodies sarcasm. sports_understanding Determine the plausibility of the given statement by evaluating factual accuracy, logical consistency, and contextual relevance, then provide a succinct and well-justified response. temporal_sequences Identify the optimal time slot for the individual to engage in the mentioned location/activity considering the given sightings and waking up time, taking into account the opening and closing times of the location and the duration of each event. tracking_shuffled_objects _seven_objects Pay attention to the given information and track the swaps/exchanges carefully to accurately determine the final possession/position/outcome for the specified individual. web_of_lies To determine the truthfulness of the last person mentioned, analyze the consistency of each statement and count the number of individuals accusing the previous person of lying. If the count of accusers is even, that person tells the truth; if it is odd, that person lies. word_sorting Alphabetically sort the given list of words, ensuring all words are included and in ascending order. | 2309.03409#109 | 2309.03409#111 | 2309.03409 | [
"2205.12548"
] |
2309.03409#111 | Large Language Models as Optimizers | 38 # Large Language Models as Optimizers Table 13: BBH task-wise Q_end instructions found by prompt optimization with the text-bison scorer and the gpt-3.5-turbo optimizer. The optimizations start from the empty string. Task Our Instruction boolean_expressions Accurately use order of operations and parentheses to evaluate logical expressions and determine truth values efficiently. causal_judgement Consider all relevant factors, prioritize overall well-being and ethical considerations, make well-informed decisions while foreseeing potential consequences efficiently, and consistently strive for optimal outcomes with empathy and adaptability in a thoughtful and comprehensive manner. date_understanding Subtract the specified number of days from the given date and format the outcome as MM/DD/YYYY to accurately determine the desired result in an efficient manner. disambiguation_qa Clearly identify and select the unambiguous antecedent for the pronoun or designate it as "Ambiguous" if it is unclear. dyck_languages Add the missing closing parentheses. formal_fallacies Determine the deductive validity of the argument presented based on the explicitly stated premises and reach a definitive conclusion. geometric_shapes Analyzing the given SVG path element, accurately determine its shape by closely examining its curves and coordinates, then select the correct option. hyperbaton Choose the option with the correct adjective order in each sentence, prioritizing specific attributes like size, color, and origin. Place the most specific adjective before the more general ones for precise and standardized ordering across all examples. Ensure accurate alignment of the adjectives based on their respective attributes for consistent and standardized ordering. logical_deduction _seven_objects Determine the precise order of the given objects/participants based on the provided information and establish the final ranking accurately, considering all relevant factors, while maintaining logical consistency with maximum efficiency. movie_recommendation Choose the most similar option from the choices provided that closely aligns with the given moviesâ themes, genres, and impact for the most accurate recommendation possible. Make your selection wisely. multistep_arithmetic_two Carefully follow the order of operations to precisely simplify the expressions within parentheses and efficiently find the accurate final answer. navigate Always face forward. Take 10 steps forward. Turn right and walk for 5 steps. Then, make a left turn and continue for 9 steps. Proceed by walking 6 steps backward. Finally, turn around and take 200 steps. | 2309.03409#110 | 2309.03409#112 | 2309.03409 | [
"2205.12548"
] |
2309.03409#112 | Large Language Models as Optimizers | Accurately track your movements, diligently adhere to the given path, and ensure to return to the starting point without any deviations or obstacles. object_counting Determine the total count of items mentioned, including all listed items, using an efficient and concise method. State the final count as your answer. penguins_in_a_table Identify the animal with the maximum measurement (weight, age, or height) in the table and state its name and species. reasoning_about _colored_objects Determine the color of each item in the given scenario and select the correct color option from the provided choices for accurate responses, ensuring utmost precision and completeness. ruin_names Choose the option that creatively and hilariously transforms the given artist or movie name. salient_translation _error_detection Carefully analyze the translations and select the most suitable option from the given choices to rectify the specific error category, ensuring complete precision, accuracy, and faithful representation of the intended meaning, while considering all relevant information in the source text. snarks Choose the option that cleverly employs sarcasm to defy all expectations and leave everyone utterly dumbfounded, questioning the very essence of their own perception. sports_understanding Evaluate the plausibility of each given statement and provide a well-supported justification based on logical reasoning, contextual understanding, and relevant evidence to arrive at a definitive and conclusive answer. temporal_sequences Identify the possible time slot for the desired activity based on the given information and sightings, then select the correct option. tracking_shuffled_objects _seven_objects Thoroughly analyze the given scenarios, systematically consider all available information, and confidently determine the final outcome with exceptional precision and optimal efficiency, while maintaining a strategic and logical approach throughout the process. web_of_lies Examine each personâ s statements meticulously to accurately determine the truth and confidently identify who is telling the truth, enabling you to effectively solve the given problem. word_sorting Sort the given words alphabetically using spaces as separators while maintaining their original order and including all words. | 2309.03409#111 | 2309.03409#113 | 2309.03409 | [
"2205.12548"
] |
2309.03409#113 | Large Language Models as Optimizers | E.3 PALM 2-L AS SCORER, GPT-3.5-TURBO AS OPTIMIZER, OPTIMIZATION STARTING FROM "LET'S SOLVE THE PROBLEM." Figure 26 and Table 14 compare the accuracies of found instructions vs "Let's solve the problem.", "Let's think step by step.", and the instructions in Table 11. Table 15 details the found instructions. The "Let's" pattern appears more often in the found instructions because of the starting points, and the instructions are more often declarative sentences, which are more suitable for A_begin, even if some are semantically far from "Let's solve the problem". In fact, "Let's" | 2309.03409#112 | 2309.03409#114 | 2309.03409 | [
"2205.12548"
] |
2309.03409#114 | Large Language Models as Optimizers | was adopted by Zhou et al. (2022b) as a fixed pattern in generated prompts, possibly because of the same reason. [Figure 26 bar charts of accuracy differences; recoverable panel titles in this chunk: (a) ours minus "Let's think step by step." (b) ours minus "Let's solve the problem." starting point; the remaining panel and caption follow in the next chunk.] | 2309.03409#113 | 2309.03409#115 | 2309.03409 | [
"2205.12548"
] |
2309.03409#115 | Large Language Models as Optimizers | (c) ours minus the instructions found with the empty starting point Figure 26: On 23 BBH tasks, the accuracy differences among instructions found by prompt optimization (with the text-bison scorer and the gpt-3.5-turbo optimizer), "Let's think step by step.", and "Let's solve the problem." (optimization starting point). The found instructions mostly outperform the "Let's think step by step." baseline, the "Let's solve the problem." starting point, and the instructions in Table 11 found by prompt optimization from the empty string. | 2309.03409#114 | 2309.03409#116 | 2309.03409 | [
"2205.12548"
] |
2309.03409#116 | Large Language Models as Optimizers | # Large Language Models as Optimizers Table 14: Accuracies on BBH tasks with the PaLM 2-L scorer and the gpt-3.5-turbo optimizer that starts from â Letâ s solve the problemâ . The scores are from A_begin instructions. Task Scorer Our Acc â Letâ s solve the problem.â Acc training / test / overall training / test / overall PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L 98.0 / 89.5 / 91.2 83.8 / 58.7 / 63.6 90.0 / 82.0 / 83.6 78.0 / 68.0 / 70.0 100.0 / 100.0 / 100.0 84.0 / 62.0 / 66.4 62.0 / 42.5 / 46.4 94.0 / 91.5 / 92.0 66.0 / 53.0 / 55.6 88.0 / 88.0 / 88.0 66.0 / 55.0 / 57.2 76.0 / 67.0 / 68.8 96.0 / 92.5 / 93.2 86.2 / 70.9 / 74.0 88.0 / 69.0 / 72.8 92.0 / 85.5 / 86.8 66.0 / 67.5 / 67.2 88.6 / 76.9 / 79.2 72.0 / 63.5 / 65.2 100.0 / 99.5 / 99.6 56.0 / 63.5 / 62.0 56.0 / 58.5 / 58.0 52.0 / 44.5 / 46.0 78.0 / 69.0 / 70.8 62.0 / 61.3 / 61.5 74.0 / 71.0 / 71.6 52.0 / 54.5 / 54.0 94.0 / 97.0 / 96.4 68.0 / 54.0 / 56.8 30.0 / 22.0 / 23.6 72.0 / 77.0 / 76.0 38.0 / 36.5 / 36.8 66.0 / 76.0 / 74.0 30.0 / 22.0 / 23.6 54.0 / 63.5 / 61.6 58.0 / 58.0 / 58.0 69.0 / 72.6 / 71.9 78.0 / 69.5 / 71.2 76.0 / 79.5 / 80.8 30.0 / 35.5 / 34.4 80.0 / 70.6 / 72.5 60.0 / 50.5 / 52.4 96.0 / 92.5 / 93.2 42.0 / 51.5 / 49.6 0.0 / 4.0 / 3.2 18.0 / 20.5 / 20.0 | 2309.03409#115 | 2309.03409#117 | 2309.03409 | [
"2205.12548"
] |
2309.03409#117 | Large Language Models as Optimizers | 41 # Large Language Models as Optimizers Table 15: BBH task-wise Q_begin instructions found by prompt optimization with the PaLM 2-L scorer and the gpt-3.5-turbo optimizer. The optimizations start from â Letâ s solve the problemâ . boolean_expressions Letâ s accurately assess the given conditions and determine their corresponding Boolean values. causal_judgement Letâ s conduct a meticulous evaluation of the given scenarios, accurately determine the causal relationships, and provide definitive answers through comprehensive analysis, ensuring a precise understanding of causation and a thorough determination of events in each situation. date_understanding Letâ s accurately determine the correct date based on the given information and select the corresponding option in the standard MM/DD/YYYY format with utmost precision and reliability, ensuring the most definitive and reliable solution possible for accurate representation in all scenarios without any room for ambiguity, error, or confusion, and providing the highest level of accuracy and reliability. disambiguation_qa Letâ s thoroughly analyze the given sentences to accurately determine the unambiguous antecedents of the pronouns used, ensuring clear understanding, effective communication, and leaving no room for any confusion or ambiguity. dyck_languages Letâ s find the correct closing parentheses and brackets for the given sequences. formal_fallacies Letâ s thoroughly analyze the explicitly stated premises and draw definitive conclusions to accurately determine the deductive validity of the arguments provided in each question, employing precise and logical reasoning in our assessments for unwavering confidence in our determinations. geometric_shapes Letâ s accurately determine the shape represented by the given SVG path element by carefully analyzing its path data and considering all available options for a precise identification. hyperbaton Letâ s quickly identify the correct adjective order. logical_deduction _seven_objects Letâ s methodically analyze the given information, employ logical reasoning, thoroughly evaluate all relevant details, and accurately determine the solutions for each problem by considering all provided options comprehensively and strategically, ensuring an efficient and effective approach towards arriving at the correct answers. movie_recommendation Letâ s uncover the perfect movie recommendation from the options provided, ensuring an exceptional cinematic experience together as we select the most captivating and satisfying choice that will keep us thoroughly engaged and immersed until the very end. multistep_arithmetic_two Letâ s tackle the following calculations. navigate Letâ s accurately and efficiently determine the correct solution for each given scenario, ensuring the highest level of precision, reliability, and consistency throughout. object_counting Letâ | 2309.03409#116 | 2309.03409#118 | 2309.03409 | [
"2205.12548"
] |
2309.03409#118 | Large Language Models as Optimizers | Let's determine the total count of various items/objects/ingredients/animals mentioned in order to accurately and efficiently find the answer. penguins_in_a_table: Let's analyze the given information and determine the correct answer. reasoning_about_colored_objects: Let's systematically analyze the given information and carefully evaluate each answer choice to confidently determine the accurate and optimal solutions, considering all available options and specific details provided in each question for precise and concise responses, ensuring complete accuracy and clarity in our answers. ruin_names: Prepare to have a side-splittingly funny time as we uncover the most clever and hilarious alternatives for these artist or movie names, challenging your wit to guess the correct one with a burst of creativity, humor, and imaginative twists! salient_translation_error_detection: Let's meticulously analyze the provided translations, accurately identifying any errors or discrepancies, and conduct a comprehensive evaluation to ensure the highest level of translation quality and fidelity. By considering contextual nuances, cultural references, linguistic conventions, potential factual errors, and any dropped content, our ultimate aim is to achieve precise and thorough assessments for optimal translation accuracy and adherence to the source text. snarks: Let's expertly determine the sarcastic statement among the given options and confidently provide the definitive answer without any room for doubt or confusion, ensuring absolute precision, clarity, and unwavering expertise in our response, while carefully analyzing the context, tone, and intention behind each statement to achieve unrivaled accuracy and unwavering confidence. sports_understanding: Let's find the accurate information. temporal_sequences: The flawless approach tracking_shuffled_objects_seven_objects: By meticulously analyzing the given scenarios and accurately determining the final outcomes through a series of trades, swaps, and exchanges among the individuals involved, let's ascertain the conclusive results. web_of_lies: word_sorting: Employing efficient and precise measures, sort the given list of words in alphabetical order to provide an optimal solution for any sorting problem, ensuring maximum performance and effectiveness. | 2309.03409#117 | 2309.03409#119 | 2309.03409 | [
"2205.12548"
] |
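To make the evaluation protocol behind Tables 14 and 15 concrete, the sketch below shows how one Q_begin instruction could be scored on a task's training and test splits. It is an illustrative reconstruction only: `load_bbh_task` and `call_scorer_llm` are hypothetical stand-ins for the benchmark loader and the PaLM 2-L scorer, and the exact prompting and answer-matching logic used in the paper may differ.

```python
def accuracy(instruction: str, examples: list, call_scorer_llm) -> float:
    """Percentage of examples answered correctly under a Q_begin instruction."""
    correct = 0
    for ex in examples:
        # Q_begin placement: the instruction precedes the task question.
        prompt = f"{instruction}\n\nQ: {ex['question']}\nA:"
        prediction = call_scorer_llm(prompt)
        correct += int(prediction.strip() == ex["answer"])
    return 100.0 * correct / len(examples)

# Hypothetical usage for one task; the overall score would then be the
# size-weighted combination of the training and test accuracies.
# train, test = load_bbh_task("hyperbaton")
# acc_train = accuracy("Let's quickly identify the correct adjective order.", train, call_scorer_llm)
# acc_test = accuracy("Let's quickly identify the correct adjective order.", test, call_scorer_llm)
```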
2309.03409#119 | Large Language Models as Optimizers | 42 | 2309.03409#118 | 2309.03409 | [
"2205.12548"
] |
|
2309.02033#0 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | arXiv:2309.02033v3 [cs.LG] 20 Dec 2023 # Data-Juicer: A One-Stop Data Processing System for Large Language Models Daoyuan Chen∗, Yilun Huang∗, Zhijian Ma∗, Hesen Chen∗, Xuchen Pan∗, Ce Ge∗, Dawei Gao∗, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li‡, Bolin Ding‡, Jingren Zhou Alibaba Group | 2309.02033#1 | 2309.02033 | [
"2306.11644"
] |
|
2309.02033#1 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | ABSTRACT The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high- quality data. A data recipe is a mixture of data of different types and from different sources for training an LLM, which has been known as one of the most important factors that decide the LLMâ s performance. Existing open-source tools for LLM data processing are mostly tailored for preparing specific data recipes. To continu- ously uncover the potential of LLMs, incorporate (after cleaning) data from new sources, and improve LLMsâ general-purpose or domain-specific performance, we build a data processing system, named Data-Juicer, with which we can efficiently generate di- verse data recipes, explore different possibilities in forming the data mixtures, and evaluate their effects on the model performance. Dif- ferent from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for form- ing data recipes are truly heterogeneous and massive with various qualities (e.g., considering all web-pages on the Internet). Secondly, it is extremely expensive to precisely evaluate data recipesâ | 2309.02033#0 | 2309.02033#2 | 2309.02033 | [
"2306.11644"
] |
2309.02033#2 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | impact on the LLMsâ performance. Thirdly, sufficient flexibility needs to be provided to the end users of Data-Juicer, model developers, to configure and evaluate different data recipes. general-purpose corpus and are fine-tuned with specific-purpose data for alignment or downstream tasks. For pre-training data, a collection of diverse data, including web texts, dialogues, academic papers, code bases, and others, help to develop the vast repository of knowledge and great applicability [9, 57, 75]. Fine-tuning data, which further refines LLMs and aligns model behavior with human values [3, 48, 68]. | 2309.02033#1 | 2309.02033#3 | 2309.02033 | [
"2306.11644"
] |
2309.02033#3 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | As â garbage in, garbage outâ suggests, the input data for training or tuning an LLM has a direct impact on the quality of the derived model [35, 44]. Building effective data processing solutions for LLMs remains a sophisticated yet fully under-explored task, given the common challenges in processing both pre-training and fine-tuning data, which pursue good data quality, proper data diversity, and large data volume. Unfortunately, there exist only a few open-source projects con- tributing their LLM training data and the corresponding processing codes [24, 51], particularly in comparison to numerous open-source projects on models and training infrastructures [6, 7, 19, 67, 80, 93, 105]. Such limited development of data processing will obstruct the progress of quantitatively understanding and enhancing LLMs from the perspective of data, especially accompanied by the following noteworthy Challenges for LLM data processing. Data-Juicer features a fine-grained abstraction of the pipeline for constructing data recipes, with over 50 built-in operators that can be freely composed and extended. By incorporating visualiza- tion and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop after data processing for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed comput- ing. With the help of Data-Juicer, we derive data recipes that achieve remarkable performance boosts on state-of-the-art LLMs, demonstrating up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. More importantly, we hope that Data-Juicer pro- motes broader data-centric research on training and understanding LLMs. Data-Juicer and our data recipes are released and actively maintained at https://github.com/alibaba/data-juicer. (C1) High Heterogeneity in LLMâ s Data Recipe. LLMs in- volve several developmental stages and enable diverse usages in- cluding coding and dialog assistance, and even aiming at Artificial General Intelligence. As a result, they demand an extensive variety of data types, formats, and quality in their training data, leading to highly complex data-processing pipelines. | 2309.02033#2 | 2309.02033#4 | 2309.02033 | [
"2306.11644"
] |
2309.02033#4 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | A data recipe for training or tuning an LLM is such a mixture of processed data from different types of sources, with their ratios and processing pipelines properly set [24, 25]. Existing systems, e.g., [24, 80], release certain processing scripts to generate data recipes for the pre-training pur- pose, whereas [17, 92] focus on data recipes for improving data diversity and quality in LLaMAâ s [93] fine-tuning stage. However, due to the lack of abstraction of processing pipelines and compos- ability of operators (OPs), such as those for data editing, cleaning, and filtering, it is difficult to incorporate new data sources in data recipes provided by these systems, or to extend their pipelines for exploring other possibilities of data recipes. 1 INTRODUCTION Large Language Models (LLMs) [9, 18, 69, 70, 90, 92] have achieved unprecedented intelligence, enabling applications that would other- wise be infeasible due to unsatisfied performance. | 2309.02033#3 | 2309.02033#5 | 2309.02033 | [
"2306.11644"
] |
2309.02033#5 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | As the â foodâ for LLMs, data plays a pivotal role in these exciting advancements [31, 62, 71, 103]. LLMs are built by pre-training on large-scale â Co-first authors. â Equal contribution. â ¡Corresponding authors, email addresses: {yaliang.li, bolin.ding}@alibaba-inc.com (C2) Timely Feedback for Data Recipe. The search space of LLMâ s data recipes is huge due to the high degree of heterogeneity in data sources and numerous ways to mix them (with proper pro- cessing OPs, combinations, and ratios). We want to explore as many data recipes in the search space as possible with timely feedback to uncover the potential of LLMs and improve their performance. However, as the size of an LLM (number of model parameters) is usually billions or even larger, it is super expensive, in terms of both the time and computational resources, to evaluate the impact (Take-it-away Users) a > 7 Pre-training/Fine-Tuning (Megatron-LM, Transformers, ...) | Auto-Evaluation (LLM API, HELM, ...) LLM Ecosystems Zero-code Data 4 Feedback Processing e Plentiful Data Recipes & Demos aw for Pre-training (RedPajama, oscar, refined, ...) (Novice Users) (instruction, alignment, refined, ...) for Fine-tuning t Distributed Computing Checkpoints a 5 A Ecosytems C Low code Flexible & Well-documented Configuration Cosytems â ustomization oS ae data clean || data mixture data re-format data probe B (Experienced Users) 4 Versatile & Resuable OPs Dedicated & Pluggable Tools Off-the-shelf Mappers Filters op Analyzers Quality Classifiers Sampler Data Processing (transform data in-place) || (remove specific info) |} Ey sion (OP-effect, HPO, ...) || (GPT-3, chinese, code, ...) || (meta, stats, ...) Components Deduplicators Formatters (Ganz, Visualizers Reference LMs Tracer (compare in multile views) || (unify json, txt, pdf...) || Peorderin®) (histgram, diversity, ...) | | (LLaMA, ModelScope, ...) || (lineage, ...) # Figure 1: Overview of Data-Juicer. of a data recipe on the LLMâ | 2309.02033#4 | 2309.02033#6 | 2309.02033 | [
"2306.11644"
] |
2309.02033#6 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | s performance by training or tuning it with the recipe [85] and running evaluation benchmarks [59]. (C3) Usability and Customizability. The workflow of training or tuning LLMs starts from processing raw data. Exacerbated by the above two challenges, there is an urgent need for a data-centric infrastructure, so that the model developers can easily re-use or implement their own OPs and tools for data processing, configure their processing pipeline, explore various data recipes, and eval- uate the resulting LLMsâ performance. We need such a system to accelerate the exploration and understanding of LLMsâ potentials. (C4) Massive Data Volume. Last but not least, LLMs are trained on vast corpora, with data volumes stretching to an unprecedented magnitude of billions or even trillions of tokens (a modeling unit of text dependent on the used tokenizer [49]). Efficient LLM data processing of such volume is critical but arduous. However, consid- erations on system performance optimization are often bypassed by existing studies, leaving significant room for enhancement in en- suring the stability of data processing and facilitating the deliveries of processed data and trained weights for LLMs. Overview of Data-Juicer. In this paper, we advocate for a one- stop data processing system that addresses these challenges, en- abling comprehensive, user-friendly, and efficient data processing abilities to facilitate data-centric LLM research and development. The proposed system, named Data-Juicer and illustrated in a bottom-up view in Figure 1, is strategically designed to generate data recipes making data more â juicyâ and digestible for LLMs. We decouple the mixture elements of existing solutions for LLM data processing, such as specific data types, auxiliary models, and downstream tasks. As highlighted by the green boxes, Data-Juicer fosters a fine-grained abstraction and implementation of compos- able modules with over 50 versatile OPs and dedicated tools. | 2309.02033#5 | 2309.02033#7 | 2309.02033 | [
"2306.11644"
] |
2309.02033#7 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | We make Data-Juicer end-to-end configurable to help prepare trace- able, comparable, and refinable data recipes at various scenarios of LLM pre-training and fine-tuning, as shown in the yellow and pink boxes. Coupled with established auto-evaluation capabilities, Data-Juicer supports a timely feedback loop at multiple devel- opment stages of data recipes and LLMs, thereby promoting the production of valuable LLM data. To meet diverse user backgrounds and needs (marked by the left three rectangle boxes), we design Data-Juicer as an easy-to- use, flexible and extensible system. Beginners are shielded from underlying complexities and benefit from numerous ready-to-use datasets, data recipes, and pluggable tools, supporting zero-code LLM data processing. With the help of the flexible configuration module, experienced users can simply modify built-in data recipes, reorganize the order of OPs and tools, and tune the value of their hyper-parameters, to meet their lightweight customization needs. Thanks to the standardization and modularization, advanced users are empowered to conveniently extend and register their new OPs and tools into Data-Juicer, facilitating quick engagement in sec- ondary development. Furthermore, we offer more than a dozen interactive tutorials implemented by streamlit [87] to help users with their LLM data processing journey. Data-Juicer hinges itself on the Huggingface-datasets library [55], providing a unified intermediate representation of data and achieving optimized space-time efficiency and robustness through various techniques such as context management, OP fusion, caching, and checkpoint mechanisms. Furthermore, as the right two circles show, Data-Juicer seamlessly integrates with ecosystems for LLM training and evaluation such as Megatron-LM [85] and HELM [59], and distributed computing ecosystems such as Ray [66] and Beam [5], thus facilitating comprehensive LLM data processing and en- hancing large-scale data processing capabilities. Leveraging the proposed system, we refine several open-sourced datasets and derive numerous data recipes for both LLM pre-trained and fine-tuning. These refined datasets are not only higher in qual- ity but also more digestible by LLMs, leading to effective perfor- mance improvements of LLMs. Empirical analysis showcases an improvement of up to 7.45% averaged score across 16 LLM bench- marks using our refined pre-training data. | 2309.02033#6 | 2309.02033#8 | 2309.02033 | [
"2306.11644"
] |
2309.02033#8 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Even pre-trained on only 43% quantity of compared data, we observe superior performance over state-of-the-art (SOTA) LLMs such as Falcon [1]. Moreover, compared with SOTA LLMs fine-tuned on competitive open English and Chinese data, LLMs fine-tuned on Data-Juicerâ s data gain an average of 10.3% higher win rate of pair-wise GPT-4 evaluation, while with an average 56.8% fewer data quantity. Finally, we intro- duce its utility in real-world deployment, and validate its superior system efficiency and scalability of Data-Juicer, by up to 88.7% reduction in single-machine processing time and 77.1% savings in memory usage, and 7.91x distributed processing acceleration. | 2309.02033#7 | 2309.02033#9 | 2309.02033 | [
"2306.11644"
] |
2309.02033#9 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Contributions. Our contributions are summarized as follows: â ¢ We propose and build a novel system for LLM data processing, Data-Juicer, which is featured by decoupled modules and over 50 versatile OPs and tools. To easily dive into data quality and insights, Data-Juicer fosters a timely feedback loop with inter- active visualizations and auto-evaluation capabilities. Demonstrated by extensive empirical evidence, Data-Juicer produces numerous high-quality data recipes to enhance LLMs and exhibits superior system performance, powered by dedicated optimization and integrated distributed computing ecosystems. â ¢ We integrate data-centric methodologies for LLM data processing and LLM development with user-centric interface designs, with the aim that Data-Juicer can ease access for diverse users and democratize LLM data processing. | 2309.02033#8 | 2309.02033#10 | 2309.02033 | [
"2306.11644"
] |
2309.02033#10 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | â ¢ To promote further research and development, our system, data recipes, and tutorials are maintained and released at https:// github.com/alibaba/data-juicer, which we hope can help pave the way for next-generation production paradigms of LLM data. Organization. The subsequent sections describe Data-Juicer in detail. Sec. 2 elaborates on the background and related studies. Sec. 3 outlines our OP pool, as a response to high heterogeneity of LLM data recipes (C1). Sec. 4 delves into our formulation of timely feedback loops for data processing and development of LLMs (C2). Sec. 5 details our repository of data recipes and tools that counteract usability and customization issues (C3). Sec. 6 expounds on the employed system optimization to tackle massive data volume (C4). Sec. 7 focuses on an extensive empirical evaluation for the quality of data recipes, performance and usability of Data-Juicer. Lastly, we draw a summary in Sec. 8. 2 BACKGROUND AND RELATED WORKS 2.1 Large Language Model (LLM) Data Large Language Models (LLMs). Language modeling is a crucial component for achieving machine intelligence [65, 109]. In the last few years, this field has witnessed remarkable advancements, particularly with the emergence of the pre-training and fine-tuning paradigms, where language models undergo an initial phase of training with a general-purpose corpus before being fine-tuned with specific-purpose tasks [27, 72]. This procedure has yielded exceptional performance across a spectrum of natural language processing (NLP) tasks [54, 76]. Recently, taking advantage of the highly parallelizable nature of the self-supervised Transformer architecture, the scales of model parameters and training corpus for LLMs have significantly been increased [28, 69]. Meanwhile, LLMs have aroused considerable interest in the potential of artificial general intelligence [10, 11, 30, 38, 43, 99, 108]. While model-centric studies proliferate, how to better process LLM data remains an intricate domain yet to be completely unfurled, whether for pre-training or fine-tuning data. Pre-training Data. Pre-training serves as the foundation for LLM intelligence. By being trained on large amounts of high-quality data, LLMs can acquire elementary language comprehension and generation capabilities [37]. | 2309.02033#9 | 2309.02033#11 | 2309.02033 | [
"2306.11644"
] |
2309.02033#11 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Aiming to elucidate the link between data and LLMs intuitively, let us consider a typical pre-training objective prevalent among mainstream LLMs. Given a token se- quence [ð ¡1, ..., ð ¡ð , ..., ð ¡ð ], an LLM ð is trained to maximize the joint probability of the text as follows: ð â ï¸ ð 0 = arg max ð ð =1 log ð (ð ¡ð |ð ¡1:ð â 1; ð ). (1) This objective is for auto-regressive language modeling and allows the pre-trained ð 0 to predict the probability of the next token by adhering to the inherent sequential ordering of the language [94]. Exploiting this unified yet simple modeling goal, researchers col- lect a large volume and diverse range of corpus data, which usually contains hundreds of billion tokens or even trillion tokens. After tokenization and pre-training, LLMs have succeeded in stimulating a wide range of advanced capabilities. The LLM pre-training data generally includes various types derived from the web crawlers [26, 71], dialogues or social media [107], book-length formal texts [36, 110], rigorous encyclopedias and academic texts [31, 100], struc- tured coding texts [18, 57], and more texts from financial, medical and legal domains [58, 91, 104]. A challenge is nonetheless posed in the careful processing and formulation of pre-training data to filter noise, redundancy, irrelevance, and potentially toxic [33, 62]. Fine-tuning Data. Numerous studies have underscored that fine-tuning â the process of refining pre-trained LLMs using a smaller, task-specific dataset â can further enhance or unlock addi- tional capabilities of LLMs [40, 53, 97, 98]. Crucially, this process also paves the way for better aligning the behavior of these ad- vanced models with human values and preferences [60, 68]. In this phase, though the data volume decreases exponentially compared to the pre-training phase, the format of fine-tuning data is quite different [73]. Typically, given a textual dataset {(ð ¥1, ð 1, ð ¦1), ..., (ð ¥ ð , ð ð , ð ¦ ð ), ..., (ð ¥ð , ð ð | 2309.02033#10 | 2309.02033#12 | 2309.02033 | [
"2306.11644"
] |
2309.02033#12 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | , ð ¦ð )}, the goal of fine-tuning is to adjust the pre-trained LLM ð 0 to find ð â that maximizes the likelihood of the task-oriented response ð ¦ ð for the user query ð ¥ ð : ð â ï¸ ð â = arg max ð ð =1 log ð (ð ¦ ð |ð ¥ ð , ð ð ; ð ); ð â ð 0. (2) Here ð ð stands for task-specific instructions, such as â summarize the following text: â , optionally accompanied by a few demonstrative samples for in-context learning [9]. The fine-tuning data can be broadly categorized into two types: Instruct Fine-Tuning (IFT) datasets to enhance the instruction-following abilities of LLMs and are usually adapted from existing NLP bench- marks [4, 61]; and Chat Fine-Tuning (CFT) datasets for enhanced dialog ability and human value alignment [70, 92]. There are pre- liminary explorations emphasizing the importance of data diversity over volume for fine-tuning data [20, 95]. Several studies also indi- cate that data types representing human values can potentially lead to degraded general performance, a phenomenon known as the â | 2309.02033#11 | 2309.02033#13 | 2309.02033 | [
"2306.11644"
] |
2309.02033#13 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | alignment taxâ [70]. However, how to more effectively process the fine-tuning data to maximize its usefulness and minimize potential risks remains an open area for further investigation. The Symbiotic Nature of Pre-training and Fine-tuning Data. It is worth pointing out the analogous properties shared between these two types of data, which motivate our synergetic approach when bearing quality, diversity, and volume considerations in mind. Specifically, the quality aspect of the text has been studied exten- sively in existing literature [62]. Efforts have been made to enhance aspects such as text structure, the soundness of arguments, con- textual richness, writing correctness, comprehensiveness, levels of anonymization, and harmlessness. The widespread implemen- tation of cleaning, deduplication, and anonymization processes in pre-training data typifies the aforementioned pursuit. For exam- ple, researchers may opt to iterate over additional epochs with Wikipedia-style data in LLM training [93]. Similarly, fine-tuning data processing also employs filtering, deduplication, and detoxifi- cation strategies, aiming to enhance the user experience and the degree of aid offered by LLMs [17, 33]. Diversity is another shared property studied at length in both types of data. Mixing various types of data and finding suitable mix- ture weights to achieve appropriate diversity has been a primary concern in works for pre-training data processing [103]. Analo- gously, efforts for fine-tuning data aim to increase multi-view di- versity such as tuning tasks and expression styles, which further underscores this shared property [70, 77, 92]. In addition, the pursuit of quality and diversity tends to trade off with data volume, which is also reflected in these two types of data. Researchers have incessantly strived to empower LLMs with massive amounts of data, hoping to encapsulate as much human knowledge as possible. For instance, there has been an influx in pre- training data volumes to terabyte levels [69, 71], and fine-tuning data volumes have grown from mere thousands to millions [4, 96]. However, the counter effects of these initiatives are also brought into these large volumes of data, including heightened noise, poten- tial inferior quality, and increased bias, which necessitate additional data processing efforts and surging LLM training overheads. | 2309.02033#12 | 2309.02033#14 | 2309.02033 | [
"2306.11644"
] |
2309.02033#14 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | 2.2 Existing LLM Data Processing Solutions LLM data processing is an early area that is still working towards common standards, and we aim to embody a pioneering system for the community. With a commitment to open-source ethos, Data-Juicer caters to the increasing demand for versatile, flexible, user-friendly and efficient data processing solutions, details of which will be described later. This contrasts the well-known LLMs that were largely closed-source in data or data processing, such as the GPT derivatives [9, 18, 69, 84], LLaMA derivatives [16, 19, 89, 92, 93], and others [1, 15, 79, 102, 107]. While some progress has been made in the open-source LLM data processing landscape [4, 24, 51, 86], they have not fully delivered the abstraction and breadth of func- tionalities that Data-Juicer aims to bring to the forefront of the field. Examining this from the perspective of the target datasets, ex- isting works typically fixate on specific data sources and use cases for LLMs, spanning alignment of specialized English sub-datasets for LLaMA pre-training [93], assembly of multi-lingual corpora for pre-training [51], or crowdsourcing for fine-tuning prompt data [4]. However, they lack the systematic and modular processing abilities required to proficiently manage heterogeneous data, which is an area Data-Juicer strives to push its boundaries. These limitations become especially apparent when handling new data types, engag- ing in language transfer, or implementing particular data cleaning and transformations for LLM applications. Moreover, existing works suffer from sub-optimal usability and ability to explore data insight. Most of these works only offer the processed data along with purpose-built processing codes specific to those data, lacking in ease-of-use considerations and support of assistive tool-kits. This hinders their adaptability to diverse users and alternative usages. Users might face a daunting task when substituting data processing goals or conducting analyses due to a dearth of complementary data-analytical capabilities. The re- development of data processing tools and analytical methodologies, specifically tailored for LLMs, remains largely uncharted territory. Furthermore, the focus of current works gravitates towards func- tionality rather than system performance, leaving large room for enhancement in efficiency, space management and scalability. | 2309.02033#13 | 2309.02033#15 | 2309.02033 | [
"2306.11644"
] |
2309.02033#15 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Note- worthy shortcomings include reliance on single-machine Python scripts, inappropriate handling of large-scale data, and poor pro- cessing speeds due to the utilization of Pythonâ s plain dict object. We will provide further empirical comparisons in terms of both the quality of the generated data recipes (Sec. 7.1) and the perfor- mance of the data processing system (Sec. 7.2). 3 STANDARDIZED OPERATOR POOL In addressing the heterogeneity of data recipes for LLMs (Chal- lenge 1 in Sec. 1), we devise a set of standardized operator (OP) pool. As outlined in Table 1, the OPs are organized into four primary categories: Formatters, Mappers, Filters, and Deduplicators, which incorporate diverse categories, functions, inputs, processing levels, outputs, and application scenarios. Core principles of decoupling and composability guide their structuring, resulting in a varied yet standard set of procedures that contribute to flexibility and user interaction at multiple processing levels. This strategic im- plementation enhances reusability and reduces complexity, aiding streamlined and decoupled data recipe construction. 3.1 Unified Data Representation We first introduce Formatter OPs designed to unify diverse data sources into an intermediate data representation. Specifically, we choose to build Data-Juicer upon Huggingface-datasets [55] due to its compatibility with mainstream LLM datasets and its column- oriented storage ability backed by Apache Arrow [2]. Our Format- ters maintain data objects that are instantiated from several unified base classes that simplify the process design for follow-up OPs and facilitate data accessing efficiency. We support numerous text input Table 1: Overview of the operator (OP) pool in Data-Juicer, with a detailed list continuously maintained at the official documentation: https://github.com/alibaba/data-juicer/blob/main/docs/Operators.md. Category Formatters Function Data format unifying Input Dataset Process Level Dataset Output Dataset OP Usage Examples Load and unify dataset-hub, txt, json, md, codes, html, pdf, docx, ... | 2309.02033#14 | 2309.02033#16 | 2309.02033 | [
"2306.11644"
] |
2309.02033#16 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Mappers In-place text editing Sample Single-sample; Multi-samples Sample; Samples Transform specified headers, textual elements; Fix messy codes; Enable text enhancement Filters Dedup- licators Conditional text removing Duplication removing Sample Single or Paired Dataset Single-sample; Dataset Dataset Boolean Dataset Filter by meta-info, stats (e.g., lines count); model scores; external resources (e.g., flagged words) Compare with hash-based and vector-based deduplication methods formats - txt, JSON, parquet, html, md, pdf, code files such as .py and .cpp, amongst others - and homogenize them into a structured format composed of certain columns with nested access support, which are conceptually organized by three primary parts â textâ , â metaâ , and â statsâ . These parts respectively hold the raw textual data, metadata information (e.g., date and version), and statistical data that can be generated and consumed by Data-Juicerâ s other OPs and tools. This interface works at either the text sample or dataset level, and is independent of underlying in-memory or disk data layout, alleviating the potential worry over heterogeneous data formats by OP developers. It is noteworthy that the outputs of Filter OPs are Booleans, which helps to decouple the implementations of actual data process- ing and computation for various statistics. This dedicated segrega- tion results in two key advantages. Firstly, it enables our dedicated analyzer-related tools (detailed in Sec. 5.2) to utilize these computed statistics for the entire dataset, rather than a filtered subset. Users are also allowed to generate fingerprints for specific partial sam- ples. Secondly, this decoupling enhances compatibility between Huggingface-datasets and Data-Juicer, thereby enabling the effi- cient reuse of the Dataset.map and Dataset.filter interfaces to perform these sub-processes in a streamlined manner. As a result, users can effortlessly extend their own custom OPs that only vary from existing OPs in specific partial processing behaviors. In Ap- pendix A.1, we offer an illustrative code example of this decoupling in Listing 1. 3.2 Versatile Data Processing Next, we elaborate on the functionality of the OP pool in Data-Juicer, which is pivotal to the comprehensive data processing tailored for LLMs. | 2309.02033#15 | 2309.02033#17 | 2309.02033 | [
"2306.11644"
] |
2309.02033#17 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Besides the Formatters, which play an essential role in uni- fying data formats and ensuring a consistent and efficient data flow throughout the processing pipeline, we now give more details about the other three types of data-transformation OPs in Table 1. Mappers facilitate crucial functionalities of in-place text edit- ing, necessary for single-sample or multi-sample processing across various needs of LLM data processing, such as modifying texts for pre-training and enhancing text diversity for fine-tuning. They effectively handle processing tasks like the removal of specific file headers, messy code rectification, and text enhancements. Filters come into play by conditionally filtering texts via individual- sample metrics, dataset-level statistics, or external resources like stop-word lists. In doing so, they can eliminate unnecessary text samples, contributing to data focus, cleanliness, and the cost reduc- tion of follow-up LLM training processes significantly. Deduplicators reduce potential storage waste and improve effi- ciency. As indicated by several studies [13, 47, 52], duplicate samples adversely affect both the pre-training stability and the performance of LLMs. Besides, Deduplicators help prevent unintentional data leakage during training into evaluation benchmarks, particularly for zero-shot or few-shot tasks [39]. To ensure accurate detection and removal of duplication, we provide efficient and robust methods including hash-based and vector-based comparisons [8, 14, 81]. 3.3 Composability Data-Juicerâ s OPs serve as a testament to our systemâ s versatility. They enable users to effortlessly process a variety of data types in a composable and modular manner, showcasing Data-Juicerâ s dedication to user adaptability and high-quality data production for LLMs. Besides the functions, inputs, outputs and processing levels summarized in Table 1, this composability is embedded in more facets, including the fields to be processed, OP hyper-parameters, and recommended use cases of each OP. Each OP in Data-Juicer is designed to serve a distinct function and can be commanded by users to process different text fields. For example, OP A could process the sample field â text.abstractâ , while OP B could focus on â text.main_bodyâ . By default, each OP process on â textâ field, which can be freely specified to other â metaâ or â statsâ related data fields according to usersâ needs. | 2309.02033#16 | 2309.02033#18 | 2309.02033 | [
"2306.11644"
] |
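The pre-training and fine-tuning objectives in Section 2.1 are garbled by the text extraction. Restated in standard notation — a reconstruction from the surrounding definitions, not a verbatim quote of the original equations — they read:

```latex
% (1) Auto-regressive pre-training over a token sequence [t_1, ..., t_T]:
\theta_0 = \arg\max_{\theta} \sum_{i=1}^{T} \log p\left(t_i \mid t_{1:i-1}; \theta\right)

% (2) Fine-tuning on {(x_j, s_j, y_j)}_{j=1}^{N}, where x_j is the user query,
% s_j the task-specific instruction, and y_j the target response, starting from \theta_0:
\theta^{*} = \arg\max_{\theta} \sum_{j=1}^{N} \log p\left(y_j \mid x_j, s_j; \theta\right), \qquad \theta \leftarrow \theta_0
```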
2309.02033#18 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | This adaptability allows for immense flexibility by simultaneously using OPs with different fields, enabling users to easily manipulate specific text snippets such as removing GitHub codes based on their star counts. Moreover, these OPs establish a one-size-fits-all solution that encompasses a multitude of configurable parameters such as the number of tokens, filtering thresholds, auxiliary models, and much more. This adjustability of a single OP, amalgamated with the com- posability of OP pipelines, empowers Data-Juicer to manage a spectrum of input, output, and processing granularity, contributing to its powerful processing abilities. For usage combinations, OPs are labeled with typical usage sce- narios. We maintain OP tags as general usage, LaTeX source files, programming codes, financial data processing, or language-specific processing such as English and Chinese, and so on. These labels facilitate easy navigation and operation, underscoring our aim to blend simplicity with power in Data-Juicerâ s architecture. 4 FEEDBACK-DRIVEN DATA PROCESSING Addressing Challenge 2 outlined in Sec. 1, we incorporate a dynamic feedback loop into the data processing pipeline, which allows users to process and understand data effectively via built-in visualization and automated tracking abilities. As demonstrated in Figure 2, our system (Data-Juicer) enables timely perception and swift iterative refinement of data recipes (indicated by the left and upward arrows) within a holistic feedback loop of LLM data processing and LLM training (indicated by the right arrows). Data Recipe Data Probe Data Data Quality LLMs Training/ built-in, custom] [analyser, visulizer] Processing â Assement Tuning } of oâ Mi = all â Interactive Visual HPO for recipe (+ Checkpoints & Cache) â Auto-Evaluation Figure 2: The feedback loop of Data-Juicer. We will discuss the modeling of the data processing feedback in a hyper-parameter optimization (HPO) perspective (Sec. 4.1), and go through the utility of the interactive visualization (Sec. 4.2), and the integration of ecosystems for LLM training and evaluations (Sec. 4.3). The synergy of these techniques offers an efficient and effective solution to debug and dive into LLM data processing. 4.1 HPO for Data Processing In Data-Juicer, we incorporate the concept of hyper-parameter optimization (HPO) into the data processing procedure. | 2309.02033#17 | 2309.02033#19 | 2309.02033 | [
"2306.11644"
] |
2309.02033#19 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | This is done by tying data-processing-specific hyper-parameters to a variety of feedback signals, including custom target metrics and visualization results. We enhance our systemâ s functionality by innovatively speeding up the data processing iteration through Checkpoint and Caching mechanisms, and by integrating an automated HPO tool. 4.1.1 Acceleration with Checkpoint and Caching. LLM data processing often necessitates frequent re-conduction due to the al- terations in OP hyper-parameters and potential conduction failures, such as exceeding available memory, disk or pre-defined time limits, especially for massive datasets. Accordingly, we provide built-in checkpoint and caching management to foster resilient and reliable data processing. Based on a carefully organized directory structure, Data-Juicer automatically monitors every running process for configuration changes, and creates new files to safely store data and processing states only when any error or exception occurs. While the checkpoint preserves the whole dataset and processing state enabling complete recovery of the processing site, the cache solely saves the dataset object for each OP and is more suited for smaller- scale adjustments as it reduces the overhead of pre-order caches. These techniques allow for a swift recovery during system restarts or failures, reverting to the most optimal recent processing state stored in the checkpoints, thus mitigating processing redundancy and increasing the feedback frequencies. Additionally, the proposed state-saving mechanism enables a flexible space-time trade-off at different feedback stages. Users have the option to save states after each OP in the data processing flow, ensuring minimal re-execution time at the cost of maximum storage overhead. Conversely, they could choose to only save the last OPâ s checkpoint and cache, incurring minimal storage overhead but increased re-execution time, especially when needing to revert to early steps in the process. To facilitate a good space-time trade-off, we further perform space complexity analysis for individual OPs, which aids in pre- dicting peak space occupancy and guides us in determining how many checkpoints and caches to store based on available space. By default, Data-Juicer actively monitors disk space usage at the start of data processing, and automatically determines if, and when, checkpoints and cache should be deployed. User-specified saving frequencies and rules are also supported. Consequently, strategic checkpoint and cache management reinforce both the resilience and efficiency of the feedback loop for LLM data processing. The detailed space usage analysis can be found in Appendix A.2. 4.1.2 Auto-HPO. | 2309.02033#18 | 2309.02033#20 | 2309.02033 | [
"2306.11644"
] |
2309.02033#20 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | We incorporate an automated HPO tool1 into Data-Juicer to streamline the finding of good data processing hyper-parameters. To reduce search costs of different data recipes, we support leveraging advanced HPO algorithms such as Bayesian optimization [82], progressive early-stop strategies, such as the Hy- perband algorithm [56], and built-in LLM-oriented sampling strate- gies (detailed later in Sec. 5.2). Specifically, given a pre-defined tar- get metric and search space of data recipes, users can conveniently explore the impact of specific data processing hyper-parameters. Here, we give an illustrative example as follows: Example of Data Mixing with HPO: Suppose we aim to find a good set of sampling weights for ð datasets to be mixed, where our search space is defined as ð ¤ð â [0, 1], ð â [1, ð ]. The pipeline can be structured as follows: (1) We specify the target text fields across all ð datasets, and unify their meta-tags and name of text fields via Formatter OPs. (2) We leverage meta-tag Filters to cater to specific usage scenarios. Here we only include samples with the language tag â | 2309.02033#19 | 2309.02033#21 | 2309.02033 | [
"2306.11644"
] |
2309.02033#21 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | ENâ . (3) A datasets Dð ð ð ¥ is generated from the ð datasets, with mixture weights [ð ¤ð ] drawn by the HPO scheduler to maximize the target metric in step (5). (4) A pre-configured data processing including de-duplication OPs is executed on the mixed dataset, ensuring dataset cleanness. (5) The target metric is calculated on Dð ð ð ¥ as (ð /ð + ð ), where ð is the total number of tokens of all ð datasets, ð and ð is the number of tokens and average quality score of Dð ð ð | 2309.02033#20 | 2309.02033#22 | 2309.02033 | [
"2306.11644"
] |
2309.02033#22 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | ¥ using built- in GPT-3 quality classifier (detailed in Sec. 5.2) respectively. The mixture dataset Dð ð ð ¥ is iteratively refined by carrying out it- erations steps (3)â ¼(5) to get a larger quantity and better quality. â ¡ The HPO results offer a powerful means of visualizing and under- standing data insights as shown in Figure 3, where the importance, # 1W&B Sweeps, https://docs.wandb.ai/guides/sweeps Parameter importance with respect to Global Interactions | }â ¬â __ ri target_metric v Linear Correlation High-order Correlation Q jes 3 poram â i>| mix_data_w3 mix_data_wi. â a a mix_data_w2 allow a deep understanding of of per-sample statistics covers displays histograms and box cluding diverse criteria like word percentage, and paragraph have the flexibility to adjust bespoke visualization and data allow a deep understanding of the data. By default, the summary of per-sample statistics covers 13 dimensions and automatically displays histograms and box plots for each statistical variable, in- cluding diverse criteria like sample perplexity, word count, flagged word percentage, and paragraph length, among others. Users also have the flexibility to adjust the dimensions to observe, with a bespoke visualization and data processing experience. Parameter importance with respect to Global Interactions | }â ¬â __ ri target_metric v Linear Correlation High-order Correlation Q jes 3 poram â i>| mix_data_w3 mix_data_wi. â a a mix_data_w2 4.3 Feedback with Integrated LLM Libraries In the later stages of our pipeline, we utilize robust ecosystems designed for LLM training and evaluation, ensuring seamless in- tegration with widely-used libraries such as Megatron-LM [85], DeepSpeed [78], and HuggingFaceâ s Transformers [101]. With this integration, users can easily train LLMs on datasets produced by Data-Juicer and evaluate their performance to obtain feedback using our pre-built tools and scripts, without getting bogged down in complicated LLM training and evaluation details. # Figure 3: Demonstration of HPO for data recipe. (a) Tracking Specific Data Samples | 2309.02033#21 | 2309.02033#23 | 2309.02033 | [
"2306.11644"
] |
2309.02033#23 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | language_id_score_filter Lang filtered: 107 of 23040 docs (0.46) Notably, our system facilitates the timely assessment of model abilities by incorporating multiple dimensions. The systemâ s capa- bility to swiftly identify potentially ineffective data and training allows us to terminate unwanted LLM data processing promptly. Instead of solely relying on model loss as the basis for evaluating model performance, we support the LLM assessment across various metrics or benchmarks, and track shifts in target scores. Conse- quently, we can determine whether continued training of an LLM on the produced dataset is justified, thereby helping us minimize data processing and LLM training costs. (b) Effect of OP Pipeline (Number of Samples) (c) Data Distribution Diff. # Figure 4: The illustration of interactive visualization of Data-Juicer. More demos are publicly available. Specifically, Data-Juicerâ s evaluator supports SOTA LLM bench- marks such as HELM [59], LM-harness [32] and GPT-API-based evaluation [19], as well as the extension of customized evaluation benchmarks and tasks. For a balanced and straightforward evalua- tion, Data-Juicer supports a leaderboard-style comparison by con- solidating results from different target evaluation scenarios, such as ranking averaging, score-normalized averaging, or other cus- tomized strategies. The leaderboard-style scoring utility enhances the visualization of strengths and weaknesses of models, guiding subsequent iterations of data recipes and LLMsâ refinements. We also make available Reference Models - these are model checkpoints binding with traceable training data in Data-Juicer, popular LLM architectures, training parameters, computation costs, and corre- sponding evaluation results. They facilitate effortless comparison among different training configurations, particularly for further research on diverse, iteratively developed data recipes. correlation and interaction of ð ¤ð for the quality score are estimated and plotted. Besides the quality score demonstrated in this exam- ple, the target metric can be customized to include other trade-off terms composed of intrinsic data measures â such as toxicity, help- fulness, or other scores predicted by auxiliary models â or even performance measures of LLMs, such as training loss or benchmark scores (which we will discuss later in Sec. 4.3). 4.2 Interactive Visualization The ability of interactive visualization is integral to multiple feed- back stages of Data-Juicer. | 2309.02033#22 | 2309.02033#24 | 2309.02033 | [
"2306.11644"
] |
2309.02033#24 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Specifically, as Figure 4.(a) demon- strates, users can visually track the effects of individual OPs in terms of the processed data samples. This is facilitated by an innovative built-in tool, tracer, which records sample changes after apply- ing each operation for Data-Juicer. For example, tracer presents discarded samples for Filters, pre- and post-editing differences for Mappers, and (near-) duplicate sample pairs for Deduplicators. Cou- pling this tracking ability with fruitful built-in sampling and visu- alization tools, Data-Juicer enhances usersâ control over the data processing and boosts their confidence and rationals of the process. Transitioning to the mid-term stage of LLM data processing, Data-Juicer offers a comparative visualization of the data before and after the entire processing from the view of OP pipeline and sta- tistical analysis, as Figures 4.(b) and 4.(c) show. Aided by a built-in tool, analyzer, Data-Juicer provides statistical analysis (counts, means, standard deviations, min/max, quantiles, entropy, etc.) to 4.4 Feedback Loop Showcase The general feedback loop has been discussed before in Figure 2. We now further expound on this by presenting a concrete development example. Here, we intertwine several previously mentioned tools to demonstrate the Data-in-the-LLMdev-Loop process, which results in improved LLM data. As illustrated in Figure 5, we begin with a raw dataset and aim to refine it for better pre-training or fine-tuning of an LLM. The entire process flows as per the following steps: (1) Analyze the original dataset. We can opt to utilize an existing data recipe (a specific configuration file) or craft a new one based on prior understandings of data processing needs. Our built-in Analyzer and Visualizer facilitate this process by computing Improved Quality and Quantity == Original Dataset Process data with refined recipe (reusing checkpoints & caches) © train/Tune LLMs Refined Dataset Original Recipe (Config File): Refined Recipe: | 2309.02033#23 | 2309.02033#25 | 2309.02033 | [
"2306.11644"
] |
2309.02033#25 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | SSS S-S © me â Analyze o rord_repetition filter: = word_repetition filter: @ Real-Time & Auto Evaluation rep len: 10 â )| original mincrat Analyze 6) Dataset nax_ratio: 0. refined (via Analyzer ~ special. characters filter: dataset Collate & Visualizer) min ratio: 0.0 min ratio: 0. ¢ max ratio: 8.25 mmax_patio: 0.25 z compare list Refine parameters of data recipe (manally or via HPO) Original Data Probe Improved Diversity and Nid B ones Data Leardboard with Refined Data Probe Reference Models Figure 5: The demonstration of data processing feedback of Data-Juicer, which helps to generate better data recipes for LLMs. more than a dozen measures such as linguistic diversity, textual statistics, and others to generate a data probe. The two pie plots within Figure 5 indicate the top 20 most common root verbs (inner circle) and their top 4 direct noun objects (outer circle) for the data in field â text.instructionsâ . | 2309.02033#24 | 2309.02033#26 | 2309.02033 | [
"2306.11644"
] |
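The decoupling of statistics computation from the filtering decision (Boolean-valued Filters) can be illustrated with a small sketch. The class and method names below follow the paper's description of the interface (compute_stats and process); they are not a verified copy of the actual Data-Juicer base classes or registration mechanism.

```python
class WordNumFilter:
    """Hypothetical Filter: keep samples whose word count lies in a given range."""

    def __init__(self, min_num: int = 10, max_num: int = 10000, text_key: str = "text"):
        self.min_num = min_num
        self.max_num = max_num
        self.text_key = text_key

    def compute_stats(self, sample: dict) -> dict:
        # Only records the statistic; never drops anything, so analyzers can
        # inspect the full dataset before any filtering happens.
        stats = sample.setdefault("stats", {})
        if "num_words" not in stats:
            stats["num_words"] = len(sample[self.text_key].split())
        return sample

    def process(self, sample: dict) -> bool:
        # Boolean output decides whether the sample is kept.
        return self.min_num <= sample["stats"]["num_words"] <= self.max_num

# Illustrative use with Huggingface-datasets style map/filter interfaces:
# op = WordNumFilter(min_num=20, max_num=5000)
# dataset = dataset.map(op.compute_stats)
# dataset = dataset.filter(op.process)
```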
2309.02033#26 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | (2) Refine parameters of the original recipe. Based on the data probe, users figure out the weaknesses of the original dataset, such as low diversity in expression manners, and long-tail statistics of word counts. Then we refine the parameters in the recipe by adding/removing some OPs or tightening/relaxing filter ranges. During refining, we could find out the effect of each OP easily based on the interactive visualization tool mentioned in Sec. 4.2. auto-registered as a reference model, or additional refining guidance from the LLM perspective to further enhance data recipes. 5 BOOSTING USABILITY WITH BUILT-INS In response to the challenge of varied user customized preferences and technical expertise (Challenge 3 in Sec. 1), we offer an easy- to-use configuration paradigm for data recipes, ready-to-use data recipe templates, and extensive tools, as detailed below. (3) Process the original dataset with the refined recipe. Then we process the original dataset with the refined recipe using Data-Juicer and get a refined dataset and several saved check- points for further adjustments. This step can be facilitated with the help of our cache and checkpoint mechanisms. (4) Analyze the refined dataset. Like step (1), we analyze the refined dataset again to obtain a new data probe. Based on the statis- tics and visualization results, we assess the degree of improvement in the data quality. If the refined data fails to meet our expectations, we revert to step 2 to manually adjust the data recipe or employ our HPO tool for automatic refinement (refer Sec. 4.1). (5) Get LLMs with the refined dataset. Then we can train or fine-tune LLMs with the refined dataset and training frameworks integrated into Data-Juicer (Sec. 4.3). During the training or fine- tuning process, our auto-evaluation tools offer timely, multi-view assessments of LLMs. These tools inspect numerous metrics across multiple evaluation datasets. This feature provides us the advantage of halting the process prematurely if the refined data weakens LLM performance, thereby preventing unnecessary costs. (6) Collate results and compare with reference models. Finally, Data-Juicer automatically collates the evaluation results and compares them with reference models in the data leaderboard, providing a clear representation of the effects of data processing alone. Consequently, we derive either a superior LLM, which can be | 2309.02033#25 | 2309.02033#27 | 2309.02033 | [
"2306.11644"
] |
2309.02033#27 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | 5.1 Configuring Your Data Recipe Notably, we make the end-to-end pipeline of data processing con- figurable in Data-Juicer, including specified processing environ- ment parameters, OP lists, tools used, and so on. This principle of all-in-one configuration ensures reproducibility and traceability, and simplifies changing specifications in data processing, thereby facilitating the formation of data recipes for further refinement and reuse, and enabling the quantitative exploration and automatic optimization of data processing (Sec. 4.1). Specifically, built upon Jsonargparse [46], we provide unified, flexible, easy-to-use and powerful configuration capabilities. It is engineered to automatically register configuration items for OPs and tools, and accept varying sources of configurations such as com- mand line entries, yaml and jsonnet files, environment variables, default hard-coded values, and a mixture of those for convenient incremental modifications. For example, users can easily build up their own config files by two recommended methodologiesâ | 2309.02033#26 | 2309.02033#28 | 2309.02033 | [
"2306.11644"
] |
2309.02033#28 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | â subtractionâ or â additionâ . The â subtractionâ approach utilizes a pre-set configuration file contain- ing all available OPs, tools, and their default parameters. Users can simply remove or re-order these OPs and adjust these parame- ters per their requirements. Conversely, the â additionâ approach lets users build their configuration files from scratch, leveraging our extensive examples of pre-built data processing recipes for totally more than 20 high-quality and diverse data recipes for pre- training, fine-tuning, English, Chinese, etc. More quantitative analysis on certain recipes are in our experiments (Sec. 7.1). 5.2 Dedicated Pluggable Tools To further enhance usability, facilitate system customization and augment usersâ data handling capabilities, Data-Juicer includes an extensible collection of powerful dedicated tools that can be con- veniently plugged into different stages of the LLM data processing. Quality Classifier. As an illustrative example, we describe our text quality classifier for culling high-quality text from heteroge- neous data sources like CommonCrawl. This tool is a reproduced model based on the closed-source GPT-3 quality scorer [9]. More- over, we have expanded its applicability to Chinese text and various code types. Encapsulated as a callable pipeline, this tool provides users with the freedom to adapt it to other various scenarios. The functionality of the classifier is backed by PySparkâ s standard Tokenizer or Sentencepiece model [50], along with HashingTF as the feature extractor. It then applies a binary logistic regression classifier to gauge the quality of a text. We provide more empirical verification of them in Sec. 7.2.3. Enhanced Sampler for LLM data. In Data-Juicer, we have designed several advanced data sampling utilities specialized for large-scale data chunk handling in LLMs. Our solutions effectively streamline representative extraction, optimize processing time and resources, and meet the distinctive needs of LLM developers. Our stratified sampling technique is noteworthy in this LLM data context. It capitalizes on information within the metadata or statistical fields, thus accommodating varied selection metrics in crafting an effective data sample. | 2309.02033#27 | 2309.02033#29 | 2309.02033 | [
"2306.11644"
] |
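As a rough illustration of the classifier design sketched in the record above (tokenization, HashingTF features, binary logistic regression in PySpark), the following is a hedged, self-contained sketch; the toy training rows and column names are ours, not the released classifier.

```python
# Sketch of a GPT-3-style quality classifier pipeline in PySpark:
# tokenize text, hash tokens into a fixed-size feature vector, and
# score quality with binary logistic regression.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("quality-classifier-sketch").getOrCreate()
train_df = spark.createDataFrame(
    [("a well formed, informative paragraph about data processing systems", 1.0),
     ("xx zz spam spam spam click here now", 0.0)],
    ["text", "label"],
)

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),       # a SentencePiece tokenizer could replace this for Chinese or code
    HashingTF(inputCol="words", outputCol="features"),   # hashed term-frequency features
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(train_df)
scored = model.transform(train_df).select("text", "probability", "prediction")
scored.show(truncate=60)
```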
2309.02033#29 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | To ensure a comprehensive yet flexible representation of the data corpus, we consider various heterogeneous criteria such as document length, token count, the frequency of boolean predicates for post-conditional checks, and even linguistic diversity formulated via occurrences of verb-noun pairs (as shown in the pie plots in Figure 2). These dynamic criteria are tailored to distinct analytic needs and promote efficient data processing, seamlessly integrating with downstream OPs and tools. Full Toolkit. As for other tools, readers can refer to Sec. 4 for an examination of multiple previously discussed tools, including the tracer and analyzer (Sec. 4.2), and the evaluator and reference models (Sec. 4.3). We diligently maintain and evolve the toolkit in Data-Juicer, and make the full set publicly accessible. 5.3 User-Friendly Experiences in Data-Juicer Data-Juicer is designed not just for functionality but also for adaptability, catering to an extensive user base with diverse expertise and skill sets. While abstracting the intricate system internals, we provide user-friendly interfaces and extensive customizable components. Accordingly, users can embark on zero-code data processing, engage in low-code customization, or delve into in-depth extensions for complex requirements. | 2309.02033#28 | 2309.02033#30 | 2309.02033 | [
"2306.11644"
] |
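The stratified sampling idea above can be made concrete with a small sketch that buckets samples by a per-sample statistic and draws from each bucket; the bucket edges, field names, and sampling rate are assumptions for illustration only.

```python
# Hypothetical stratified sampling over per-sample statistics (e.g., token-count buckets).
import random
from collections import defaultdict

def stratified_sample(samples, key=lambda s: s["stats"]["num_tokens"],
                      edges=(0, 128, 512, 2048, float("inf")), rate=0.1, seed=42):
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s in samples:
        v = key(s)
        b = next(i for i in range(len(edges) - 1) if edges[i] <= v < edges[i + 1])
        buckets[b].append(s)
    picked = []
    for group in buckets.values():
        k = max(1, int(len(group) * rate))   # keep every stratum represented
        picked.extend(rng.sample(group, k))
    return picked

corpus = [{"text": "...", "stats": {"num_tokens": random.randint(1, 4000)}} for _ in range(1000)]
print(len(stratified_sample(corpus)))
```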
2309.02033#30 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | • Zero-Code Processing: For novice users, Data-Juicer supplies a series of ready-to-use data recipes and plug-in tools for immediate use. This requires no knowledge of advanced system architectures or OPs, as discussed in Sec. 5.1 and Sec. 5.2. • Low-Code Customization: Intermediate users enjoy the flexibility to alter configurations, data, and external resources to suit their specific needs. They can readily reuse, combine, and edit built-in data configurations; customize quality classifiers and tokenizers; refine data based on our pre-developed recipes; or provide fresh links to auxiliary models or vocabularies from our unified, routinely updated public cloud drive. | 2309.02033#29 | 2309.02033#31 | 2309.02033 | [
"2306.11644"
] |
2309.02033#31 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | • Advanced Extension: Experienced users can easily introduce new OPs by deriving from base classes and implementing their specific "process()" and "compute_stats()" functions, as demonstrated in the code Listing 1. This grants users an end-to-end view of the processing of a single sample, while Data-Juicer handles the nitty-gritty of configuration registration and efficiency optimization. Additionally, Data-Juicer's decoupled design facilitates the smooth incorporation of new tools for users at all stages of LLM data processing, ranging from novel visualization dimensions and evaluation datasets to pre- or post-processing scripts. To enhance the ease of adoption and use of Data-Juicer, apart from the numerous pre-built data recipes (see Sec. 5), we also provide a series of interactive demos, implemented in Streamlit, for varied profiles and scenarios. This hands-on learning approach is designed to enable users of varying skill levels to quickly familiarize themselves with and effectively use Data-Juicer. 6 COMPREHENSIVE SYSTEM OPTIMIZATION To handle large-scale data (Challenge 4 in Sec. 1), we employ a series of optimizations in Data-Juicer from various aspects. Optimized Computation: Context Management, Operator (OP) Fusion and Reordering. To elevate computational efficiency in LLM data processing, we provide advanced context management, operator fusion, and operator reordering techniques. The context manager meticulously handles shared intermediate variables, such as segmented words, split lines, and others derived from the original textual corpus, across different operators. It allows seamless reuse of these context variables across multiple operators, thereby avoiding computationally expensive re-evaluations. Based on the context manager, the proposed operator fusion method is another new contribution to the field. We identify fusible operators that either share the same contexts or computation sub-procedures: OP groups are detected first, where successive OPs in the same group must be commutative with each other; the identified fusible operators in each group are then amalgamated into a single fused OP, enabling them to be executed faster with a larger localized perspective. The contexts of each sample are cleaned up after each fused OP, hence little extra memory is required for context management and operator fusion. | 2309.02033#30 | 2309.02033#32 | 2309.02033 | [
"2306.11644"
] |
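Listing 1 is not included in this excerpt; the following is a hypothetical approximation of the derive-from-a-base-class pattern described above, with a stand-in Filter base class rather than Data-Juicer's real API.

```python
# Toy illustration of a Filter-style OP: compute per-sample stats once,
# then decide whether to keep the sample.
class Filter:
    def compute_stats(self, sample):
        raise NotImplementedError

    def process(self, sample):
        raise NotImplementedError

class TextLengthFilter(Filter):
    def __init__(self, min_len=10, max_len=10_000):
        self.min_len, self.max_len = min_len, max_len

    def compute_stats(self, sample):
        # record the statistic so later OPs (or reruns) can reuse it
        sample.setdefault("stats", {})["text_len"] = len(sample["text"])
        return sample

    def process(self, sample):
        # return True to keep the sample, False to drop it
        return self.min_len <= sample["stats"]["text_len"] <= self.max_len

op = TextLengthFilter(min_len=5)
s = op.compute_stats({"text": "an example document"})
print(op.process(s))  # True
```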
2309.02033#32 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Because a single fused OP takes longer to run than each of its constituent OPs, we further design an operator reordering strategy to optimize the execution sequence of the OP list after fusion. For example, based on the commutativity of Filters, we delay the running of time-consuming OPs (such as fused Filters) and prioritize other, less time-consuming OPs. As a result, the time-consuming OPs only need to handle fewer samples, because the preceding operators have already filtered some out, enhancing overall computational efficiency. | 2309.02033#31 | 2309.02033#33 | 2309.02033 | [
"2306.11644"
] |
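A hedged sketch of the reordering heuristic: within a run of commutative Filters, cheap filters are moved ahead of expensive (e.g., fused) ones so that the expensive OPs see fewer samples. The OP representation and cost model below are illustrative assumptions.

```python
# Reorder commutative Filters by cost; Mappers keep their original positions.
def reorder_ops(ops):
    """ops: list of dicts like {"name": ..., "type": "filter"/"mapper", "cost": seconds_per_sample}."""
    out, filter_run = [], []
    for op in ops:
        if op["type"] == "filter":
            filter_run.append(op)          # collect a maximal run of commutative filters
        else:
            out.extend(sorted(filter_run, key=lambda o: o["cost"]))
            filter_run = []
            out.append(op)
    out.extend(sorted(filter_run, key=lambda o: o["cost"]))
    return out

pipeline = [
    {"name": "clean_html", "type": "mapper", "cost": 0.1},
    {"name": "fused_filter", "type": "filter", "cost": 5.0},
    {"name": "text_len_filter", "type": "filter", "cost": 0.01},
]
print([op["name"] for op in reorder_ops(pipeline)])
# ['clean_html', 'text_len_filter', 'fused_filter']
```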
2309.02033#33 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | [Figure 6: The OP fusion procedure for an OP list.] The whole procedure of OP fusion is summarized in Figure 6. These amalgamation strategies serve dual purposes. Firstly, they minimize redundant computation, eliminating the need to repeat shared computations. Secondly, they mitigate the overhead of initializing multiple processes by reducing the total count of processing OPs, thus maintaining expeditious data processing routines. Optimized Space Utilization: Caching OPs and Compression. Recognizing the inadequacies of the original cache management protocol in the HuggingFace Datasets library, especially pertaining to the handling of non-serializable third-party models and functions in certain OPs, we design a dedicated hashing method that bypasses the serialization of those non-serializable objects, which ensures successful caching of each OP and permits Data-Juicer to leverage optimal cache management. Furthermore, we incorporate the ability for users to activate advanced compression technologies, such as Zstandard (zstd) [23] and LZ4 [64], in Data-Juicer. The system automatically compresses cache files after each OP and decompresses them back into normal cache files when an OP is rerun with the same configuration. Compared with the processing time, compression and decompression time is relatively negligible due to the high efficiency of these technologies. This feature substantially reduces the volume of cache data storage, facilitating the processing of larger datasets without compromising speed or stability. Optimized Scalability: Distributed Data Processing. The volume of LLM training data can be extremely large, making it difficult to process on a single machine. Data-Juicer meshes with distributed processing frameworks such as Ray [66], Apache Beam [5] and Apache Flink [12], and offers the ability to seamlessly translate a data processing pipeline running on a single node into a multi-node cluster. In this way, cluster computing resources can be utilized to accelerate data processing and generation. | 2309.02033#32 | 2309.02033#34 | 2309.02033 | [
"2306.11644"
] |
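The compress-after-each-OP behaviour can be approximated with the zstandard package as below; the file-naming convention is an assumption, and this is a sketch rather than Data-Juicer's actual cache manager.

```python
# Compress a cache file after an OP finishes, and decompress it before a rerun.
import os
import zstandard as zstd

def compress_cache(path):
    with open(path, "rb") as src, open(path + ".zst", "wb") as dst:
        zstd.ZstdCompressor(level=3).copy_stream(src, dst)
    os.remove(path)                       # keep only the compressed cache on disk

def decompress_cache(path_zst):
    path = path_zst[:-len(".zst")]
    with open(path_zst, "rb") as src, open(path, "wb") as dst:
        zstd.ZstdDecompressor().copy_stream(src, dst)
    return path
```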
2309.02033#34 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Specifically, we adapt the underlying interfaces of HuggingFace Datasets to those of Ray Datasets, such that all OPs of Data-Juicer, even when written as single-machine Python functions, can be executed in a distributed mode with the help of automatic data partitioning by Ray. An alternative approach we support is to replace the default Ray runner of Data-Juicer with other distributed processing back-ends such as Flink, via pre-translation of Data-Juicer's processing pipelines into Beam-compatible ones. As a result, almost all OPs within Data-Juicer (Mapper, Filter, and Deduplicator) can be accelerated in a multi-node cluster, effectively alleviating the bottlenecks on a single node (even with process-based parallelism) caused by memory capacity and IO throughput. More empirical results can be found in Sec. 7.2.4. In a nutshell, all of these optimizations enhance Data-Juicer's scalability from various views, to handle the vast amount of data involved in LLMs, ensuring robust and efficient processing while minimizing resource requirements. 7 EVALUATION OF DATA-JUICER 7.1 Making Better Data Recipes The value of an effective LLM data processing system is reflected not only in its comprehensive and flexible operability but also in its capacity to produce high-quality data that LLMs can more readily " | 2309.02033#33 | 2309.02033#35 | 2309.02033 | [
"2306.11644"
] |
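The single-node-to-cluster translation described above follows the general Ray Data pattern of mapping a plain per-sample Python function over an automatically partitioned dataset; the sketch below shows that pattern under assumed file paths, not Data-Juicer's actual adapter code.

```python
# A per-sample OP written as a plain Python function, executed in a
# distributed fashion via Ray Data's automatic partitioning.
import ray

def clean_text(row):                     # one sample in, one sample out
    row["text"] = " ".join(row["text"].split())
    return row

ray.init()                               # or ray.init(address="auto") on a cluster
ds = ray.data.read_json("data/corpus.jsonl")   # Ray splits the input into blocks
ds = ds.map(clean_text)                  # the OP runs in parallel across workers
ds.write_json("out/")
```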
2309.02033#35 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | digest". Data-Juicer provides specialized features for exploring and making data recipes tailored to LLMs, and we have developed numerous ready-to-use data recipes using Data-Juicer. In this section, we evaluate the quality of data recipes generated by Data-Juicer for both LLM pre-training and fine-tuning. 7.1.1 Refined Pre-training Data Recipes. The pre-training data we produced consists solely of publicly available sources, exemplifying the core principles of transparency and reproducibility. Specifically, we choose to improve two widely-used, high-quality datasets for LLMs, TogetherAI's RedPajama [24] and EleutherAI's Pile [31], which were curated from 15 highly diverse text sources and subjected to meticulous pre-processing and cleaning to ensure their quality. With the help of Data-Juicer, we further refine them via data analysis, merging, and quality enhancement, employing dozens of OPs with varied configurations. For detailed statistics, processing steps, and refined data recipes, please refer to Appendix B.2. To verify the quality of the data recipes derived by Data-Juicer, we use the original RedPajama and Pile, and our refined datasets, to pre-train LLMs with the mainstream LLaMA architecture and assess the models' performance across 16 core HELM tasks. We keep the training configurations the same while only modifying the training data. Detailed hyper-parameters are in Appendix B.3.1. The average scores on the 16 tasks are visualized in Figure 7, where we evaluated checkpoints throughout the pre-training process at an increasing number of tokens: 50B, 100B, and 150B. Notably, through fair comparisons with equivalent training tokens, LLMs pre-trained on Data-Juicer recipes consistently outperformed those using only RedPajama or its union with the Pile, reinforcing the usefulness and effectiveness of Data-Juicer. Moreover, we compare Data-Juicer models with several SOTA baselines and summarize the results in Table 2. With only half the data volume (150B tokens), LLaMA-1.3B pre-trained on the Data-Juicer recipe outperformed Pythia-1.4B [6] (300B tokens), and even beat the highly competitive Falcon-1.3B [71] trained on 350B tokens. Notably, we further labeled 17 subsets from Alpaca-CoT (a collection of 39 public fine-tuning datasets) with the "Instruct Fine-Tuning (IFT)" tag and performed data mixing and processing using Data-Juicer. Following the usual practice [105], we incorporate these large-volume IFT data into the pre-training phase and execute continuous | 2309.02033#34 | 2309.02033#36 | 2309.02033 | [
"2306.11644"
] |
2309.02033#36 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | [Figure 7: Evaluation results of reference models trained with different datasets but the same pre-training procedures. Data-Juicer's data recipe gains consistent improvements over baselines.] training upon the checkpoint of Data-Juicer (RedPajama+Pile)-150B. As reflected in the last two rows of Table 2, Data-Juicer gains a further 4.9% relative improvement over the original Alpaca-CoT-IFT while utilizing only ~30% of the data volume. Table 2: The average score of the pre-trained LLMs on the 16 HELM core tasks. Individual task results and data recipes are detailed in Appendix B.4. "IFT" denotes the datasets tagged with "Instruct Fine-Tuning" in our context. (Model, Training Data, #Tokens, Score): Falcon-1.3B [41], RefinedWeb, 350B, 33.97; Pythia-1.4B [29], Pile, 300B, 33.96; LLaMA-1.3B, Data-Juicer (RedPajama+Pile), 150B, 34.21; LLaMA-1.3B, + Alpaca-CoT-IFT, 150B + 15B, 35.04; LLaMA-1.3B, + Our Refined IFT, 150B + 4.7B, 36.76. Taken together, these findings underscore the potential of the Data-Juicer system to generate high-quality data and verify the excellence of Data-Juicer recipes in terms of enhancing LLM performance while reducing LLM training costs. 7.1.2 Refined Fine-tuning Data Recipes. For the Alpaca-CoT collection, besides the "IFT" tag as validated in Table 2, we also labeled datasets within it with "Chat Fine-Tuning (CFT)" for enhanced dialog ability and aligned human values. | 2309.02033#35 | 2309.02033#37 | 2309.02033 | [
"2306.11644"
] |
2309.02033#37 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | To examine their quality, we first use the CFT and EN tags to filter out several competitive subsets, and then generate two new equal-size datasets, by random sampling and by our designed recipe, respectively. Then we conduct fine-tuning on the generated datasets based on the open-source mainstream architecture, English LLaMA-7B [34]. Similarly, we replace the tag "EN" with "ZH", and use a SOTA LLaMA-2-7B variant [42] for the Chinese scenario. | 2309.02033#36 | 2309.02033#38 | 2309.02033 | [
"2306.11644"
] |
2309.02033#38 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Statistics of these datasets and training hyper-parameters are in Appendix B.3.2. For a thorough and comparative performance evaluation, we used the GPT-4 API for pairwise scoring and the tallying of wins and ties. Table 3: Results of pair-wise model comparisons using GPT-4 scoring. "CFT", "EN" and "ZH" indicate meta-tags for Chat Fine-Tuning, English, and Chinese text respectively. LLaMA-7B [34]: Alpaca (52k samples) 16 wins vs. Data-Juicer (40k) 44 wins, with 100 ties; Random (CFT, EN) (40k) 19 wins vs. Data-Juicer (40k) 36 wins, with 105 ties. LLaMA2-7B (Chinese, FlagAlpha [42]): Belle (543k) 28 wins vs. Data-Juicer (52k) 33 wins, with 99 ties; Random (CFT, ZH) (52k) 19 wins vs. Data-Juicer (52k) 45 wins, with 96 ties. | 2309.02033#37 | 2309.02033#39 | 2309.02033 | [
"2306.11644"
] |
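Pairwise win/tie scoring with the GPT-4 API, as used above, can be sketched as follows; the judging prompt, parsing rule, and tie handling are assumptions rather than the paper's exact evaluation protocol.

```python
# Hedged illustration of GPT-4-based pairwise judging: for each prompt the judge
# sees two candidate responses and returns "A", "B", or "TIE".
from collections import Counter
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def judge(question, answer_a, answer_b, model="gpt-4"):
    prompt = (f"Question:\n{question}\n\nResponse A:\n{answer_a}\n\n"
              f"Response B:\n{answer_b}\n\nWhich response is better? Reply with A, B, or TIE.")
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}], temperature=0
    ).choices[0].message.content.strip().upper()
    return reply if reply in {"A", "B", "TIE"} else "TIE"

def tally(pairs):
    """pairs: iterable of (question, answer_a, answer_b) tuples."""
    return Counter(judge(*p) for p in pairs)
```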
2309.02033#39 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The results are consolidated in Table 3, from which we can see that LLMs utilizing Data-Juicer recipes consistently demonstrate high validity. Firstly, compared to LLMs trained on the competitive fine-tuning open datasets Alpaca [92] and Belle [45], LLMs trained on Data-Juicer data gain higher win rates (up to 17.5% for the English case) while using less data (up to a 90.4% reduction for the Chinese case). Secondly, compared to LLMs trained on datasets with a trivial processing strategy (mixture by random sampling), LLMs trained on Data-Juicer data still gain higher win rates (up to 14.4%), which again attests to the effectiveness of our enhanced sampling strategy and the quality of Data-Juicer recipes for LLMs. 7.2 Processing Data Efficiently and Effectively 7.2.1 End-to-End System Performance. To evaluate the processing performance of Data-Juicer, we compare it with two SOTA baselines: TogetherAI's RedPajama [24] and AllenAI's Dolma [86]. | 2309.02033#38 | 2309.02033#40 | 2309.02033 | [
"2306.11644"
] |
2309.02033#40 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | A more detailed introduction to and comparison with these baselines can be found in Appendix B.3.4. For a fair comparison, we use their official code repositories and run Data-Juicer on data recipes with the same OPs to process the Books, arXiv, and C4 datasets, which vary in data sizes and distributions and involve diverse processing OPs. We conduct multiple rounds of experiments with different numbers of processes (np=[32, 64, 128]) and monitor several core metrics, including processing time and average memory usage. The monitored time is the wall-clock time of the whole processing pipeline. The average memory usage is monitored every second and aggregated across all relevant processes. For more experimental details, please refer to Appendix B.3.3. The experimental results are summarized in Figure 8. Notably, for all datasets and various numbers of processes, Data-Juicer requires on average 50.6% less processing time and 55.1% less memory. In particular, it saves at most 88.7% of processing time for the arXiv dataset compared with the baseline. Also, Data-Juicer takes only 22.9% of the baseline's memory to process the Books dataset, mainly because the baseline's processing procedure loads the whole dataset at once. [Figure 8: Comparison of stand-alone performance in various data sizes and processing configurations.] Overall, Data-Juicer effectively alleviates the bottleneck caused by the IO of cache files, and achieves better end-to-end time-space efficiency than the baselines. | 2309.02033#39 | 2309.02033#41 | 2309.02033 | [
"2306.11644"
] |
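A rough sketch of the measurement harness implied above (wall-clock time plus per-second memory sampling aggregated over all worker processes), assuming the psutil package; it is illustrative, not the exact benchmarking code.

```python
# Sample resident memory of a process tree every second and record wall-clock time.
import time
import psutil

def monitor(pid, interval=1.0):
    proc = psutil.Process(pid)
    peak, samples, start = 0, [], time.time()
    while proc.is_running() and proc.status() != psutil.STATUS_ZOMBIE:
        rss = 0
        for p in [proc] + proc.children(recursive=True):   # aggregate across workers
            try:
                rss += p.memory_info().rss
            except psutil.NoSuchProcess:
                pass
        samples.append(rss)
        peak = max(peak, rss)
        time.sleep(interval)
    wall = time.time() - start
    avg = sum(samples) / max(len(samples), 1)
    return {"wall_clock_s": wall, "avg_rss_gib": avg / 2**30, "peak_rss_gib": peak / 2**30}
```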
2309.02033#41 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | 7.2.2 Effect of Context Management, OP Fusion, and Reordering. As introduced in Sec. 6, Data-Juicer employs dedicated optimizations to minimize redundant computation and save processing time. To examine the optimization effect, we prepared three test datasets of varied sizes and sample counts. Each dataset goes through the same processing recipe, which includes 14 OPs (5 Mappers, 8 Filters, and 1 Deduplicator), with 5 of these OPs being fusible. We conduct comparison experiments with 4 processes, except for the largest dataset, where we utilize 50 processes to assess whether these techniques remain effective at larger scales. | 2309.02033#40 | 2309.02033#42 | 2309.02033 | [
"2306.11644"
] |
2309.02033#42 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | [Figure 9: Time comparison before and after OP fusion.] The results are shown in Figure 9, where both the normalized and actual time consumption for each experimental setup are indicated. They show that our optimization strategy saves up to 24.91% of the total time for the entire process and at most 42.04% of the time for the fusible OPs. In addition, the findings show that the optimization performs efficiently regardless of variations in dataset sizes or the number of processes utilized. 7.2.3 Effect of Quality Classifiers. As described in Section 5.2, Data-Juicer provides built-in quality classifiers for LLM data processing, and here we present several empirical results regarding their performance. Specifically, we follow the training procedure of the proprietary quality classifier used in GPT-3 [9] and extend its training pipeline to include Chinese text. On the collected evaluation data, our reimplementation of the GPT-3 classifier and its Chinese adaptation achieved F1 scores of 97.47% and 98.64%, respectively. Further training and evaluation details are provided in Appendix B.1. Table 4: Comparison of keeping ratios on CommonCrawl. (Quality Classifier, Keeping Ratio @ label, Keeping Ratio @ Pareto): Original GPT-3, -, 1.30%; Our GPT-3, 3.22%, 1.41%; Our Chinese, 1.81%, -. Furthermore, we assess the filtering effectiveness of these classifiers by comparing their keeping ratios on CommonCrawl. The results are summarized in Table 4, where we employ the two data keeping methods used in GPT-3: (1) label: | 2309.02033#41 | 2309.02033#43 | 2309.02033 | [
"2306.11644"
] |
2309.02033#43 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | doc_score > 0.5; and (2) Pareto [9]: doc_score > 1 - np.random.pareto(α), with α = 9. The keeping ratios of our re-implemented GPT-3 quality classifiers are generally in line with the original one, and our Chinese extended version maintains a keeping ratio comparable to that of the English version. 7.2.4 System Scalability. To verify the enhanced scalability of our system (as detailed in Sec. 6), we carry out a series of experiments measuring data processing time across multiple servers. Specifically, we adopt the StackExchange and arXiv datasets from RedPajama. The total sizes of the StackExchange and arXiv datasets are 65GB and 140GB in jsonl format, respectively. We compare the performance of Data-Juicer on Ray, Data-Juicer on Beam (using the Flink backend), and the original Data-Juicer in these tests. More details about the implementation and experimental platforms are in Appendix B.3.5. [Figure 10: Processing time with varying numbers of nodes. Data-Juicer accelerates processing in distributed mode.] The experimental results are illustrated in Figure 10. Notably, thanks to various optimizations, our original system outperforms both Ray and Beam in the single-server scenario. Moreover, as the number of nodes increases, the processing time of our system on Ray decreases proportionally (up to 87.4% and 84.6% time reduction on StackExchange and arXiv respectively), demonstrating its effective scalability across multiple servers. Nonetheless, the processing time of Data-Juicer on Beam remains almost unchanged as the number of nodes increases. | 2309.02033#42 | 2309.02033#44 | 2309.02033 | [
"2306.11644"
] |
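The two keeping rules quoted above can be checked with a tiny numpy example; doc_score here stands for the classifier's quality score (the variable name is a reconstruction of the garbled symbols in the source), and the uniform score distribution is purely synthetic.

```python
# Worked example of the two GPT-3-style keeping rules.
import numpy as np

rng = np.random.default_rng(0)
doc_scores = rng.uniform(0.0, 1.0, size=1_000_000)   # synthetic quality scores

keep_label = doc_scores > 0.5                                        # rule (1): hard threshold
keep_pareto = doc_scores > 1 - rng.pareto(9, size=doc_scores.shape)  # rule (2): noisy Pareto threshold

print(keep_label.mean(), keep_pareto.mean())
```

The Pareto rule occasionally keeps lower-scoring documents, which preserves some diversity compared with the hard threshold.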
2309.02033#44 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Upon further investigation of the processing workflow, we found that the limited scalability of Data-Juicer on Beam is primarily constrained by Beam's data loading component, which leads to a dominant file-loading time ratio and would require substantial development changes for adaptation and further performance optimization. 7.3 Empowering Real-world Products Data-Juicer has been adopted by several real-world LLM-based products, playing a crucial role in data understanding and processing. It evolves continually through the integration of feedback from real-world demands. A notable testament to its utility is its contribution to the development of several industrial LLMs from Alibaba Cloud's Tongyi suite [21], such as Dianjin, which is used for financial analysis; Zhiwen, a reading assistance tool; and Xingchen, which specializes in AI character customization. Moreover, the data processing capabilities of Data-Juicer have been incorporated into Alibaba Cloud's Platform for AI (PAI) [22] to support more real-world applications. Our system's fine-grained OP abstraction, coupled with its extensive tools for LLM data processing, empowers users to easily explore and refine data recipes tailored to the distinct textual attributes of diverse use cases. For example, within the financial sector, it is crucial to accommodate data that includes numerous digits and standardized terminology. In the realm of reading assistance, the focus shifts to data characterized by extended text lengths and coherent structures. Conversely, character customization demands data rich in dialogue and varied enough to support personalized services. Data-Juicer adeptly meets these varied demands by facilitating the combination of distinct OPs, hyper-parameters, and tools that adapt to the unique needs of each real-world application. 8 CONCLUSIONS To conclude, the introduction of Data-Juicer reflects a new step forward in the field of data-centric LLM development. By offering a user-friendly, versatile, and efficient solution, Data-Juicer effectively addresses the existing limitations of open-source tools for LLM data processing, which lean towards data reproducibility at the expense of adaptability and usability. The decoupling of traditionally linked components fosters greater abstraction and modularity, and the organic arrangement of over 50 built-in operators, dedicated tools, and abundant data recipes serves diverse needs for LLM pre-training and fine-tuning. | 2309.02033#43 | 2309.02033#45 | 2309.02033 | [
"2306.11644"
] |
2309.02033#45 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Beyond supporting auto-evaluation, Data-Juicer is carefully optimized and seamlessly integrated with both ecosystems for LLM training and evaluation, as well as distributed computing. Empirical validation bears witness to substantial improvements in LLMs' performance using Data-Juicer's data recipes, and shows advances in system efficiency and scalability. As such, Data-Juicer stands as a compelling addition to the toolkit for LLM data processing, which we hope can shed light on broader research in the field of data-centric LLM development. REFERENCES [1] Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: An open large language model with state-of-the-art performance. (2023). [2] Apache Arrow. 2023. https://arrow.apache.org/ [3] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Benjamin Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. 2021. A General Language Assistant as a Laboratory for Alignment. CoRR abs/2112.00861 (2021). [4] Stephen H. Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M. | 2309.02033#44 | 2309.02033#46 | 2309.02033 | [
"2306.11644"
] |
2309.02033#46 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Saiful Bari, Thibault Févry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-David, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Alan Fries, Maged Saeed AlShaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir R. Radev, Mike Tian-Jian Jiang, and Alexander M. | 2309.02033#45 | 2309.02033#47 | 2309.02033 | [
"2306.11644"
] |
2309.02033#47 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Rush. 2022. PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts. In ACL (demo). 93–104. [5] Apache Beam. 2023. https://beam.apache.org/ [6] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. | 2309.02033#46 | 2309.02033#48 | 2309.02033 | [
"2306.11644"
] |
2309.02033#48 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling. In ICML, Vol. 202. 2397–2430. [7] Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: | 2309.02033#47 | 2309.02033#49 | 2309.02033 | [
"2306.11644"
] |
2309.02033#49 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | An Open-Source Autoregressive Language Model. CoRR abs/2204.06745 (2022). [8] Andrei Z. Broder, Moses Charikar, Alan M. Frieze, and Michael Mitzenmacher. 2000. Min-Wise Independent Permutations. J. Comput. System Sci. 60, 3 (2000), 630–659. [9] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. | 2309.02033#48 | 2309.02033#50 | 2309.02033 | [
"2306.11644"
] |
2309.02033#50 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Language Models are Few-Shot Learners. In NeurIPS. [10] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. 2023. Sparks of Artificial General Intelligence: Early experiments with GPT-4. CoRR abs/2303.12712 (2023). [11] Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. 2023. Large Language Models as Tool Makers. CoRR abs/2305.17126 (2023). [12] Paris Carbone, Asterios Katsifodimos, Stephan Ewen, Volker Markl, Seif Haridi, and Kostas Tzoumas. 2015. Apache Flink: Stream and batch processing in a single engine. IEEE Data Eng. | 2309.02033#49 | 2309.02033#51 | 2309.02033 | [
"2306.11644"
] |
2309.02033#51 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | Bull. 38, 4 (2015). [13] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang. 2023. Quantifying Memorization Across Neural Language Models. In ICLR. [14] Moses S. Charikar. 2002. Similarity Estimation Techniques from Rounding Algorithms. In STOC. 380–388. | 2309.02033#50 | 2309.02033#52 | 2309.02033 | [
"2306.11644"
] |
2309.02033#52 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | [15] ChatGLM2-6B. 2023. https://github.com/THUDM/ChatGLM2-6B [16] ChatLLaMA. 2023. https://github.com/nebuly-ai/nebuly/tree/main/optimization/chatllama [17] Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, and Hongxia Jin. 2023. | 2309.02033#51 | 2309.02033#53 | 2309.02033 | [
"2306.11644"
] |
2309.02033#53 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | AlpaGasus: Training A Better Alpaca with Fewer Data. CoRR abs/2307.08701 (2023). [18] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. | 2309.02033#52 | 2309.02033#54 | 2309.02033 | [
"2306.11644"
] |