3 The GoT Framework

We now detail the GoT framework. We present it in Figure 1, and compare it to other prompting strategies.

[Figure 1 near here: side-by-side comparison of Basic Input-Output (IO), Chain-of-Thought (CoT), Multiple CoTs (CoT-SC), Tree of Thoughts (ToT), and Graph of Thoughts (GoT, this work). Thoughts receive positive or negative scores; CoT-SC selects the best-scoring chain; ToT additionally allows backtracking from a chain and abandoning thoughts; GoT further supports refining thoughts, aggregating thoughts, and aggregating chains, with arbitrary dependencies between thoughts.]
Figure 1: Comparison of Graph of Thoughts (GoT) to other prompting strategies.

Formally, GoT can be modeled as a tuple (G, T, E, R), where G is the "LLM reasoning process" (i.e., all the LLM thoughts within the context, with their relationships), T are the potential thought transformations, E is an evaluator function used to obtain scores of thoughts, and R is a ranking function used to select the most relevant thoughts.

3.1 Reasoning Process

We model the reasoning process as a directed graph G = (V, E), where V is a set of vertices and E ⊆ V × V is a set of edges; because G is directed, the edges are a subset of ordered vertex pairs. A vertex contains a solution to the problem at hand (be it an initial, intermediate, or final one). The concrete form of such a thought depends on the use case; it could be a paragraph (in writing tasks) or a sequence of numbers (in sorting). A directed edge (t1, t2) indicates that thought t2 has been constructed using t1 as "direct input", i.e., by explicitly instructing the LLM to use t1 for generating t2.

[Figure 2 near here. Graph theory view: a vertex models a thought, an edge models a dependency. Example sorting task: splitting an unsorted array into subarrays for subsequent sorting; merging sorted subarrays into a sorted array of numbers. Example writing task: generating summaries from an article to maximize quality; combining articles into a coherent summary.]

Figure 2: Examples of aggregation and generation thought transformations.

In certain use cases, graph nodes belong to different classes. For example, in writing tasks, some vertices model plans of writing a paragraph, while other vertices model the actual paragraphs of text.
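For illustration, a minimal Python sketch (not the official GoT implementation; all names are illustrative) of this model represents thoughts and their "constructed-from" dependencies as a directed graph, with an optional vertex class to support the heterogeneous graphs described next:

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    """A vertex of the reasoning graph: one LLM-produced (partial) solution."""
    id: int
    content: str               # e.g., a paragraph, or a sequence of numbers
    cls: str = "default"       # vertex class, e.g., "plan" or "par" in writing tasks
    score: float | None = None

@dataclass
class ReasoningGraph:
    """G = (V, E): thoughts and the dependencies between them."""
    vertices: dict[int, Thought] = field(default_factory=dict)
    edges: set[tuple[int, int]] = field(default_factory=set)  # (t1, t2): t2 built from t1

    def add_thought(self, t: Thought, parents: tuple[int, ...] = ()) -> None:
        self.vertices[t.id] = t
        for p in parents:
            self.edges.add((p, t.id))
```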
In such cases, GoT embraces a heterogeneous graph G = (V, E, c) to model the LLM reasoning, where c maps vertices V into their respective classes C (in the above case, it would be C = {plan, par}). Hence, any vertex v can model different aspects of reasoning.

We associate G with the LLM reasoning process. To advance this process, one applies thought transformations to G. An example of such a transformation is to merge the best-scoring (so far) thoughts into a new one. Another example is to loop over a thought, in order to enhance it. Note that these transformations strictly extend the set of transformations available in CoT, CoT-SC, or ToT.

# 3.2 Transformations of Thoughts

GoT enables novel transformations of thoughts thanks to the graph-based model for reasoning.
We refer to them as graph-enabled transformations. For example, in writing, one could combine several input articles into one coherent summary. In sorting, one could merge several sorted subarrays of numbers into a final sorted array. We illustrate examples of aggregation and generation in Figure 2.

Formally, each such transformation can be modeled as T(G, pθ), where G = (V, E) is the graph reflecting the current state of the reasoning, and pθ is the used LLM. T modifies G, usually by adding new vertices and their incoming edges. We have G' = T(G, pθ) = (V', E'), where V' = (V ∪ V+) \ V− and E' = (E ∪ E+) \ E−. V+ and E+ are new vertices and edges inserted into G to model the new thoughts and their dependencies, respectively. To maximize the expressiveness of GoT, we also enable the user to explicitly remove thoughts, by specifying the corresponding vertices and edges to be removed (V− and E−, respectively). Here, it is the user's responsibility to ensure that the sets V+, E+, V−, and E− come with consistent transformations (i.e., for example, that the user does not attempt to remove a vertex that does not exist). This enables seamless incorporation of schemes where, in order to save space within the context, one can remove parts of reasoning that do not promise improvements.
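Building on the illustrative sketch above, a transformation can be applied as a pure update of the vertex and edge sets:

```python
def apply_transformation(graph, new_vertices, new_edges,
                         removed_vertices=frozenset(), removed_edges=frozenset()):
    """G' = T(G, p_theta): V' = (V u V+) \\ V-, E' = (E u E+) \\ E-.
    The caller must keep V+, E+, V-, E- consistent with each other."""
    for t in new_vertices:                                   # V+
        graph.vertices[t.id] = t
    graph.edges |= set(new_edges)                            # E+
    for vid in removed_vertices:                             # V-
        graph.vertices.pop(vid, None)
        graph.edges = {(a, b) for (a, b) in graph.edges if vid not in (a, b)}
    graph.edges -= set(removed_edges)                        # E-
    return graph
```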
The specific form of T and how it impacts G depends on the specific transformation. We first detail the primary graph-enabled thought transformations, and then proceed to describe how GoT embraces the transformations from the earlier schemes. Unless stated otherwise, V− = E− = ∅.

Aggregation Transformations. First, with GoT, one can aggregate arbitrary thoughts into new ones, to combine and reinforce the advantages of these thoughts, while eliminating their disadvantages. In the basic form, in which only one new vertex is created, V+ = {v+} and E+ = {(v1, v+), ..., (vk, v+)}, where v1, ..., vk are the merged k thoughts. More generally, this enables aggregating reasoning paths, i.e., longer chains of thoughts, beyond just individual thoughts. With the graph model, it is simply achieved by adding outgoing edges from the vertices v1, ..., vk, modeling final thoughts in several chains, into a single thought v+ combining these chains.

Refining Transformations. Another thought transformation is the refining of a current thought v by modifying its content: V+ = {} and E+ = {(v, v)}. This loop in the graph indicates an iterated thought with the same connections as the original thought.

Generation Transformations. Finally, one can generate one or more new thoughts based on an existing single thought v. This class embraces analogous reasoning steps from earlier schemes, such as ToT or CoT-SC. Formally, we have V+ = {v+_1, ..., v+_k} and E+ = {(v, v+_1), ..., (v, v+_k)}.
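Continuing the same illustrative sketch, the three transformation classes reduce to different choices of V+ and E+:

```python
def aggregate(graph, merged_ids, combined):
    """Aggregation: V+ = {v+}, E+ = {(v1, v+), ..., (vk, v+)}."""
    apply_transformation(graph, [combined],
                         [(vid, combined.id) for vid in merged_ids])

def refine(graph, vid, improved_content):
    """Refining: V+ = {}, E+ = {(v, v)}; the self-loop marks an iterated thought."""
    graph.vertices[vid].content = improved_content
    graph.edges.add((vid, vid))

def generate(graph, source_id, new_thoughts):
    """Generation: V+ = {v+_1, ..., v+_k}, E+ = {(v, v+_1), ..., (v, v+_k)}."""
    apply_transformation(graph, new_thoughts,
                         [(source_id, t.id) for t in new_thoughts])
```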
3.3 Scoring & Ranking Thoughts

Thoughts are scored to understand whether the current solution is good enough. A score is modeled as a general function E(v, G, pθ), where v is the thought to be evaluated. We use the state of the whole reasoning process (G) in E for maximum generality, because, for example, in some evaluation scenarios scores may be relative to other thoughts.

GoT can also rank thoughts. We model this with a function R(G, pθ, h), where h specifies the number of highest-ranking thoughts in G to be returned by R. While the specific form of R depends on the use case, we most often use a simple yet effective strategy where the h thoughts with the highest scores are returned, i.e., v1, ..., vh = R(G, pθ, h).

Specific forms of E and R depend on the use case. We discuss the details in Section 5. For example, the score (or rank) for sorting corresponds to the count of elements correctly sorted (or incorrectly, when obtaining the error as a score).
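A minimal ranking strategy matching this description (again only a sketch; score_fn stands for any user-supplied evaluator E, which may or may not consult the LLM):

```python
def rank(graph, score_fn, h):
    """R(G, p_theta, h): return the h highest-scoring thoughts in G."""
    for t in graph.vertices.values():
        if t.score is None:
            t.score = score_fn(t, graph)   # E(v, G, p_theta)
    return sorted(graph.vertices.values(), key=lambda t: t.score, reverse=True)[:h]
```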
4 System Architecture & Extensibility

The GoT architecture consists of a set of interacting modules; see Figure 3 (the blue part). These modules are the Prompter (prepares the messages for the LLM), the Parser (extracts information from the LLM's replies), the Scoring module (verifies and scores the LLM replies), and the Controller (coordinates the entire reasoning process, and decides how to progress it). The Controller contains two further important elements: the Graph of Operations (GoO) and the Graph Reasoning State (GRS). The GoO is a static structure that specifies the graph decomposition of a given task, i.e., it prescribes the transformations to be applied to LLM thoughts, together with their order and dependencies. The GRS is a dynamic structure that maintains the state of the ongoing LLM reasoning process (the history of its thoughts and their states).
# 4.1 Prompter

The Prompter prepares the prompt to be sent to the LLM. This module is responsible for the specifics of encoding the graph structure within the prompt. The GoT architecture enables the user to implement use-case-specific graph encodings by providing full access to the graph structure.

# 4.2 Parser

The Parser extracts information from the LLM's thoughts. For each such thought, the Parser constructs the thought state, which contains this extracted information. The thought state is then used to update the GRS accordingly.
# 4.3 Scoring & Validation

Here, we verify whether a given LLM thought satisfies potential correctness conditions, and then we assign it a score. Depending on how the score is derived, the module may consult the LLM. Moreover, depending on the use case, the score may also be assigned by a human. Finally, use cases such as sorting use simple local scoring functions.

# 4.4 Controller

The Controller implements a specific strategy for selecting thoughts from its GRS structure. It also selects what transformations should be applied to which thoughts, and then passes this information to the Prompter. It also decides whether the whole process should be finalized, or whether the next round of interaction with the LLM should be initiated. In our current design, this is dictated by the execution plan specified in the GoO.

# 4.5 GoO & GRS

The user constructs a GoO instance, which prescribes the execution plan of thought operations. The GoO is a static structure that is constructed once, before the execution starts. Each operation object knows its predecessor and successor operations. Then, during the execution, an instance of the GRS maintains the continually updated information about the LLM reasoning process. This includes which operations have been executed so far, the states of all the generated LLM thoughts, their validity and scores, and any other relevant information.

The above elements offer extensible APIs, enabling straightforward implementations of different prompting schemes. The APIs are outlined in the green part of Figure 3, and detailed in the documentation. We also provide examples of prompts used by these operations and a corresponding GRS in the red part of Figure 3.

Figure 3, green part: API for the Controller (extensible):
- LLM parameters: model used, temperature, max tokens, API key, organization, ...
- LLM cost features: prompt token cost, response token cost, ...
- Instances of the Prompter, the Parser, and the Graph of Operations
- Any additional input parameters (e.g., numbers to be sorted)
Available operations when building a GoO (extensible):
- Generate, Aggregate, Score, ... (see the Prompter API)
- KeepBest(N): preserves the N best-scoring thoughts
- Repeat(k): repeats a given operation k times, generating k thoughts. For example, this enables Aggregate to generate multiple outcomes of the combination operation; each such thought is maintained within the Graph Reasoning State and scored individually.

API for the Prompter (extensible):
- Generate(t, k): generate a prompt for k new thoughts, using thought t
- ValidateAndImprove(t): generate a prompt to enhance thought t
- Aggregate(t1, ..., tk): generate a prompt to combine thoughts t1, ..., tk
- Score(t): score thought t
- Validate(t): generate a prompt to validate the correctness of thought t

Figure 3, blue part: architecture overview. The Controller's goal is to initiate, coordinate, manage, and progress the GoT execution.
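As a rough illustration of how such operations could be chained into a Graph of Operations (hypothetical, simplified class names; the actual framework API may differ), a GoO is a static pipeline of operation objects that know their predecessors and successors. The example sketches the merge-based sorting decomposition of Figure 4:

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    kind: str                                   # "Generate", "Aggregate", "Score", ...
    params: dict = field(default_factory=dict)
    successors: list["Operation"] = field(default_factory=list)
    predecessors: list["Operation"] = field(default_factory=list)

    def then(self, nxt: "Operation") -> "Operation":
        self.successors.append(nxt)
        nxt.predecessors.append(self)
        return nxt

# GoO for merge-based sorting of 64 numbers (cf. Figure 4):
split        = Operation("Generate", {"k": 4})    # split into four 16-number chunks
sort_chunk   = Operation("Generate", {"k": 1})    # sort each chunk ...
repeat_sort  = Operation("Repeat",   {"k": 3})    # ... three times, independently
score        = Operation("Score")                 # assess how well each sequence is sorted
keep_best    = Operation("KeepBest", {"N": 1})    # keep the best sorting per chunk
merge        = Operation("Aggregate", {"k": 2})   # merge two sorted subarrays
repeat_merge = Operation("Repeat",   {"k": 10})   # try 10 merges; the best one is kept later

split.then(sort_chunk).then(repeat_sort).then(score).then(keep_best) \
     .then(merge).then(repeat_merge)
```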
Figure 3, blue part, continued: module goals. The Graph of Operations specifies the LLM thought transformations; the Graph Reasoning State maintains the state of the ongoing LLM reasoning process; the Prompter builds a prompt to be sent to the LLM; the Parser extracts information from the LLM's replies; Scoring & validation (by the LLM or a human) assesses the quality of the LLM's solution; Ranking indicates the top-scoring thoughts.

Specifying the structure of the Graph of Operations (GoO): the GoO enables seamless specification of not only GoT, but also existing schemes such as CoT, CoT-SC, and ToT.

API for the Parser (extensible): ParseGenerate, ParseAggregate, ParseImprove, ParseValidate, ParseScore. Each of these routines is responsible for parsing the LLM's reply to the corresponding Prompter routine (e.g., ParseScore parses the reply to Score).

Figure 3, red part: example prompts and the Graph Reasoning State for the sorting use case (some examples within each prompt are omitted due to space constraints).

Initial/system prompt (optional): "Hello. I want to sort the following input sequence of numbers: {input}"

A prompt used by Generate(t, k=4): "<Instruction> Split the following list of 64 numbers into 4 lists of 16 numbers each; the first list should contain the first 16 numbers, the second list the second 16 numbers, the third list the third 16 numbers, and the fourth list the fourth 16 numbers. Only output the final 4 lists in the following format without any additional text or thoughts: {"List 1": [...], "List 2": [...], "List 3": [...], "List 4": [...]} </Instruction> <Example> (omitted) </Example> Input: {input}". This prompt is used by an operation Generate where the branching factor is k = 4: four new thoughts are constructed based on the LLM replies to this prompt.

A prompt used by Generate(t, k=1)+Repeat(k=4): "<Instruction> Sort the following list of numbers in ascending order. Output only the sorted list of numbers, no additional text. </Instruction> <Example> (omitted) </Example> Input: {input}". This prompt is used by an operation Generate where a single thought is generated; however, as we chain it with the operation Repeat with k=4, the underlying GoT framework ensures that Generate executes 4 times and results in 4 separate thoughts. Note that, from the graph theory perspective, the resulting GRS is identical to that of the operation Generate(t, k=4). The difference between these two is that Generate(t, k=4) gives the user more control over how these multiple thoughts are constructed, while Generate(t, k=1)+Repeat(k=4) is less flexible but easier to use. Moreover, with Repeat one has 4 context-isolated responses from the LLM for identical prompts, whereas without Repeat there is only one context where all 4 thoughts are generated and must be explicitly handled in a single prompt/session.

A prompt used by Aggregate(t1, t2)+Repeat(k=3)+KeepBest(N=1): "<Instruction> Merge the following 2 sorted lists of length {length} each into one sorted list of length {length2} using a merge sort style approach. Only output the final merged list without any additional text or thoughts! </Instruction> <Approach> To merge the two lists in a merge-sort style approach, follow these steps: 1. Compare the first element of both lists. 2. Append the smaller element to the merged list and move to the next element in the list from which the smaller element came. 3. Repeat steps 1 and 2 until one of the lists is empty. 4. Append the remaining elements of the non-empty list to the merged list. </Approach> Merge the following two lists into one sorted list: 1: {input1} 2: {input2} Merged list:". This prompt is used by an operation Aggregate where the aggregation factor is k=2 (the two input thoughts, t1 and t2, are aggregated). The aggregation is repeated 3 times, to maximize quality, and only the best result is selected. Note that this example covers the merge operation only; the remaining operations are specified in the GoO and are handled by the underlying GoT framework.

A prompt used by Improve(t)+Repeat(k=4): "<Instruction> The following two lists represent an unsorted list of numbers and a sorted variant of that list. The sorted variant is not correct. Fix the sorted variant so that it is correct. Make sure that the output list is sorted in ascending order, has the same number of elements as the input list ({length}), and contains the same elements as the input list. </Instruction> <Approach> To fix the incorrectly sorted list, follow these steps: 1. For each number from 0 to 9, compare the frequency of that number in the incorrectly sorted list to the frequency of that number in the input list. 2. Iterate through the incorrectly sorted list and add or remove numbers as needed to make the frequency of each number in the incorrectly sorted list match the frequency of that number in the input list. </Approach> <Examples> Input: [3, 7, 0, 2, 8, 1, 2, 2, 2, 4, 7, 8, 5, 5, 3, 9] Incorrectly Sorted: [0, 0, 0, 0, 0, 1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 7, 7, 8, 8, 9, 9, 9, 9] Reason: The incorrectly sorted list contains four extra 0s, two extra 4s and three extra 9s and is missing two 2s. Output: [0, 1, 2, 2, 2, 2, 3, 3, 4, 5, 5, 7, 7, 8, 8, 9] Input: [6, 4, 5, 7, 5, 6, 9, 7, 6, 9, 4, 6, 9, 8, 1, 9, 2, 4, 9, 0, 7, 6, 5, 6, 6, 2, 8, 3, 9, 5, 6, 1] Incorrectly Sorted: [0, 1, 1, 2, 2, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 8, 8, 9, 9, 9, 9, 9] Reason: The incorrectly sorted list contains two extra 4s and is missing two 6s and one 9. Output: [0, 1, 1, 2, 2, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 8, 8, 9, 9, 9, 9, 9, 9] </Examples> Input: {input} Incorrectly Sorted: {incorrectly_sorted}". This prompt is used by an operation Improve(t), which enhances a given thought t using information provided in another thought. Depending on how the Improve + Repeat operation is implemented by the user, it can either generate a number of new thoughts in the GRS, similarly to Generate + Repeat, or it may refine the same thought in the GRS, chaining k=4 refinement iterations together.
Figure 3: The system architecture of GoT, and the APIs of the respective modules. The user can straightforwardly extend the design towards new prompting schemes, experiment with novel thought transformations, and plug in different LLMs. The blue part of the figure contains the architecture overview, the green part lists the API, and the red part contains example prompts together with a GRS and the operations involved.

5 Example Use Cases

We now describe several use cases of GoT. We detail one use case (sorting) and summarize the others.

5.1 Sorting

We focus on the decomposition of the sorting use case and its Graph of Operations, which are central for implementing and executing any workload within GoT. We consider sorting numbers 0-9 with duplicates. The considered LLMs are unable to consistently sort a sequence of such numbers correctly beyond a certain length, because duplicate counts do not match.

In GoT, we employ merge-based sorting: First, one decomposes the input sequence of numbers into subarrays. Then, one sorts these subarrays individually, and then respectively merges them into the final solution. Figure 4 illustrates this use case together with its graph decomposition. Here, an LLM thought is a sequence of sorted numbers.

To score an outcome, denote the input sequence with [a1, a2, ..., an] and the output one with [b1, b2, ..., bm].
We use the following score that determines "the scope" of errors:

error-scope = X + Y,

where, with p ∈ {1, ..., m} and q ∈ {1, ..., n},

X = Σ_{i=1}^{m−1} sgn(max(b_i − b_{i+1}, 0)),
Y = Σ_{x=0}^{9} | |{b_p : b_p = x}| − |{a_q : a_q = x}| |.

Here, X indicates how many consecutive pairs of numbers are incorrectly sorted. If two numbers at positions i and i+1 are incorrectly sorted (i.e., b_i > b_{i+1}), then the expression within the summation returns 1, increasing the error score by one. For two correctly sorted numbers, this expression amounts to 0. Then, Y determines how well a given output sequence preserves the frequency of output numbers. Specifically, for each considered number x (x ∈ {0, ..., 9}), we obtain the difference between the count of input elements equal to x and the count of output elements equal to x. For an output sequence perfectly preserving the frequency of x, this amounts to 0. Any single "deviation" in this count increases the "error scope" by 1. We then sum this over all considered values of x. When plotting this score, to improve the clarity of plots, we additionally apply clipping min(error-scope, n), as some baselines (IO, CoT) result in large numbers of outliers with a high error scope. Finally, to use a "positive score" describing "the scope of correctly sorted" elements, one can use the value max(n − error-scope, 0).
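This score is straightforward to compute directly; the following is a small illustrative implementation (ours, not code from the paper) of the error scope and the derived positive score:

```python
def sorting_error_scope(inp: list[int], out: list[int]) -> int:
    """error-scope = X + Y for sorting digits 0-9 with duplicates."""
    # X: number of adjacent output pairs that are out of order.
    x = sum(1 for i in range(len(out) - 1) if out[i] > out[i + 1])
    # Y: total deviation of digit frequencies between input and output.
    y = sum(abs(out.count(d) - inp.count(d)) for d in range(10))
    return x + y

def sorting_positive_score(inp: list[int], out: list[int]) -> int:
    """max(n - error-scope, 0): 'the scope of correctly sorted' elements."""
    return max(len(inp) - sorting_error_scope(inp, out), 0)

# Example: one adjacent pair out of order, digit counts preserved.
assert sorting_error_scope([1, 2, 2, 3], [1, 2, 3, 2]) == 1
```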
[Figure 4 near here: a Graph of Operations (GoO) for sorting 64 numbers. Note that this is an example graph decomposition; the structure and connections between operations can be arbitrarily modified. The first Generate operation splits the 64-element input array into four 16-element chunks. Sorting is implemented within a subsequent Generate operation; here, N=3 means that, for each 16-element chunk, we generate three different sortings. A Score operation then assesses how well each sequence is sorted: for every number 0-9, we take the difference between its count in the input and in the sorted list, and sum all 10 values (zero indicates a correctly sorted sequence). KeepBest with N=1 maintains a single best-scoring sorting outcome out of the three input ones. Aggregate operations then merge pairs of sorted 16-element subarrays into 32-element subarrays, and those into the final 64-element array; here, N=10 means that we try 10 different aggregations of the two input subarrays.]

Figure 4: An example graph decomposition of the sorting use case in GoT. All the used operations (Generate, Aggregate, Score, KeepBest) are described in Figure 3.
5.2 Set Operations

Moreover, we also consider set operations, focusing on set intersection. They have numerous applications (particularly set intersection) in problems ranging from genome or document comparisons to pattern matching [9-11, 20, 27, 38, 50, 58]. Set intersection of two sets is implemented similarly to sorting. The second input set is split into subsets, and the intersection of those subsets with the first input set is determined with the help of the LLM. Afterwards, the resulting intersection sets are aggregated into the final result. For the evaluation, we use different set sizes of 32, 64, and 128 elements, and we vary the fraction of elements found in both sets between 25% and 75%.

Our score indicates the total number of missing or incorrectly included elements in the final intersection. Specifically, denote the two input sets with A = [a1, a2, ..., an] and B = [b1, b2, ..., bn], and the output set with C = [c1, c2, ..., cm].
Then, error-scope = X1 + X2 + Xd, where X1 = |C \ (A ∩ B)| is the number of elements in C that are not supposed to be there, X2 = |(A ∩ B) \ C| is the number of elements missing from C, and Xd is the number of duplicates in C (because the LLM expresses the set as a list in natural language). Finally, to use a "positive score" describing "the scope of correctly computed" elements, one can use the value max(n − error-scope, 0).
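Again for illustration only, this score can be computed as:

```python
def intersection_error_scope(a: list[int], b: list[int], c: list[int]) -> int:
    """error-scope = X1 + X2 + Xd for the set-intersection use case."""
    truth = set(a) & set(b)
    x1 = len(set(c) - truth)       # elements in C that should not be there
    x2 = len(truth - set(c))       # elements of the true intersection missing from C
    xd = len(c) - len(set(c))      # duplicates in C (the LLM outputs a list)
    return x1 + x2 + xd
```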
5.3 Keyword Counting

Keyword counting finds the frequency of keywords in a given category (countries in our example implementation) within the input text. GoT splits the input text into multiple passages, counts the keywords in each passage, and aggregates the sub-results. The number of passages is configurable and can also be left to the LLM, making it possible to treat each sentence as a separate passage. Here, to score a thought, we first derive, for each keyword, the absolute difference between the computed count and the correct one. We then sum all these differences to get the final score.
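A small illustrative scoring helper (ours) for this use case:

```python
def keyword_count_error(computed: dict[str, int], correct: dict[str, int]) -> int:
    """Sum of absolute differences between computed and correct keyword counts."""
    keywords = set(computed) | set(correct)
    return sum(abs(computed.get(k, 0) - correct.get(k, 0)) for k in keywords)

# Example: one count is off by one, the other is exact.
assert keyword_count_error({"France": 2, "Chile": 1}, {"France": 3, "Chile": 1}) == 1
```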
5.4 Document Merging

Finally, we also provide document merging. Here, the goal is to generate a new Non-Disclosure Agreement (NDA) document based on several input ones that partially overlap in terms of their contents. The goal is to ensure a minimal amount of duplication, while maximizing information retention. Document merging is broadly applicable in, e.g., legal procedures, where multiple sources of information have to be combined into a single document or article. To score a solution, we query the LLM for two values (3 times for each value, and take the average). The first value corresponds to the solution redundancy (10 indicates no redundancy, 0 implies at least half the information is redundant); the second value stands for information retention (10 indicates all information is retained, 0 says that none is retained). We compute the harmonic mean of these values.

6 The Latency-Volume Tradeoff

We now show that GoT improves upon previous prompting schemes in terms of the tradeoff between latency (number of hops in the graph of thoughts to reach a given final thought) and volume. We define volume, for a given thought t, as the number of preceding LLM thoughts that could have impacted t. Formally, the volume of t is the number of thoughts from which there exists a path to t in the graph of thoughts.
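The volume of a thought is thus an ancestor count in the graph of thoughts; a minimal sketch (ours), reusing the edge representation from the earlier sketches, where the thought itself is not counted:

```python
def volume(edges: set[tuple[int, int]], t: int) -> int:
    """Number of thoughts from which there exists a path to thought t."""
    preds: dict[int, set[int]] = {}
    for u, v in edges:
        preds.setdefault(v, set()).add(u)
    seen: set[int] = set()
    stack = [t]
    while stack:
        for u in preds.get(stack.pop(), ()):
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen)
```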
We assume that outputting a single thought costs O(1) time and fix the total cost to Θ(n) for each prompting scheme. The structure of the schemes is as follows. CoT-SC consists of k independent chains originating from a single starting thought. ToT is a complete k-ary tree. Finally, in GoT, a complete k-ary tree is joined at its leaves with a "mirrored" k-ary tree of the same size but with its edges reversed.

The analysis is detailed in Table 2. CoT offers a large volume of up to N, but at the cost of a high latency of N. CoT-SC reduces the latency by a factor of k (which corresponds to its branching factor), but it simultaneously decreases the volume by k as well. ToT offers a latency of log_k N but also has low volume. GoT is the only scheme to come with both a low latency of log_k N and a high volume N. This is enabled by the fact that GoT harnesses aggregations of thoughts, making it possible to reach the final thought from any other intermediate thought in the graph decomposition.

| Scheme | Latency | Volume |
|---|---|---|
| Chain-of-Thought (CoT) | N | N |
| Self-Consistency with CoT (CoT-SC) | N/k | N/k |
| Tree of Thoughts (ToT) | log_k N | O(log_k N) |
| Graph of Thoughts (GoT) | log_k N | N |
Table 2: Comparison of prompting schemes, with respect to their fundamental tradeoff between latency and volume. GoT offers the best tradeoff.

7 Evaluation

We show the advantages of GoT over the state of the art. We focus on comparing GoT to ToT, as it was shown to consistently outperform other schemes. Still, for a broad comparison, we also experiment with IO, CoT, and CoT-SC. As our analysis results in a large evaluation space, we present representative results and omit data that does not bring relevant insights (e.g., CoT-SC).

7.1 Evaluation Methodology

We use 100 input samples for each task and comparison baseline. We set the temperature to 1.0 and use a 4k context unless stated otherwise. For each experiment, we fix the numbers of thoughts in the respective schemes to achieve similar costs in each experiment.

Parameters. We experiment extensively with the branching factor k and the number of levels L to ensure that we compare GoT to cost-effective and advantageous configurations. We plot two variants of ToT: one with higher k and lower depth (ToT), the other with lower k but higher L (ToT2). We usually aim to achieve a sweet spot in the tradeoff between sparser generation rounds (lower k) vs. more rounds (larger L). Usually, more responses per round are more expensive (e.g., 80 vs. 60 total responses for Figure 7, but $6 vs. $3 costs). We also try different problem sizes P (e.g., in sorting, P states how many numbers are to be sorted).
[Figure 5 near here: number of incorrectly sorted elements (lower is better) and total cost for IO, CoT, ToT, ToT2, and GoT, for problem sizes of 32, 64, and 128 elements; error counts are clipped, and the GoT decomposition follows Figure 4 and the appendix.]

Figure 5: Number of errors and cost in sorting tasks with ChatGPT-3.5. L and k indicate the structure of ToT (see Sections 3.2 and 6).
Used LLMs. Due to budget restrictions, we focus on GPT-3.5. We also experimented with Llama-2, but it was usually worse than GPT-3.5 and also much slower to run, making it infeasible to obtain enough samples.

# 7.2 Analysis of GoT's Advantages
The results of the analysis are in Figures 5 (sorting), 6 (set intersection), 7 (keyword counting), and 8 (document merging); see Section 5 for the description of the specific use cases. Overall, GoT improves the quality of outcomes over all the considered baselines, and it reduces inference costs compared to ToT.

GoT vs. ToT. GoT improves upon ToT and ToT2 by a large margin over all the considered problem instances. ToT usually comes with somewhat higher quality than ToT2, but simultaneously much higher costs. GoT's costs are always lower than ToT's, and comparable (in some cases lower, in others higher) to ToT2's. For example, it reduces the median error by ≈62%, thereby achieving a higher quality of sorting, for P = 128 in comparison to ToT, while ensuring >31% cost reductions. These advantages are due to GoT's ability to decompose complex tasks into simpler sub-tasks, solve these sub-tasks independently, and then incrementally merge these outcomes into the final result.

GoT vs. IO and CoT. GoT consistently delivers a much higher quality of outcomes than IO/CoT. For example, for sorting (P = 64), GoT's median error is ≈65% and ≈83% lower than, respectively, CoT's and IO's. Yet, the costs of GoT and ToT are much higher than in IO and CoT. This is mostly due to our configuration of CoT, where we do not artificially inflate the lengths of the chains of reasoning if this does not improve the outcomes. The higher costs of GoT and ToT are driven by the k new thoughts built for each Generate operation; these multiple thoughts are one of the reasons for GoT's superiority in quality.

Increasing Complexity of Tackled Problems. Most importantly, the advantages of GoT in quality increase over all the baselines with the growing size of the problem P. For example, in sorting, while for P = 32 GoT only negligibly improves upon ToT2, its median error count becomes lower by ≈61% for P = 64 and ≈69% for P = 128. The quartiles also become respectively better. The results for the other schemes also follow the intuition; for example, IO becomes consistently worse with increasing P, which is expected, as a single thought is unlikely to solve a large problem instance. Overall, this analysis illustrates that GoT is indeed well-suited for elaborate problem cases, as the execution schedules usually become more complex with growing problem sizes.

# 7.3 Discussion on Task Decomposition

When splitting a task into subtasks and then solving these subtasks, the size of responses and the input (in tokens) are reduced proportionally to the degree of task decomposition. However, the "static" part of the prompt (i.e., few-shot examples) may become a significant overhead (see GoT4 to GoT8 in Figure 7). Here, we observe that these few-shot examples can usually also be reduced in size (e.g., the passages used to demonstrate keyword counting can also be made smaller and still be indicative of the actual input size), thus actively working towards decreasing the cost (e.g., see the difference between GoT8 and GoTx in Figure 7).

The overall goal when conducting graph decomposition is to break down a task to the point where the LLM can solve it correctly for the majority of the time using a single prompt (or with a few additional improvement steps). This significantly lowers the number of improvement/refinement steps needed during the later stages of the graph exploration. Furthermore, as indicated by our results, combining or concatenating sub-results is usually an easier task than solving large task instances from scratch. Hence, the LLM is often successful when aggregating the final solution.
[Figure 6 near here: number of errors and total cost for IO, CoT, ToT, ToT2, and GoT in set intersection, for set sizes of 32, 64, and 128 elements; the GoT decomposition is detailed in the appendix.]

Figure 6: Number of errors and cost in set intersection with ChatGPT-3.5. L and k indicate the structure of ToT (see Sections 3.2 and 6).

[Figure 7 near here: number of samples solved correctly, number of errors, and total cost for IO, CoT, ToT, ToT2, GoT4, GoT8, and GoTx in keyword counting. GoT4 splits the input text into 4 passages, counts keywords in each one, and aggregates the sub-results, always 2 at a time; GoT8 is as GoT4, but splits the input text into 8 passages; GoTx splits the input into sentences (each input has 12-19 sentences).]

Figure 7: Number of errors and cost in keyword counting with ChatGPT-3.5. L and k indicate the structure of ToT (see Sections 3.2 and 6).

[Figure 8 near here: score and total cost for IO, CoT, ToT, GoT, and GoT2 in document merging; annotations: aggregation of fully merged NDAs (L=3), and aggregation of partially merged NDAs.]

Figure 8: Score and cost in document merging with ChatGPT-3.5. L and k indicate the structure of ToT (see Sections 3.2 and 6). Number of samples: 50; context size: 16k tokens.

# 8 Related Work

We summarize relations between GoT and related work.
# 8.1 Prompting Paradigms & Approaches

We detail different prompting paradigms in Section 1 and Table 1. There are numerous other works related to prompting. We now briefly summarize selected, most related ones; more extensive descriptions can be found in dedicated surveys [34, 40, 69, 70]. Wang et al. proposed Plan-and-Solve, an approach to enhance CoT with an explicit planning stage [66]. Using complexity-based criteria to enhance prompting within a CoT was designed by Fu et al. [29, 67]. The self-taught reasoner (STaR) [80] generates several chains of thoughts, and selects the ones that are valid. Similarly, a scheme by Shum et al. [61] generates a pool of CoT candidates, and selects the best candidate based on whether the candidates match the ground truth and on a policy gradient-based method. Automatic prompt generation overcomes the issues of scaling in CoT [41, 42, 59]. Zhou et al. propose to harness selecting the best prompt out of a candidate set [84]. Skeleton-of-Thought [47] first generates a number of skeleton answers (brief bullet points of 3 to 5 words) and expands on these points in parallel in a second step. Finally, in prompt chaining, one cascades different LLMs. This enables prompting different LLMs via different contexts, enabling more powerful reasoning [21, 23, 48, 51, 72, 73]. GoT is orthogonal to this class of schemes, as it focuses on the capabilities of a single context.

8.2 Self-Reflection & Self-Evaluation

Self-reflection and self-evaluation were introduced recently [45, 49, 60, 75]. They are used to enhance different tasks, for example code generation [17] or computer operation tasks [39].
In GoT, we partially rely on self-evaluation when taking decisions on how to expand the graph of thoughts within a prompt.

# 8.3 LLMs & Planning

There are many recent works on how to plan complex tasks with LLMs [36, 37, 68, 76, 78, 81]. GoT could be seen as a generic framework that could potentially be used to enhance such schemes, by offering a paradigm for generating complex graph-based plans.

# 8.4 Graphs and Graph Computing

Graphs have become an immensely popular and important part of the general computing landscape [31, 32, 44, 46, 56]. Recently, there has been a growing interest in domains such as graph databases [2-4, 7, 55], graph pattern matching [8, 10, 11, 18, 25, 62], graph streaming [1, 22, 26], and graph machine learning as well as graph neural networks [5, 6, 12, 16, 30, 33, 57, 74, 82, 83]. The graph abstraction has been fruitful for many modern research domains, such as social sciences (e.g., studying human interactions), bioinformatics (e.g., analyzing protein structures), chemistry (e.g., designing chemical compounds), medicine (e.g., drug discovery), cybersecurity (e.g., identifying intruder machines), healthcare (e.g., exposing groups of people who submit fraudulent claims), web graph analysis (e.g., providing accurate search services), entertainment services (e.g., predicting movie popularity), linguistics (e.g., modeling relationships between words), transportation (e.g., finding efficient routes), physics (e.g., understanding phase transitions and critical phenomena), and many others [15, 20, 35, 38, 44]. In this work, we harness the graph abstraction as a key mechanism that enhances prompting capabilities in LLMs.
# 9 Conclusion

Prompt engineering is one of the central new domains of large language model (LLM) research. It enables using LLMs efficiently, without any model updates. However, designing effective prompts is a challenging task.

In this work, we propose Graph of Thoughts (GoT), a new paradigm that enables the LLM to solve different tasks effectively without any model updates. The key idea is to model the LLM reasoning as an arbitrary graph, where thoughts are vertices and dependencies between thoughts are edges. This enables novel transformations of thoughts, such as aggregation.
Human task solving is often non-linear, and it involves combining intermediate solutions into final ones, or changing the flow of reasoning upon discovering new insights. GoT reflects this with its graph structure.

GoT outperforms other prompting schemes, for example ensuring a 62% increase in the quality of sorting over ToT, while simultaneously reducing costs by >31%. We also propose a novel metric for a prompting scheme, the volume of a thought, to indicate the scope of information that a given LLM output could carry with it; here, GoT also excels. This provides a step towards more principled prompt engineering.
The graph abstraction has been the foundation of several successful designs in computing and AI over the last decades, for example AlphaFold for protein predictions. Our work harnesses it within the realm of prompt engineering.

Acknowledgements

We thank Hussein Harake, Colin McMurtrie, Mark Klein, Angelo Mangili, and the whole CSCS team granting access to the Ault and Daint machines, and for their excellent technical support. We thank Timo Schneider for help with infrastructure at SPCL. This project received funding from the European Research Council (Project PSAP, No. 101002047), and the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 955513 (MAELSTROM). This project was supported by the ETH Future Computing Laboratory (EFCL), financed by a donation from Huawei Technologies. This project received funding from the European Union's HE research and innovation programme under the grant agreement No. 101070141 (Project GLACIATION).

References

[1] Besta, M.; Fischer, M.; Kalavri, V.; Kapralov, M.; and Hoefler, T. 2023. Practice of Streaming Processing of Dynamic Graphs: Concepts, Models, and Systems. IEEE Transactions on Parallel and Distributed Systems, 34(6): 1860-1876.
[2] Besta, M.; Gerstenberger, R.; Blach, N.; Fischer, M.; and Hoefler, T. 2023. GDI: A Graph Database Interface Standard. https://github.com/spcl/GDI-RMA. Accessed: 2023-09-05.

[3] Besta, M.; Gerstenberger, R.; Fischer, M.; Podstawski, M.; Blach, N.; Egeli, B.; Mitenkov, G.; Chlapek, W.; Michalewicz, M.; Niewiadomski, H.; Müller, J.; and Hoefler, T. 2023. The Graph Database Interface: Scaling Online Transactional and Analytical Graph Workloads to Hundreds of Thousands of Cores. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '23. ACM.

[4] Besta, M.; Gerstenberger, R.; Peter, E.; Fischer, M.; Podstawski, M.; Barthels, C.; Alonso, G.; and Hoefler, T. 2023. Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries. ACM Comput. Surv., 56(2).

[5] Besta, M.; Grob, R.; Miglioli, C.; Bernold, N.; Kwaśniewski, G.; Gjini, G.; Kanakagiri, R.; Ashkboos, S.; Gianinazzi, L.; Dryden, N.; and Hoefler, T. 2022. Motif Prediction with Graph Neural Networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '22, 35-45.

[6] Besta, M.; and Hoefler, T. 2022. Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis. arXiv:2205.09702.

[7] Besta, M.; Iff, P.; Scheidl, F.; Osawa, K.; Dryden, N.; Podstawski, M.; Chen, T.; and Hoefler, T. 2022. Neural Graph Databases. In Proceedings of the First Learning on Graphs Conference, volume 198 of Proceedings of Machine Learning Research, 31:1-31:38. PMLR.

[8] Besta, M.; Kanakagiri, R.; Kwaśniewski, G.; Ausavarungnirun, R.; Beránek, J.; Kanellopoulos, K.; Janda, K.; Vonarburg-Shmaria, Z.; Gianinazzi, L.; Stefan, I.; Luna, J. G.; Golinowski, J.; Copik, M.; Kapp-Schwoerer, L.; Di Girolamo, S.; Blach, N.; Konieczny, M.; Mutlu, O.; and Hoefler, T. 2021. SISA: Set-Centric Instruction Set Architecture for Graph Mining on Processing-in-Memory Systems. In Proceedings of the 54th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO '21, 282-297.

[9] Besta, M.; Kanakagiri, R.; Mustafa, H.; Karasikov, M.; Rätsch, G.; Hoefler, T.; and Solomonik, E. 2020. Communication-Efficient Jaccard Similarity for High-Performance Distributed Genome Comparisons. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium, IPDPS '20, 1122-1132.

[10] Besta, M.; Miglioli, C.; Labini, P. S.; Tětek, J.; Iff, P.; Kanakagiri, R.; Ashkboos, S.; Janda, K.; Podstawski, M.; Kwaśniewski, G.; Gleinig, N.; Vella, F.; Mutlu, O.; and Hoefler, T. 2022. ProbGraph: High-Performance and High-Accuracy Graph Mining with Probabilistic Set Representations. In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC '22. IEEE.

[11] Besta, M.; Vonarburg-Shmaria, Z.; Schaffner, Y.; Schwarz, L.; Kwaśniewski, G.; Gianinazzi, L.; Beranek, J.; Janda, K.; Holenstein, T.; Leisinger, S.; Tatkowski, P.; Ozdemir, E.; Balla, A.; Copik, M.; Lindenberger, P.; Konieczny, M.; Mutlu, O.; and Hoefler, T. 2021. GraphMineSuite: Enabling High-Performance and Programmable Graph Mining Algorithms with Set Algebra. Proc. VLDB Endow., 14(11): 1922-1935.

[12] Bronstein, M. M.; Bruna, J.; LeCun, Y.; Szlam, A.; and Vandergheynst, P. 2017. Geometric Deep Learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4): 18-42.

[13] Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems (NeurIPS '20), volume 33, 1877-1901. Curran Associates.

[14] Bubeck, S.; Chandrasekaran, V.; Eldan, R.; Gehrke, J.; Horvitz, E.; Kamar, E.; Lee, P.; Lee, Y. T.; Li, Y.; Lundberg, S.; Nori, H.; Palangi, H.; Ribeiro, M. T.; and Zhang, Y. 2023. Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.

[15] Chakrabarti, D.; and Faloutsos, C. 2006. Graph Mining: Laws, Generators, and Algorithms. ACM Comput. Surv., 38(1).

[16] Chami, I.; Abu-El-Haija, S.; Perozzi, B.; Ré, C.; and Murphy, K. 2020. Machine Learning on Graphs: A Model and Comprehensive Taxonomy. arXiv:2005.03675.

[17] Chen, X.; Lin, M.; Schärli, N.; and Zhou, D. 2023. Teaching Large Language Models to Self-Debug. arXiv:2304.05128.

[18] Cheng, J.; Yu, J. X.; Ding, B.; Philip, S. Y.; and Wang, H. 2008. Fast Graph Pattern Matching. In Proceedings of the IEEE 24th International Conference on Data Engineering, ICDE '08, 913-922.

[19] Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; Schuh, P.; Shi, K.; Tsvyashchenko, S.; Maynez, J.; Rao, A.; Barnes, P.; Tay, Y.; Shazeer, N.; Prabhakaran, V.; Reif, E.; Du, N.; Hutchinson, B.; Pope, R.; Bradbury, J.; Austin, J.; Isard, M.; Gur-Ari, G.; Yin, P.; Duke, T.; Levskaya, A.; Ghemawat, S.; Dev, S.; Michalewski, H.; Garcia, X.; Misra, V.; Robinson, K.; Fedus, L.; Zhou, D.; Ippolito, D.; Luan, D.; Lim, H.; Zoph, B.; Spiridonov, A.; Sepassi, R.; Dohan, D.; Agrawal, S.; Omernick, M.; Dai, A. M.; Pillai, T. S.; Pellat, M.; Lewkowycz, A.; Moreira, E.; Child, R.; Polozov, O.; Lee, K.; Zhou, Z.; Wang, X.; Saeta, B.; Diaz, M.; Firat, O.; Catasta, M.; Wei, J.; Meier-Hellstern, K.; Eck, D.; Dean, J.; Petrov, S.; and Fiedel, N. 2022. PaLM: Scaling Language Modeling with Pathways. arXiv:2204.02311.

[20] Cook, D. J.; and Holder, L. B., eds. 2006. Mining Graph Data. John Wiley & Sons.

[21] Creswell, A.; Shanahan, M.; and Higgins, I. 2022. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning. arXiv:2205.09712.

[22] Dhulipala, L.; Blelloch, G. E.; and Shun, J. 2019. Low-Latency Graph Streaming Using Compressed Purely-Functional Trees. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI '19, 918-934.

[23] Dohan, D.; Xu, W.; Lewkowycz, A.; Austin, J.; Bieber, D.; Lopes, R. G.; Wu, Y.; Michalewski, H.; Saurous, R. A.; Sohl-Dickstein, J.; Murphy, K.; and Sutton, C. 2022. Language Model Cascades. In Beyond Bayes: Paths Towards Universal Reasoning Systems, Workshop at ICML '22.

[24] Drori, I.; Zhang, S.; Shuttleworth, R.; Tang, L.; Lu, A.; Ke, E.; Liu, K.; Chen, L.; Tran, S.; Cheng, N.; Wang, R.; Singh, N.; Patti, T. L.; Lynch, J.; Shporer, A.; Verma, N.; Wu, E.; and Strang, G. 2022. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32): e2123433119.

[25] Fan, W.; Li, J.; Ma, S.; Tang, N.; Wu, Y.; and Wu, Y. 2010. Graph Pattern Matching: From Intractable to Polynomial Time. Proc. VLDB Endow., 3(1-2): 264-275.

[26] Feng, G.; Meng, X.; and Ammar, K. 2015. DISTINGER: A distributed graph data structure for massive dynamic graph processing. In Proceedings of the IEEE International Conference on Big Data, Big Data '15, 1814-1822.

[27] Friggeri, A.; Chelius, G.; and Fleury, E. 2011. Triangles to Capture Social Cohesion. In Proceedings of the IEEE Third International Conference on Privacy, Security, Risk and Trust and IEEE Third International Conference on Social Computing, PASSAT/SocialCom '11, 258-265.

[28] Friston, K. 2008. Hierarchical Models in the Brain. PLOS Computational Biology, 4(11): 1-24.

[29] Fu, Y.; Peng, H.; Sabharwal, A.; Clark, P.; and Khot, T. 2022. Complexity-Based Prompting for Multi-Step Reasoning. arXiv:2210.00720.

[30] Gianinazzi, L.; Fries, M.; Dryden, N.; Ben-Nun, T.; Besta, M.; and Hoefler, T. 2021. Learning Combinatorial Node Labeling Algorithms. arXiv:2106.03594.

[31] Gregor, D.; and Lumsdaine, A. 2005. Lifting Sequential Graph Algorithms for Distributed-Memory Parallel Computation. SIGPLAN Not., 40(10): 423-437.

[32] Gregor, D.; and Lumsdaine, A. 2005. The Parallel BGL: A generic library for distributed graph computations. Parallel Object-Oriented Scientific Computing (POOSC).

[33] Hamilton, W. L.; Ying, R.; and Leskovec, J. 2017. Representation Learning on Graphs: Methods and Applications. Bulletin of the Technical Committee on Data Engineering, 40(3): 52-74.

[34] Hartmann, M.; and Sonntag, D. 2022. A survey on improving NLP models with human explanations. In Proceedings of the First Workshop on Learning with Natural Language Supervision, 40-47. Association for Computational Linguistics.

[35] Horváth, T.; Gärtner, T.; and Wrobel, S. 2004. Cyclic Pattern Kernels for Predictive Graph Mining. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, 158-167.

[36] Huang, W.; Abbeel, P.; Pathak, D.; and Mordatch, I. 2022. Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, 9118-9147. PMLR.

[37] Huang, W.; Xia, F.; Xiao, T.; Chan, H.; Liang, J.; Florence, P.; Zeng, A.; Tompson, J.; Mordatch, I.; Chebotar, Y.; Sermanet, P.; Brown, N.; Jackson, T.; Luu, L.; Levine, S.; Hausman, K.; and Ichter, B. 2022. Inner Monologue: Embodied Reasoning through Planning with Language Models. arXiv:2207.05608.

[38] Jiang, C.; Coenen, F.; and Zito, M. 2013. A survey of frequent subgraph mining algorithms. The Knowledge Engineering Review, 28(1): 75-105.

[39] Kim, G.; Baldi, P.; and McAleer, S. 2023. Language Models can Solve Computer Tasks. arXiv:2303.17491.

[40] Lertvittayakumjorn, P.; and Toni, F. 2021. Explanation-Based Human Debugging of NLP Models: A Survey. Transactions of the Association for Computational Linguistics, 9: 1508-1528.

[41] Lester, B.; Al-Rfou, R.; and Constant, N. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '21, 3045-3059. Association for Computational Linguistics.

[42] Li, X. L.; and Liang, P. 2021. Prefix-Tuning: Optimizing Continuous Prompts for Generation. arXiv:2101.00190.

[43] Long, J. 2023. Large Language Model Guided Tree-of-Thought. arXiv:2305.08291.

[44] Lumsdaine, A.; Gregor, D.; Hendrickson, B.; and Berry, J. 2007. Challenges in Parallel Graph Processing. Parallel Processing Letters, 17(1): 5-20.

[45] Madaan, A.; Tandon, N.; Gupta, P.; Hallinan, S.; Gao, L.; Wiegreffe, S.; Alon, U.; Dziri, N.; Prabhumoye, S.; Yang, Y.; Gupta, S.; Majumder, B. P.; Hermann, K.; Welleck, S.; Yazdanbakhsh, A.; and Clark, P. 2023. Self-Refine: Iterative Refinement with Self-Feedback. arXiv:2303.17651.

[46] Malewicz, G.; Austern, M. H.; Bik, A. J.; Dehnert, J. C.; Horn, I.; Leiser, N.; and Czajkowski, G. 2010. Pregel:
"2302.13971"
]
|
2308.09687#71 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | A System for Large-Scale Graph Processing. In Proceedings of the International Conference on Man- agement of Data, SIGMOD â 10, 135â 146. ACM. [47] Ning, X.; Lin, Z.; Zhou, Z.; Wang, Z.; Yang, H.; and Wang, Y. 2023. Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding. arXiv:2307.15337. [48] Nye, M.; Andreassen, A. J.; Gur-Ari, G.; Michalewski, H.; Austin, J.; Bieber, D.; Dohan, D.; Lewkowycz, A.; Bosma, M.; Luan, D.; Sutton, C.; and Odena, A. 2021. | 2308.09687#70 | 2308.09687#72 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#72 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | Show Your Work: Scratchpads for Intermediate Com- putation with Language Models. arXiv:2112.00114. [49] Paul, D.; Ismayilzada, M.; Peyrard, M.; Borges, B.; Bosselut, A.; West, R.; and Faltings, B. 2023. RE- FINER: Reasoning Feedback on Intermediate Repre- sentations. arXiv:2304.01904. [50] Prat-P´erez, A.; Dominguez-Sal, D.; Brunat, J. M.; and Larriba-Pey, J.-L. 2012. Shaping Communities out In Proceedings of the 21st ACM Inter- of Triangles. national Conference on Information and Knowledge Management, CIKM â | 2308.09687#71 | 2308.09687#73 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#73 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | 12, 1677â 1681. [51] Qiao, S.; Ou, Y.; Zhang, N.; Chen, X.; Yao, Y.; Deng, S.; Tan, C.; Huang, F.; and Chen, H. 2023. Reasoning with Language Model Prompting: A Survey. In Pro- ceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL â 23, 5368â 5393. Association for Computational Linguistics. [52] qrdlgit. 2023. graph-of-thoughts Repository. https: Accessed: //github.com/qrdlgit/graph-of-thoughts. 2023-10-11. [53] Radford, A.; Narasimhan, K.; Salimans, T.; and Sutskever, I. 2018. Improving Language Understand- ing by Generative Pre-Training. https://openai.com/ research/language-unsupervised. Accessed: 2023-09- 06. [54] Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; and Sutskever, I. 2019. Language Models are Unsuper- vised Multitask Learners. https://openai.com/research/ better-language-models. Accessed: 2023-09-06. [55] Robinson, I.; Webber, J.; and Eifrem, E. 2015. Graph Databases: New Opportunities for Connected Data. | 2308.09687#72 | 2308.09687#74 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#74 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | Oâ Reilly Media, 2nd edition. [56] Sakr, S.; Bonifati, A.; Voigt, H.; Iosup, A.; Ammar, K.; Angles, R.; Aref, W.; Arenas, M.; Besta, M.; Boncz, P. A.; Daudjee, K.; Valle, E. D.; Dumbrava, S.; Har- tig, O.; Haslhofer, B.; Hegeman, T.; Hidders, J.; Hose, K.; Iamnitchi, A.; Kalavri, V.; Kapp, H.; Martens, W.; ¨Ozsu, M. T.; Peukert, E.; Plantikow, S.; Ragab, M.; Ri- peanu, M. R.; Salihoglu, S.; Schulz, C.; Selmer, P.; Se- queda, J. F.; Shinavier, J.; Sz´arnyas, G.; Tommasini, R.; Tumeo, A.; Uta, A.; Varbanescu, A. L.; Wu, H.- Y.; Yakovets, N.; Yan, D.; and Yoneki, E. 2021. | 2308.09687#73 | 2308.09687#75 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#75 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | The Future is Big Graphs: A Community View on Graph Processing Systems. Commun. ACM, 64(9): 62â 71. [57] Scarselli, F.; Gori, M.; Tsoi, A. C.; Hagenbuchner, M.; and Monfardini, G. 2008. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20(1): 61â 80. [58] Schaeffer, S. E. 2007. Graph clustering. Computer Science Review, 1(1): 27â 64. [59] Shin, T.; Razeghi, Y.; Logan IV, R. L.; Wallace, E.; and Singh, S. 2020. | 2308.09687#74 | 2308.09687#76 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#76 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. arXiv:2010.15980. [60] Shinn, N.; Labash, B.; and Gopinath, A. 2023. Re- flexion: Language Agents with Verbal Reinforcement Learning. arXiv:2303.11366. [61] Shum, K.; Diao, S.; and Zhang, T. 2023. Automatic Prompt Augmentation and Selection with Chain-of- Thought from Labeled Data. arXiv:2302.12822. [62] Teixeira, C. H. C.; Fonseca, A. J.; Serafini, M.; Siganos, G.; Zaki, M. J.; and Aboulnaga, A. 2015. Arabesque: A System for Distributed Graph Mining. In Proceedings of the 25th Symposium on Operating Systems Principles, SOSP â | 2308.09687#75 | 2308.09687#77 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#77 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | 15, 425â 440. ACM. [63] Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozi`ere, B.; Goyal, N.; Hambro, E.; Azhar, F.; Rodriguez, A.; Joulin, A.; Grave, E.; and Lample, G. 2023. LLaMA: Open and Efficient Foundation Language Models. arXiv:2302.13971. [64] Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Alma- hairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhar- gava, P.; Bhosale, S.; Bikel, D.; Blecher, L.; Ferrer, C. C.; Chen, M.; Cucurull, G.; Esiobu, D.; Fernandes, J.; Fu, J.; Fu, W.; Fuller, B.; Gao, C.; Goswami, V.; Goyal, N.; Hartshorn, A.; Hosseini, S.; Hou, R.; Inan, | 2308.09687#76 | 2308.09687#78 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#78 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | 13 H.; Kardas, M.; Kerkez, V.; Khabsa, M.; Kloumann, I.; Korenev, A.; Koura, P. S.; Lachaux, M.-A.; Lavril, T.; Lee, J.; Liskovich, D.; Lu, Y.; Mao, Y.; Martinet, X.; Mihaylov, T.; Mishra, P.; Molybog, I.; Nie, Y.; Poulton, A.; Reizenstein, J.; Rungta, R.; Saladi, K.; Schelten, A.; Silva, R.; Smith, E. M.; Subramanian, R.; Tan, X. E.; Tang, B.; Taylor, R.; Williams, A.; Kuan, J. X.; Xu, P.; Yan, Z.; Zarov, I.; Zhang, Y.; Fan, A.; Kambadur, M.; Narang, S.; Rodriguez, A.; Sto- jnic, R.; Edunov, S.; and Scialom, T. 2023. | 2308.09687#77 | 2308.09687#79 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#79 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv:2307.09288. [65] Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Å .; and Polosukhin, I. 2017. Attention is All you Need. In Advances in Neu- ral Information Processing Systems (NIPS â 17), vol- ume 30. Curran Associates. [66] Wang, L.; Xu, W.; Lan, Y.; Hu, Z.; Lan, Y.; Lee, R. K.-W.; and Lim, E.-P. 2023. Plan-and-Solve Prompt- ing: Improving Zero-Shot Chain-of-Thought Reason- ing by Large Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computa- tional Linguistics, ACL â | 2308.09687#78 | 2308.09687#80 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#80 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | 23, 2609â 2634. Association for Computational Linguistics. [67] Wang, X.; Wei, J.; Schuurmans, D.; Le, Q. V.; Chi, E. H.; Narang, S.; Chowdhery, A.; and Zhou, D. 2023. Self-Consistency Improves Chain of Thought Rea- In Proceedings of the soning in Language Models. Eleventh International Conference on Learning Rep- resentations, ICLR â | 2308.09687#79 | 2308.09687#81 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#81 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | 23. [68] Wang, Z.; Cai, S.; Liu, A.; Ma, X.; and Liang, Y. 2023. Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open- World Multi-Task Agents. arXiv:2302.01560. [69] Wang, Z.; Zhang, G.; Yang, K.; Shi, N.; Zhou, W.; Hao, S.; Xiong, G.; Li, Y.; Sim, M. Y.; Chen, X.; Zhu, Q.; Yang, Z.; Nik, A.; Liu, Q.; Lin, C.; Wang, S.; Liu, R.; Chen, W.; Xu, K.; Liu, D.; Guo, Y.; and Fu, J. 2023. Interactive Natural Language Processing. arXiv:2305.13246. [70] Wang, Z. J.; Choi, D.; Xu, S.; and Yang, D. 2021. Putting Humans in the Natural Language Processing In Proceedings of the First Work- Loop: A Survey. shop on Bridging Human-Computer Interaction and Natural Language Processing, 47â 52. Association for Computational Linguistics. [71] Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Chi, E.; Le, Q.; and Zhou, D. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Mod- els. arXiv:2201.11903. [72] Wu, T.; Jiang, E.; Donsbach, A.; Gray, J.; Molina, A.; Terry, M.; and Cai, C. J. 2022. | 2308.09687#80 | 2308.09687#82 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#82 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | PromptChainer: Chain- ing Large Language Model Prompts through Visual In Extended Abstracts of the Confer- Programming. ence on Human Factors in Computing Systems, CHI EA â 22. ACM. [73] Wu, T.; Terry, M.; and Cai, C. J. 2022. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In Pro- ceedings of the Conference on Human Factors in Com- puting Systems, CHI â | 2308.09687#81 | 2308.09687#83 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#83 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | 22. ACM. [74] Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Yu, P. S. 2021. A Comprehensive Survey on Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1): 4â 24. [75] Xie, Y.; Kawaguchi, K.; Zhao, Y.; Zhao, X.; Kan, M.- Y.; He, J.; and Xie, Q. 2023. Decomposition En- hances Reasoning via Self-Evaluation Guided Decod- ing. arXiv:2305.00633. [76] Yang, S.; Nachum, O.; Du, Y.; Wei, J.; Abbeel, P.; and Schuurmans, D. 2023. Foundation Models for Deci- sion Making: Problems, Methods, and Opportunities. arXiv:2303.04129. [77] Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023. | 2308.09687#82 | 2308.09687#84 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#84 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | Tree of Thoughts: Deliberate Problem Solving with Large Language Models. arXiv:2305.10601. I.; Narasimhan, K. R.; and Cao, Y. 2023. ReAct: Syner- gizing Reasoning and Acting in Language Models. In Proceedings of the Eleventh International Conference on Learning Representations, ICLR â 23. [79] Yao, Y.; Li, Z.; and Zhao, H. 2023. Beyond Chain- of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models. arXiv:2305.16582. [80] Zelikman, E.; Wu, Y.; Mu, J.; and Goodman, N. 2022. | 2308.09687#83 | 2308.09687#85 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#85 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | STaR: Bootstrapping Reasoning With Reasoning. In Advances in Neural Information Processing Systems (NeurIPS â 22), volume 35, 15476â 15488. Curran As- sociates. [81] Zhang, S.; Chen, Z.; Shen, Y.; Ding, M.; Tenenbaum, J. B.; and Gan, C. 2023. Planning with Large Lan- In Proceedings guage Models for Code Generation. of the Eleventh International Conference on Learning Representations, ICLR â 23. [82] Zhang, Z.; Cui, P.; and Zhu, W. 2022. | 2308.09687#84 | 2308.09687#86 | 2308.09687 | [
"2302.13971"
]
|
2308.09687#86 | Graph of Thoughts: Solving Elaborate Problems with Large Language Models | Deep Learning on Graphs: A Survey. IEEE Transactions on Knowl- edge and Data Engineering, 34(1): 249â 270. [83] Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; and Sun, M. 2020. Graph neural networks: A review of methods and applications. AI Open, 1: 57â 81. [84] Zhou, Y.; Muresanu, A. I.; Han, Z.; Paster, K.; Pitis, S.; Chan, H.; and Ba, J. 2022. Large Lan- guage Models Are Human-Level Prompt Engineers. arXiv:2211.01910. | 2308.09687#85 | 2308.09687#87 | 2308.09687 | [
"2302.13971"
]
|
# A Positive Score Evaluation

The following figures plot the same data as Figures 5 and 6 respectively, however they use the "positive score" described in Sections 5.1 and 5.2.

[Plots omitted: number of correct elements (higher is better) and total cost in $ (lower is better) for IO, CoT, ToT, ToT2, and GoT, with panels for 64 and 128 elements.]

Figure 9: Accuracy and cost in sorting tasks with ChatGPT-3.5. L and k indicate the structure of ToT (see Sections 3.2 and 6).

[Plots omitted: the corresponding accuracy and cost panels for the intersection task, including a 128-element panel.]

Figure 10: Accuracy and cost in set intersection with ChatGPT-3.5. L and k indicate the structure of ToT (see Sections 3.2 and 6).

# B Example Prompts - Sorting

We present the prompts only for the sorting of 32-element lists, as those for 64-element and 128-element lists are identical, except for the split prompt, where the number of elements in the one-shot example matches the problem size.

For sorting, we employ three distinct types of operations that interact with the LLM, each with its corresponding prompts. First, there is the Generate operation, utilizing the sort prompt to guide the LLM in sorting a provided list of values, and the split prompt to direct the LLM to split a specified list into a designated number of sublists. Next, the Improve operation employs the improve prompt to instruct the LLM to refine a sorted list if it detects mistakes. Finally, the Aggregate operation leverages the merge prompt to guide the LLM in merging two pre-sorted lists into a single sorted list.
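To make these three operation types concrete, the sketch below shows how they could be issued as plain prompt-completion calls. This is a minimal illustration rather than the framework's actual implementation: the `call_llm` helper is a placeholder, and the prompt strings are abbreviated versions of the full stubs given in Table 3.

```python
# Minimal sketch of the three sorting operations: Generate (sort prompt),
# Improve (improve prompt) and Aggregate (merge prompt).
# `call_llm` is a hypothetical helper that sends a prompt to the model and
# returns the raw text completion.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def generate_sort(values: list[int]) -> str:
    # Generate operation: ask the model to sort one (sub)list.
    prompt = ("<Instruction> Sort the following list of numbers in ascending order. "
              "Output only the sorted list of numbers, no additional text. </Instruction>\n"
              f"Input: {values}")
    return call_llm(prompt)

def improve_sort(values: list[int], attempt: str) -> str:
    # Improve operation: ask the model to fix a possibly incorrect sort.
    prompt = ("<Instruction> The sorted variant is not correct. "
              "Fix the sorted variant so that it is correct. </Instruction>\n"
              f"Input: {values}\nIncorrectly Sorted: {attempt}")
    return call_llm(prompt)

def aggregate_sorted(left: str, right: str) -> str:
    # Aggregate operation: ask the model to merge two pre-sorted sublists.
    prompt = ("<Instruction> Merge the following 2 sorted lists into one sorted list. "
              "Only output the final merged list. </Instruction>\n"
              f"1. {left}\n2. {right}\nMerged list:")
    return call_llm(prompt)
```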
First, we present the prompt stubs (Table 3), serving as templates to dynamically generate appropriate prompts at runtime. For clarity, we display their corresponding few-shot examples separately in Table 4. Following this, we outline the LLM interactions throughout the process of solving the sorting use case (Tables 5-9).

Table 3: Prompt stubs for the sorting tasks; parameters in single curly brackets will be substituted at runtime.

sort prompt:
<Instruction> Sort the following list of numbers in ascending order. Output only the sorted list of numbers, no additional text. </Instruction>
<Examples> See Table 4 </Examples>
Input: {input list}

split prompt (32 elements):
<Instruction> Split the following list of 32 numbers into 2 lists of 16 numbers each, the first list should contain the first 16 numbers and the second list the second 16 numbers. Only output the final 2 lists in the following format without any additional text or thoughts!:
{{
  "List 1": [3, 4, 3, 5, 7, 8, 1, ...],
  "List 2": [2, 9, 2, 4, 7, 1, 5, ...]
}}
</Instruction>
<Examples> See Table 4 </Examples>
Input: {input list}

improve prompt:
<Instruction> The following two lists represent an unsorted list of numbers and a sorted variant of that list. The sorted variant is not correct. Fix the sorted variant so that it is correct. Make sure that the output list is sorted in ascending order, has the same number of elements as the input list ({length}), and contains the same elements as the input list. </Instruction>
<Approach>
To fix the incorrectly sorted list follow these steps:
1. For each number from 0 to 9, compare the frequency of that number in the incorrectly sorted list to the frequency of that number in the input list.
2. Iterate through the incorrectly sorted list and add or remove numbers as needed to make the frequency of each number in the incorrectly sorted list match the frequency of that number in the input list.
</Approach>
<Examples> See Table 4 </Examples>
Input: {input list}
Incorrectly Sorted: {sorted list}
merge prompt:
<Instruction> Merge the following 2 sorted lists of length {length} each, into one sorted list of length {length combined} using a merge sort style approach. Only output the final merged list without any additional text or thoughts!: </Instruction>
<Approach>
To merge the two lists in a merge-sort style approach, follow these steps:
1. Compare the first element of both lists.
2. Append the smaller element to the merged list and move to the next element in the list from which the smaller element came.
3. Repeat steps 1 and 2 until one of the lists is empty.
4. Append the remaining elements of the non-empty list to the merged list.
</Approach>
Merge the following two lists into one sorted list:
1. {input list1}
2. {input list2}
Merged list:
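The <Approach> block of the merge prompt is a textual description of the standard two-pointer merge step from merge sort. For reference, a direct transcription of those four steps into Python could look as follows; this sketch is only illustrative and is not part of any prompt sent to the LLM.

```python
def merge_sorted(a: list[int], b: list[int]) -> list[int]:
    merged, i, j = [], 0, 0
    # Steps 1-3: repeatedly compare the front elements and take the smaller one.
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    # Step 4: append the remainder of the non-empty list.
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged

# Example with the two sorted 16-element sublists that appear later in Table 8:
left  = [0, 0, 1, 1, 1, 1, 1, 1, 2, 3, 3, 4, 5, 7, 8, 9]
right = [0, 1, 2, 2, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 8, 9]
print(merge_sorted(left, right))
```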
Table 4: Few-shot examples for each prompt used for the sorting tasks; some lists are truncated for brevity.

sort prompt:
<Examples>
Input: [5, 1, 0, 1, 2, 0, 4, 8, 1, 9, 5, 1, 3, 3, 9, 7]
Output: [0, 0, 1, 1, 1, 1, 2, 3, 3, 4, 5, 5, 7, 8, 9, 9]
Input: [3, 7, 0, 2, 8, 1, 2, 2, 2, 4, 7, 8, 5, 5, 3, 9, 4, 3, . . . (Omitted 14/32 numbers)]
Output: [0, 0, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, . . . (Omitted 14/32 numbers)]
Input: [4, 4, 9, 7, 9, 7, 0, 0, 4, 9, 1, 7, 9, 5, 8, 7, 5, 6, . . . (Omitted 46/64 numbers)]
Output: [0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, . . . (Omitted 46/64 numbers)]
</Examples>

split prompt (32 elements):
<Examples>
Input: [9, 6, 7, 7, 2, 0, 2, 2, 3, 5, 0, 9, 2, 2, 4, 4, 5, 2, . . . (Omitted 14/32 numbers)]
Output:
{{
  "List 1": [9, 6, 7, 7, 2, 0, 2, 2, 3, 5, 0, 9, 2, 2, 4, 4],
  "List 2": [5, 2, 5, 1, 2, 8, 3, 8, 3, 9, 6, 0, 4, 2, 2, 3]
}}
</Examples>

improve prompt:
<Examples>
Input: [3, 7, 0, 2, 8, 1, 2, 2, 2, 4, 7, 8, 5, 5, 3, 9]
Incorrectly Sorted: [0, 0, 0, 0, 0, 1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 7, 7, 8, 8, 9, 9, 9, 9]
Reason: The incorrectly sorted list contains four extra 0s, two extra 4s and three extra 9s and is missing two 2s.
Output: [0, 1, 2, 2, 2, 2, 3, 3, 4, 5, 5, 7, 7, 8, 8, 9]
Input: [6, 4, 5, 7, 5, 6, 9, 7, 6, 9, 4, 6, 9, 8, 1, 9, 2, 4, . . . (Omitted 14/32 numbers)]
Incorrectly Sorted: [0, 1, 1, 2, 2, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, . . . (Omitted 14/32 numbers)]
Reason: The incorrectly sorted list contains two extra 4s and is missing two 6s and one 9.
Output: [0, 1, 1, 2, 2, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, . . . (Omitted 14/32 numbers)]
Input: [4, 4, 9, 7, 9, 7, 0, 0, 4, 9, 1, 7, 9, 5, 8, 7, 5, 6, . . . (Omitted 46/64 numbers)]
Incorrectly Sorted: [0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, . . . (Omitted 46/64 numbers)]
Reason: The incorrectly sorted list contains one extra 8 and is missing two 2s, one 3, three 4s, two 5s, one 6, six 7s and one 9.
Output: [0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, . . . (Omitted 46/64 numbers)]
</Examples>

merge prompt: -

Table 5: Sorting of a 32 element list: Execution plan (GoO)

GoO:
1. Split the input list into two sub-lists of equal size (split prompt)
2. For each sub-list: Sort the sub-list (sort prompt) five times; score each sort attempt; keep the best
3. Merge the sorted sub-lists into one fully sorted list (merge prompt) 10 times; score each merge attempt; keep the best
4. Fix any potential mistakes in the sorted list (improve prompt) 10 times; score each improvement attempt; keep the best
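Viewed as code, this execution plan is a short pipeline of branch-and-select steps: every LLM operation is sampled several times, each attempt is scored, and only the best thought is kept. The sketch below is one possible rendering of the plan; `llm_sort`, `llm_merge`, `llm_improve` and `score` are placeholders (for instance, the operations sketched after the introduction of Appendix B together with an error-counting function), not the framework's actual API.

```python
def best_of(attempts, score):
    # Score every attempt and keep only the highest-scoring thought.
    return max(attempts, key=score)

def got_sort_32(values, llm_sort, llm_merge, llm_improve, score):
    # 1. Split the input list into two sub-lists of equal size.
    left, right = values[:16], values[16:]
    # 2. Sort each sub-list five times; keep the best attempt.
    left_best  = best_of([llm_sort(left)  for _ in range(5)], score)
    right_best = best_of([llm_sort(right) for _ in range(5)], score)
    # 3. Merge the sorted sub-lists 10 times; keep the best attempt.
    merged = best_of([llm_merge(left_best, right_best) for _ in range(10)], score)
    # 4. Try to fix remaining mistakes 10 times; keep the best attempt.
    return best_of([llm_improve(values, merged) for _ in range(10)], score)
```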
Table 6: Sorting of a 32 element list: Step 1 (Prompt/Response)

Step 1 – Prompt:
<Instruction> Split the following list of 32 numbers into 2 lists of 16 numbers each, the first list should contain the first 16 numbers and the second list the second 16 numbers. Only output the final 2 lists in the following format without any additional text or thoughts!:
{{
  "List 1": [3, 4, 3, 5, 7, 8, 1, ...],
  "List 2": [2, 9, 2, 4, 7, 1, 5, ...]
}}
</Instruction>
<Examples> See Table 4 </Examples>
Input: [8, 7, 1, 1, 1, 1, 3, 3, 0, 9, 4, 1, 0, 2, 5, 1, 0, 5, 6, 7, 1, 4, 5, 9, 4, 6, 2, 5, 8, 6, 2, 6]

Step 1 – Response (Fully Correct):
{{
  "List 1": [8, 7, 1, 1, 1, 1, 3, 3, 0, 9, 4, 1, 0, 2, 5, 1],
  "List 2": [0, 5, 6, 7, 1, 4, 5, 9, 4, 6, 2, 5, 8, 6, 2, 6]
}}
Table 7: Sorting of a 32 element list: Step 2 (Prompts/Responses)

Step 2a – Prompt:
<Instruction> Sort the following list of numbers in ascending order. Output only the sorted list of numbers, no additional text. </Instruction>
<Examples> See Table 4 </Examples>
Input: [8, 7, 1, 1, 1, 1, 3, 3, 0, 9, 4, 1, 0, 2, 5, 1]

Step 2a – 5 Responses:
1. [0, 0, 1, 1, 1, 1, 1, 1, 2, 3, 3, 4, 5, 7, 8, 9] (Fully Correct)
2. [0, 0, 1, 1, 1, 1, 1, 2, 3, 3, 4, 5, 7, 8, 9] (1 Error - Missing one 1)
3. [0, 0, 1, 1, 1, 1, 1, 2, 3, 3, 4, 5, 7, 8, 9] (1 Error - Missing one 1)
4. [0, 0, 1, 1, 1, 1, 1, 2, 3, 3, 4, 5, 7, 8, 9] (1 Error - Missing one 1)
5. [0, 0, 1, 1, 1, 1, 1, 2, 3, 3, 4, 5, 7, 8, 9] (1 Error - Missing one 1)

Step 2b – Prompt:
<Instruction> Sort the following list of numbers in ascending order. Output only the sorted list of numbers, no additional text. </Instruction>
<Examples> See Table 4 </Examples>
Input: [0, 5, 6, 7, 1, 4, 5, 9, 4, 6, 2, 5, 8, 6, 2, 6]

Step 2b – 5 Responses:
1. [0, 1, 2, 2, 4, 4, 5, 5, 5, 6, 6, 6, 7, 8, 9] (1 Error - Missing one 6)
2. [0, 1, 2, 2, 4, 4, 5, 5, 5, 6, 6, 6, 7, 8, 9] (1 Error - Missing one 6)
3. [0, 1, 2, 2, 4, 4, 5, 5, 5, 6, 6, 6, 7, 8, 9] (1 Error - Missing one 6)
4. [0, 1, 2, 2, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 8, 9] (Fully Correct)
5. [0, 1, 2, 2, 4, 4, 5, 5, 5, 6, 6, 6, 7, 8, 9] (1 Error - Missing one 6)
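The annotations attached to each response above ("1 Error - Missing one 1", and so on) count how far an attempt deviates from the original input. One plausible way to reproduce such counts, shown here purely as an assumption for illustration, is to compare digit frequencies between the input and the response:

```python
from collections import Counter

def count_errors(input_list, response):
    # Sum, over the digits 0-9, how many occurrences are missing or extra
    # in the response compared with the original (unsorted) input.
    need, have = Counter(input_list), Counter(response)
    return sum(abs(need[d] - have[d]) for d in range(10))

original = [8, 7, 1, 1, 1, 1, 3, 3, 0, 9, 4, 1, 0, 2, 5, 1]
attempt  = [0, 0, 1, 1, 1, 1, 1, 2, 3, 3, 4, 5, 7, 8, 9]   # response 2 of Step 2a
print(count_errors(original, attempt))  # 1 -> "1 Error - Missing one 1"
```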
Table 8: Sorting of a 32 element list: Step 3 (Prompt/Responses)

Step 3 – Prompt:
<Instruction> Merge the following 2 sorted lists of length 16 each, into one sorted list of length 32 using a merge sort style approach. Only output the final merged list without any additional text or thoughts!: </Instruction>
<Approach>
To merge the two lists in a merge-sort style approach, follow these steps:
1. Compare the first element of both lists.
2. Append the smaller element to the merged list and move to the next element in the list from which the smaller element came.
3. Repeat steps 1 and 2 until one of the lists is empty.
4. Append the remaining elements of the non-empty list to the merged list.
</Approach>
Merge the following two lists into one sorted list:
1. [0, 0, 1, 1, 1, 1, 1, 1, 2, 3, 3, 4, 5, 7, 8, 9]
2. [0, 1, 2, 2, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 8, 9]
Merged list:
Step 3 – 10 Responses:
1. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (2 Errors - Missing one 1 and one 5)
2. [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (2 Errors - Missing two 1s)
3. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (2 Errors - Missing one 1 and got one extra 6)
4. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 9] (3 Errors - Missing one 1, one 9 and got one extra 6)
5. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (2 Errors - Missing one 1 and got one extra 6)
6. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (2 Errors - Missing one 1 and one 5)
7. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (2 Errors - Missing one 1 and got one extra 6)
8. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (1 Error - Missing one 1)
9. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (1 Error - Missing one 1)
10. [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (2 Errors - Missing one 1 and one 5)

Table 9: Sorting of a 32 element list: Step 4 (Prompt/Responses)

Step 4 – Prompt:
<Instruction> The following two lists represent an unsorted list of numbers and a sorted variant of that list. The sorted variant is not correct. Fix the sorted variant so that it is correct. Make sure that the output list is sorted in ascending order, has the same number of elements as the input list (32), and contains the same elements as the input list. </Instruction>
<Approach>
To fix the incorrectly sorted list follow these steps:
1. For each number from 0 to 9, compare the frequency of that number in the incorrectly sorted list to the frequency of that number in the input list.
2. Iterate through the incorrectly sorted list and add or remove numbers as needed to make the frequency of each number in the incorrectly sorted list match the frequency of that number in the input list.
</Approach>
<Examples> See Table 4 </Examples>
Input: [8, 7, 1, 1, 1, 1, 3, 3, 0, 9, 4, 1, 0, 2, 5, 1, 0, 5, 6, 7, 1, 4, 5, 9, 4, 6, 2, 5, 8, 6, 2, 6]
Incorrectly Sorted: [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9]

Step 4 – 10 Responses:
one 1. Reason: 5. 9. The 5, Output: 6, 6, two 4s and one 5 Note: The output list has the same number of elements as the input list, is sorted in ascending order, and contains the same elements as the input list.
2. Output: [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (1 Error - One extra 6)
3. Output: [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9] (6 Errors - One extra 2, 5, 6, 7, 8 and 9)
4. Output: [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9, 9, 9] (3 Errors - Missing one 1 and has two extra 9s)
5. Output: [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 9] (10 Errors - Missing one 1, 2, has four extra 6s, one extra 7, 8 and two extra 9s)
6. Output: [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 9, 9, 9, 9] (8 Errors - Has one extra 2, 3, 6, two extra 7s, one extra 8 and two extra 9s)
7. Output: [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (2 Errors - Missing one 1 and has one extra 4) two 6s, and one 9.
8. Output: [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 8, 8, 9] (5 Errors - Missing one 1, 9, has one extra 3 and two extra 6s)
9. Reason: The incorrectly sorted list is missing three 1s, one 3, one 4, two 5s, one 6, one 7, one 8, and one 9.
Output: [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9] (Fully Correct)

Final Result (Correctly Sorted): [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9]
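As a closing illustration, the frequency-matching procedure described in the improve prompt's <Approach> (Tables 3 and 9) can also be executed deterministically. The sketch below is an assumption made for illustration, not code from the paper; applied to the Step 4 input above, it reports the single missing 1 and reproduces the final sorted list.

```python
from collections import Counter

def frequency_repair(input_list, sorted_attempt):
    # Step 1: for each digit 0-9, compare its frequency in the incorrectly
    # sorted list with its frequency in the input list.
    need, have = Counter(input_list), Counter(sorted_attempt)
    surplus = {d: have[d] - need[d] for d in range(10) if have[d] != need[d]}
    # Step 2: add or remove digits until the frequencies match; emitting the
    # digits in increasing order also keeps the result sorted.
    return surplus, [d for d in range(10) for _ in range(need[d])]

unsorted_input   = [8, 7, 1, 1, 1, 1, 3, 3, 0, 9, 4, 1, 0, 2, 5, 1,
                    0, 5, 6, 7, 1, 4, 5, 9, 4, 6, 2, 5, 8, 6, 2, 6]
incorrect_sorted = [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4,
                    4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9]
diff, fixed = frequency_repair(unsorted_input, incorrect_sorted)
print(diff)   # {1: -1}: one 1 is missing from the incorrectly sorted list
print(fixed)  # matches the Final Result (Correctly Sorted) above
```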