3) Non-finalization error: The LLM discovers the solution but continues its search without consolidating the finding. 4) Other errors: This umbrella term encompasses other mistakes, such as computational errors, that result in overlooking the solution or furnishing incorrect answers. To exclusively showcase AoT's search capabilities, we also present the AoT + Manual Resolution version. Here, once the LLM pinpoints a solution, its final articulation is manually processed, a strategy also employed by the ToT method. As evidenced in Table 2, a notable 7% of mistakes stem from non-algorithmic factors like non-finalization and expression missteps. In fact, with manual resolution, AoT attains a 78% success rate, surpassing ToT. This underlines the potential for refining our prompt, especially in areas concerning recognizing and expressing successful problem resolutions. Additionally, the token limitation underscores the appeal of expanding the generative context window, which may further bolster LLMs' recursive reasoning when engaged with algorithmic examples.

Error Type               Error
Out-of-token error       9%
Expression misstep       4%
Non-finalization error   3%
Others                   13%

Method                    Success
ToT                       69%
AoT                       71%
AoT + Manual Resolution   78%

Table 2: Game of 24: AoT error analysis.

Mini Crosswords

The 5 x 5 mini crossword is a compact word puzzle featuring a grid of 25 squares arranged in a 5-by-5 configuration.
Players are tasked with filling the grid based on provided clues for each word. Clues are given for words that run both across (horizontally) and down (vertically). Words intersect at certain letters, offering additional hints to complete the puzzle.

Task Setup. Adhering to the setup outlined in (Yao et al. 2023), we draw our prompts from games 136, 141, 146, 151, and 156 out of the 156 games available on goobix.com.
Our testing focuses on a set of 20 games, specifically games 1, 6, ..., 91, and 96.

Baselines. Mirroring our approach for the game of 24, we benchmark our method against established techniques: standard prompting, CoT, and ToT. For standard prompting, we provide both the crosswords and their respective solutions as in-context examples. CoT augments this by prompting the retrieval of words for each of the ten clues, equally split between horizontal and vertical orientations. We directly extract the success rates of ToT from their original publication for comparison.

AoT Setup. We divide the process into two steps, each involving a query. Initially, we task the LLM with suggesting five potential words for each row and column. We then pinpoint the starting word candidates that have the highest compatibility with other words within the crossword framework. This preliminary phase mirrors a "warm-up" sequence in algorithm initialization.
In the subsequent step, we exclusively leverage the LLM's algorithmic reasoning prowess, starting with the pre-selected word. The method involves cyclically choosing a likely option (specifically, a row or column) for insertion, generating candidate words, and assessing their compatibility with the words already on the board. If no match is found, the process shifts focus to another promising candidate. Otherwise, the word is added to the crossword, and the search continues. The cycle concludes either when the board is fully populated or no more suitable words can be found, which may be due to either incorrect existing words or the absence of matching words.
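For illustration, a minimal Python sketch of this fill-and-backtrack cycle. In AoT the loop is carried out by the LLM inside a single generation window rather than by code, and `propose_words`, which stands in for the LLM's candidate suggestions, is hypothetical.

```python
def fits(board, slot, word):
    """A candidate is compatible if it matches every letter already placed."""
    return len(word) == len(slot) and all(
        board[r][c] in (".", ch) for (r, c), ch in zip(slot, word)
    )

def place(board, slot, word):
    """Return a new board with `word` written into `slot` (list of (r, c))."""
    new = [row[:] for row in board]
    for (r, c), ch in zip(slot, word):
        new[r][c] = ch
    return new

def fill(board, slots, propose_words):
    """Cyclically pick a slot, try candidate words, backtrack on dead ends."""
    if not slots:
        return board                          # board fully populated
    slot, rest = slots[0], slots[1:]
    for word in propose_words(board, slot):   # stand-in for LLM generation
        if fits(board, slot, word):
            done = fill(place(board, slot, word), rest, propose_words)
            if done is not None:
                return done                   # word kept; search continued
    return None                               # no match: shift focus / stop
```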
Notably, this entire process unfolds within a single generation window. The algorithmic examples in our prompt (detailed in the Appendix) include three that achieve game completion and two that predominantly populate the crossword, filling 8 or 9 slots.

Results. Table 3 underscores AoT's proficiency in the mini crosswords task, showcasing a word success rate (a measure used in existing studies to represent the percentage of words correctly completed out of the total) that surpasses earlier methods reliant on various prompting techniques. However, it trails behind ToT. An important observation is the sheer volume of queries ToT employs, exceeding AoT's by over a factor of 100. One factor hindering AoT from surpassing ToT is that the backtracking capability inherent in the algorithmic example isn't fully activated; fully unlocking this capability would lead to a significant elongation in the generation phase.
In contrast, ToT has the advantage of leveraging external memory for its backtracking.

Method               Word Success   Avg. Queries
Standard Prompting   14%            1
CoT                  15.6%          1
ToT                  60%            >200
AoT (ours)           52%            2

Table 3: 5 x 5 mini crosswords: word success rates and the average number of LLM queries for each example.

Error Analysis. To understand the prevalent mistakes made by AoT, we've categorized the errors into four distinct categories. In our analysis for each game, we focus on the initial error the LLM produces while charting its reasoning path, given that an early error typically cascades into subsequent failures. 1) No preselections: The LLM fails to generate compatible words essential for the warm-start phase. Given a correctly preselected word, the second phase of recursive reasoning can exhibit errors including: 2) Expression misstep: The LLM mistakenly believes it has exhausted all choices and jumps to an answer prematurely. 3) Incorrect pattern extraction: The LLM wrongly extracts a pattern based on the current board layout. 4) Erroneous word placement: Despite recognizing the correct pattern, the LLM selects a mismatched word or misses better-fitting alternatives. The crossword's difficulty arises from outdated terms, esoteric references, and typographical mishaps. Predominantly, the errors observed are due to misguided word placements, followed by pattern misinterpretations.
Also, the LLM seems challenged in aligning letters at precise indices to create word structures, an obstacle circumvented by an external mechanism in the ToT framework.

Error Type                     Error
No preselections               15.8%
Expression misstep             5.3%
Incorrect pattern extraction   26.3%
Erroneous word placement       52.6%

Table 4: Breakdown of errors in 5 x 5 mini crosswords with AoT. Numbers indicate the relative percentage of each error type among all errors.

Discussion

In this section, we delve into crucial aspects to consider when crafting prompts for AoT, using the game of 24 as our primary case study.

Can AoT surpass the DFS it's patterned after? A core query of ours is to ascertain whether the LLM can not only mirror but also outdo the efficiency of the algorithm introduced in-context.
As evidenced in Fig. 5, AoT systematically navigates fewer nodes than its DFS counterpart. While DFS employs a uniform strategy when choosing the subsequent subtree to investigate, AoT's LLM integrates its inherent heuristic. This amplification over the base algorithm exemplifies the advantages of the LLM's recursive reasoning capability.

Figure 5: Histogram showing the number of visited nodes for AoT and DFS in the Game of 24.
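For reference, a compact sketch of the kind of DFS baseline this comparison refers to, under our own assumptions about its bookkeeping (the paper does not list this code): states are tuples of the numbers still in play, and every state expanded counts as a visited node.

```python
from itertools import combinations

def successors(state):
    """Expand a state by combining any two numbers with one arithmetic op."""
    nums = list(state)
    for i, j in combinations(range(len(nums)), 2):
        a, b = nums[i], nums[j]
        rest = [n for k, n in enumerate(nums) if k not in (i, j)]
        candidates = {a + b, a * b, abs(a - b)}   # keep differences non-negative
        if b != 0:
            candidates.add(a / b)
        if a != 0:
            candidates.add(b / a)
        for r in candidates:
            yield tuple(sorted(rest + [r]))

def dfs_24(state, counter=None):
    """Plain uniform DFS; returns (solved?, number of visited nodes)."""
    counter = counter if counter is not None else [0]
    counter[0] += 1                               # this state counts as visited
    if len(state) == 1:
        return abs(state[0] - 24) < 1e-6, counter[0]
    for nxt in successors(state):
        solved, _ = dfs_24(nxt, counter)
        if solved:
            return True, counter[0]
    return False, counter[0]

print(dfs_24((8, 6, 4, 4)))                       # (True, <visited node count>)
```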
How does algorithm selection influence AoT's efficacy? To explore the impact of algorithm choice on AoT's performance, we implemented both BFS and random search within the AoT framework. Our findings, presented in Table 5, reveal that all three AoT variations outperform the single-query CoT. This outcome was anticipated since AoT, irrespective of the algorithm, undertakes a search and revisits potential mistakes, either by random retry in the random-search variant or through backtracking in the DFS and BFS configurations. Notably, the structured search versions, AoT (DFS) and AoT (BFS), displayed better efficiency than AoT (Random), underscoring the advantage of algorithmic insights in solution discovery.
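To make the design point concrete, a small sketch (the names are ours) showing that the three templates correspond to one search skeleton in which only the frontier discipline changes:

```python
import random
from collections import deque

def generic_search(start, expand, is_goal, strategy="dfs"):
    """One skeleton, three behaviors: the strategy only changes which
    frontier entry is expanded next."""
    frontier = deque([start])
    while frontier:
        if strategy == "dfs":
            state = frontier.pop()           # LIFO: dive deep, backtrack
        elif strategy == "bfs":
            state = frontier.popleft()       # FIFO: expand level by level
        else:                                # random retry
            i = random.randrange(len(frontier))
            state = frontier[i]
            del frontier[i]
        if is_goal(state):
            return state
        frontier.extend(expand(state))
    return None
```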
However, AoT (BFS) lagged behind AoT (DFS). Closer inspection of the errors made by AoT (BFS) revealed that the LLM faced greater challenges in identifying optimal operations than its DFS counterpart.

How does the search step count within the algorithmic example modulate AoT's behavior? We begin with the standard AoT prompt and modify the subtree explorations. In AoT (Short), each in-context example uses one or two steps to reach a solution, while AoT (Long) incorporates three to five extra subtree explorations.
The impact on total search steps is illustrated in Fig. 6. Our observations highlight longer generations for AoT (Long) and shorter ones for AoT (Short) relative to the original AoT. This suggests that the search step count introduces an implicit bias on the LLM's search velocity. Notably, even when navigating incorrect steps, it's essential to emphasize the exploration of promising directions.

Method           Success   Avg. Queries
CoT              4%        1
CoT-SC (k=100)   9%        100
ToT              69%       109.1
AoT (DFS)        71%       1
AoT (BFS)        48%       1
AoT (Random)     20%       1

Table 5: Comparative success rates and average LLM query counts for AoT variations templated by distinct algorithms.
Figure 6: Comparison of AoT with shorter and longer in-context examples: cumulative number of games versus the number of visited nodes.

Limitations. While AoT substantially cuts down on the number of queries relative to ToT, its resource demands exceed those of standard prompting and CoT, a consequence of its extensive exploration of ideas via token generation. Crafting token-efficient algorithmic examples is one avenue, but there's also potential in judiciously tapping into or unlocking the LLM's "tunnel-vision". Our research primarily spotlighted certain algorithms, with a keen focus on tree-search tasks. It's pertinent to highlight that we conducted our tests exclusively with GPT-4. Though more costly than other LLMs, GPT-4's advanced capabilities appear pivotal for AoT's optimal functioning; models of lesser caliber might not yield comparable performance boosts from AoT.

Conclusion

This paper presents the Algorithm of Thoughts, a pioneering prompting strategy to navigate reasoning pathways in LLMs using minimal queries.
Our findings reveal that this method not only substantially surpasses prior single-query techniques but also rivals external tree-search implementations. Such an approach augments the potential to streamline idea discovery in LLMs, balancing both cost and computational demands. Future work includes designing token-efficient algorithmic examples, developing adaptive mechanisms for "tunnel-vision" activation to expedite the search, and deepening the understanding of this fresh mode of in-context learning from theoretical angles.
References

Aminabadi, R. Y.; Rajbhandari, S.; Awan, A. A.; Li, C.; Li, D.; Zheng, E.; Ruwase, O.; Smith, S.; Zhang, M.; Rasley, J.; et al. 2022. DeepSpeed-Inference: Enabling efficient inference of transformer models at unprecedented scale. In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, 1–15. IEEE.

Austin, J.; Odena, A.; Nye, M.; Bosma, M.; Michalewski, H.; Dohan, D.; Jiang, E.; Cai, C.; Terry, M.; Le, Q.; et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.

Baddeley, A. 2003. Working memory: looking back and looking forward. Nature Reviews Neuroscience, 4(10): 829–839.

Bai, Y.; Kadavath, S.; Kundu, S.; Askell, A.; Kernion, J.; Jones, A.; Chen, A.; Goldie, A.; Mirhoseini, A.; McKinnon, C.; Chen, C.; Olsson, C.; Olah, C.; Hernandez, D.; Drain, D.; Ganguli, D.; Li, D.; Tran-Johnson, E.; Perez, E.; Kerr, J.; Mueller, J.; Ladish, J.; Landau, J.; Ndousse, K.; Lukosuite, K.; Lovitt, L.; Sellitto, M.; Elhage, N.; Schiefer, N.; Mercado, N.; DasSarma, N.; Lasenby, R.; Larson, R.; Ringer, S.; Johnston, S.; Kravec, S.; Showk, S. E.; Fort, S.; Lanham, T.; Telleen-Lawton, T.; Conerly, T.; Henighan, T.; Hume, T.; Bowman, S. R.; Hatfield-Dodds, Z.; Mann, B.; Amodei, D.; Joseph, N.; McCandlish, S.; Brown, T.; and Kaplan, J. 2022. Constitutional AI: Harmlessness from AI Feedback. arXiv:2212.08073 [cs].

Banerjee, S.; Bringsjord, S.; Giancola, M.; and Govindarajulu, N. S. 2022. Qualitative Mechanical Problem-Solving by Artificial Agents: Further Progress, Under Psychometric AI. In The International FLAIRS Conference Proceedings, volume 35.

Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33: 1877–1901.

Chen, L.; Zaharia, M.; and Zou, J. 2023. FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance. arXiv preprint arXiv:2305.05176.

Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H. P. d. O.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.

Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.

Dhar, P. 2020. The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8): 423–425.

Drozdov, A.; Schärli, N.; Akyürek, E.; Scales, N.; Song, X.; Chen, X.; Bousquet, O.; and Zhou, D. 2022. Compositional Semantic Parsing with Large Language Models.

Helie, S.; and Pizlo, Z. 2022. When is psychology research useful in artificial intelligence? A case for reducing computational complexity in problem solving. Topics in Cognitive Science, 14(4): 687–701.

Holyoak, K. J.; and Morrison, R. G. 2005. The Cambridge Handbook of Thinking and Reasoning. Cambridge University Press.

Huang, J.; and Chang, K. C.-C. 2022. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403.

Kadavath, S.; Conerly, T.; Askell, A.; Henighan, T.; Drain, D.; Perez, E.; Schiefer, N.; Hatfield-Dodds, Z.; DasSarma, N.; Tran-Johnson, E.; et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221.

Kahneman, D. 2011. Thinking, Fast and Slow. Macmillan.

Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa, Y. 2022. Large Language Models are Zero-Shot Reasoners. Advances in Neural Information Processing Systems, 35: 22199–22213.

Lanham, T.; Chen, A.; Radhakrishnan, A.; Steiner, B.; Denison, C.; Hernandez, D.; Li, D.; Durmus, E.; Hubinger, E.; Kernion, J.; et al. 2023. Measuring Faithfulness in Chain-of-Thought Reasoning. arXiv preprint arXiv:2307.13702.

Liang, P.; Bommasani, R.; Lee, T.; Tsipras, D.; Soylu, D.; Yasunaga, M.; Zhang, Y.; Narayanan, D.; Wu, Y.; Kumar, A.; et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.

Libby, M. E.; Weiss, J. S.; Bancroft, S.; and Ahearn, W. H. 2008. A comparison of most-to-least and least-to-most prompting on the acquisition of solitary play skills. Behavior Analysis in Practice, 1: 37–43.

Liu, Y.; Han, T.; Ma, S.; Zhang, J.; Yang, Y.; Tian, J.; He, H.; Li, A.; He, M.; Liu, Z.; et al. 2023. Summary of ChatGPT/GPT-4 research and perspective towards the future of large language models. arXiv preprint arXiv:2304.01852.

Long, J. 2023. Large Language Model Guided Tree-of-Thought. arXiv preprint arXiv:2305.08291.

Lyu, Q.; Havaldar, S.; Stein, A.; Zhang, L.; Rao, D.; Wong, E.; Apidianaki, M.; and Callison-Burch, C. 2023. Faithful Chain-of-Thought Reasoning. arXiv:2301.13379 [cs].

Mialon, G.; Dessì, R.; Lomeli, M.; Nalmpantis, C.; Pasunuru, R.; Raileanu, R.; Rozière, B.; Schick, T.; Dwivedi-Yu, J.; Celikyilmaz, A.; et al. 2023. Augmented language models: a survey. arXiv preprint arXiv:2302.07842.

Monsell, S. 2003. Task switching. Trends in Cognitive Sciences, 7(3): 134–140.

Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744.

Robinson, J.; and Wingate, D. 2022. Leveraging Large Language Models for Multiple Choice Question Answering.

Shao, Z.; Gong, Y.; Shen, Y.; Huang, M.; Duan, N.; and Chen, W. 2023. Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models.

Sloman, S. A. 1996. The empirical case for two systems of reasoning. Psychological Bulletin, 119(1): 3.

Srivastava, A.; Rastogi, A.; Rao, A.; Shoeb, A. A. M.; Abid, A.; Fisch, A.; Brown, A. R.; Santoro, A.; Gupta, A.; Garriga-Alonso, A.; et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.

Suzgun, M.; Scales, N.; Schärli, N.; Gehrmann, S.; Tay, Y.; Chung, H. W.; Chowdhery, A.; Le, Q. V.; Chi, E. H.; Zhou, D.; and Wei, J. 2022. Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them. arXiv:2210.09261 [cs].

Thoppilan, R.; De Freitas, D.; Hall, J.; Shazeer, N.; Kulshreshtha, A.; Cheng, H.-T.; Jin, A.; Bos, T.; Baker, L.; Du, Y.; et al. 2022. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239.

Turpin, M.; Michael, J.; Perez, E.; and Bowman, S. R. 2023. Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting. arXiv preprint arXiv:2305.04388.

Wang, X.; Wei, J.; Schuurmans, D.; Le, Q. V.; Chi, E. H.; Narang, S.; Chowdhery, A.; and Zhou, D. 2022. Self-Consistency Improves Chain of Thought Reasoning in Language Models.

Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; Chi, E. H.; Hashimoto, T.; Vinyals, O.; Liang, P.; Dean, J.; and Fedus, W. 2022a. Emergent Abilities of Large Language Models. arXiv:2206.07682 [cs].

Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Ichter, B.; Xia, F.; Chi, E.; Le, Q. V.; and Zhou, D. 2022b. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Advances in Neural Information Processing Systems, 35: 24824–24837.

Wu, C.-J.; Raghavendra, R.; Gupta, U.; Acun, B.; Ardalani, N.; Maeng, K.; Chang, G.; Aga, F.; Huang, J.; Bai, C.; et al. 2022. Sustainable AI: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4: 795–813.

Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. arXiv:2305.10601 [cs].

Zelikman, E.; Wu, Y.; Mu, J.; and Goodman, N. 2022. STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35: 15476–15488.

Zhang, Z.; Zhang, A.; Li, M.; and Smola, A. 2022. Automatic Chain of Thought Prompting in Large Language Models.

Zhou, D.; Schärli, N.; Hou, L.; Wei, J.; Scales, N.; Wang, X.; Schuurmans, D.; Cui, C.; Bousquet, O.; Le, Q. V.; and Chi, E. H. 2022. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.
# Game of 24 - Additional Details

In order to avoid confusion in our analysis of AoT in the game of 24, we give additional details on the terminology we use as well as its direct implications for the performance figures. An illustration of these terms is given in Fig. 7.

Figure 7: An illustration of terminologies we use for the game of 24 (input: 8 6 4 4). The yellow nodes represent the first operations and the states they lead to; the green node represents the node where we find the solution; all other nodes are represented by pink.
First operations / First iterations. This refers to the choice of the first two numbers in the game of 24 and whether we add, subtract, multiply, or divide them.

Subtree Exploration. This denotes searching all or most of the nodes coming from the same state, typically states with fewer than four numbers left.
Number of nodes visited. This is the number of states that the method has visited in the game of 24. Each state is the set of numbers we are left with after our operations on the numbers. For example, after the first operation we might be left with the numbers "8 3 1". This set of numbers represents a state, as does the state "8 3" that we would be left with after a further operation such as "8 * 1 = 8".
# Creative Writing

We use the creative writing task, also used by (Yao et al. 2023), where the LLM is provided with four arbitrary sentences. The objective is to craft a cohesive narrative divided into four paragraphs, with each paragraph culminating in one of the given sentences. This exercise not only fosters creativity but also emphasizes strategic deliberation.

# Task Setup

Sentences are randomly sourced from randomwordgenerator.com, resulting in 100 distinct sets of inputs. Given the absence of predetermined correct answers, the primary focus lies in evaluating the coherence of the responses.
We have noted that GPT-4 consistently aligns with these input guidelines. Evaluation is centered around assessing passage coherence using a GPT-4 zero-shot prompt, where each output is rated on a scale of 1 to 10. Each task response undergoes five such evaluations, with their scores subsequently averaged.
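A minimal sketch of how such a scoring pass could be scripted against the OpenAI chat API; the rating instruction below is our paraphrase, not the paper's exact evaluation prompt.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def coherence_score(passage: str, n_evals: int = 5) -> float:
    """Rate a passage 1-10 with GPT-4, n_evals times, and average the scores."""
    prompt = (
        "Rate the coherence of the following passage on a scale of 1 to 10. "
        "Reply with a single number.\n\n" + passage
    )
    scores = []
    for _ in range(n_evals):
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        # Assumes the model replies with a bare number, as instructed.
        scores.append(float(reply.choices[0].message.content.strip()))
    return sum(scores) / len(scores)
```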
# Baselines

For this task, both standard and CoT prompts are employed without preliminary training. While the standard prompt directly guides the LLM to fashion a cohesive narrative based on stipulated parameters, the CoT prompt obliges the model to initially outline a succinct plan prior to drafting the narrative, serving as an intermediate cognitive bridge. For each task iteration, ten samples are generated using both the standard and CoT methods. Results of the ToT approach are presented without modification.

# AoT Setup

Mirroring ToT's methodology, the task is tackled in a zero-shot setting. Our prompt instructs the model to first formulate five distinct plans. Subsequent to this, the model selects the most promising among them to shape a narrative and then refines it for optimal coherence. The exact prompts used for this zero-shot approach will be provided in the subsequent section.
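As a rough illustration of that flow, a sketch of how such a zero-shot instruction could be assembled; the wording is our paraphrase, not the exact prompt referred to above.

```python
def creative_writing_prompt(sentences: list[str]) -> str:
    """Our paraphrase of the zero-shot AoT flow: plan, pick, write, refine."""
    ends = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(sentences))
    return (
        "Write a coherent four-paragraph story. Each paragraph must end with "
        "the corresponding sentence below.\n"
        f"{ends}\n\n"
        "First, brainstorm five distinct plans. Then choose the most promising "
        "plan, write the story, and finally refine it for coherence."
    )
```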
# Results

As depicted in Fig. 8, AoT outpaces other singular-query prompting techniques such as standard prompting and CoT in terms of performance. It also exhibits a marked improvement over ToT, although the difference is not statistically significant. Comprehensive scores, along with the average query count needed for each method, are consolidated in Table 6. Notably, AoT necessitates fewer queries compared to ToT.
Figure 8: Comparison of the standard prompting, CoT, ToT and AoT on the creative writing task.

Method               Score   Avg. Queries
Standard Prompting   6.19    1
CoT                  6.93    1
ToT                  7.56    20
AoT                  7.58    1

Table 6: Performance of the methods as determined by GPT-4.

# CoT vs. Single Iteration AoT in the Game of 24

To demonstrate that the tree-search mechanism is fundamentally distinct from CoT prompting, even in scenarios where AoT's
in-context examples include only a single initial operation in the game of 24, we draw a comparison between AoT (Short) and CoT. In this setup, AoT (Short) determines the first operation and subsequently conducts a tree search on the remaining three numbers. Interestingly, AoT (Short) achieves a success rate of 48%, while CoT lags significantly, securing only 4%. These results underscore the notion that even a rudimentary search mechanism can lead to significant performance enhancements.
# Detailed Analysis on the Effect of the Length of the Prompts

In this section, we delve deeper into Fig. 6 by presenting histograms for the successful, unsuccessful, and total games of "24", considering the number of initial steps in the methods AoT (Short), AoT, and AoT (Long). These are displayed in Figs. 9-11. From these figures, it becomes evident that the length of the prompts, measured by the number of initial steps included in in-context examples, correlates with the length of their solutions to test examples.
This trend is consistent across all three cases, suggesting that AoT's strategy in determining the number of initial steps is influenced by its in-context examples. Interestingly, when AoT is provided a well-balanced set of initial steps that emphasize the most promising operations, it excels in solving the majority of games in earlier iterations. This indicates AoT's capacity to prioritize swift problem-solving without sacrificing performance. This tendency is also observed in AoT (Long), albeit with a somewhat reduced success rate, as illustrated in Fig. 9.
Figure 9: Histogram of the number of successful games with respect to the number of first steps for AoT (Short), AoT and AoT (Long).
Figure 10: Histogram of the number of unsuccessful games with respect to the number of first steps for AoT (Short), AoT and AoT (Long).

Figure 11: Histogram of the number of all games with respect to the number of first steps for AoT (Short), AoT and AoT (Long).

# Prompts

Game of 24

Below, we present the specific prompts employed for the various methods detailed in the experiments section. Note that the terms "System", "User", and "Assistant" denote the roles within the OpenAI API when operating in chat-completion mode. The line breaks show the transitions between the user and assistant interactions within the API.
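In code, these roles correspond to entries in the `messages` list of the chat-completions endpoint; a brief illustration of how the listings below would be passed to the model (the message contents are abbreviated):

```python
from openai import OpenAI

client = OpenAI()

# In-context examples are passed as prior User/Assistant turns after the
# System message; the final user turn holds the new instance to solve.
messages = [
    {"role": "system", "content": "Use numbers and basic arithmetic ..."},
    {"role": "user", "content": "14 8 8 2"},
    {"role": "assistant", "content": "Trying a promising first operation: ..."},
    {"role": "user", "content": "8 6 4 4"},  # the instance to solve
]
response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```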
# AoT (DFS)

System:
Use numbers and basic arithmetic operations (+ - * /) to obtain 24. When considering the next steps, do not choose operations that will result in a negative or fractional number. In order to help with the calculations, the numbers in the parenthesis represent the numbers that are left after the operations and they are in descending order. Another thing we do is when there are only two numbers left in the parenthesis, we check whether we can arrive at 24 only by using basic arithmetic operations (+ - * /). Some examples regarding this idea:
(21 2) no since 21 + 2 = 23, 21 - 2 = 19, 21 * 2 = 42, 21 / 2 = 10.5, none of which is equal to 24.
(30 6) 30 - 6 = 24 yes
(8 3) 8 * 3 = 24 yes
(12 8) no
(48 2) 48 / 2 = 24 yes
Most importantly, do not give up, all the numbers that will be given has indeed a solution.
User: 14 8 8 2

Assistant:
Trying a promising first operation:
1. 8 + 2: (14, 10, 8)
- 14 + 10: (24, 8) 32, 16, 192, 3
- 14 - 10: (8, 4) 12, 8, 32, 2
- 14 * 10: (140, 8) 148, 132, 1120, fractional
- 14 / 10: (8, 1.4) fractional
- 14 + 8: (22, 10) 32, 12, 220, fractional
- 14 - 8: (10, 6) 16, 4, 60, fractional
- 14 * 8: (112, 10) 122, 102, 1120, fractional
- 14 / 8: fractional
- 10 + 8: (18, 14) 32, 4, 252, fractional
- 10 - 8: (14, 2) 16, 12, 28, 7
- 10 * 8: (80, 14) 94, 66, big, fractional
- 10 / 8: fractional
Trying another promising first operation:
2. 8 / 2: (14, 8, 4)
- 14 + 8: (22, 4) 26, 18, 88, fractional
- 14 - 8: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 8 / 2 = 4
Step 2: 14 - 8 = 6
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = (14 - 8) * 4 = (14 - 8) * (8 / 2) = 24.
answer: (14 - 8) * (8 / 2) = 24.
User: 9 5 5 5

Assistant:
Trying a promising first operation:
1. 9 - 5: (5, 5, 4)
- 5 + 5: (10, 4) 14, 6, 40, fractional
- 5 - 5: (4, 0) 4, 4, 0, undefined
- 5 * 5: (25, 4) 29, 21, 100, fractional
- 5 / 5: (4, 1) 5, 3, 4, 4
- 5 + 4: (9, 5) 14, 4, 45, fractional
- 5 - 4: (5, 1) 6, 4, 5, 0.2
- 5 * 4: (20, 5) 25, 15, 100, fractional
- 5 / 4: fractional
Trying another promising first operation:
2. 5 + 5: (10, 9, 5)
- 10 + 9: (19, 5) 24 = 19 + 5 -> found it!
Backtracking the solution:
Step 1: 5 + 5 = 10
Step 2: 10 + 9 = 19
Step 3: 19 + 5 = 24
Considering these steps: 24 = 19 + 5 = (10 + 9) + 5 = ((5 + 5) + 9) + 5 = 24.
answer: ((5 + 5) + 9) + 5 = 24.
User: 8 6 4 4

Assistant:
Trying a promising first operation:
1. 8 - 6: (4, 4, 2)
- 4 + 4: (8, 2) 10, 6, 16, 4
- 4 - 4: (2, 0) 2, 2, 0, undefined
- 4 * 4: (16, 2) 18, 14, 32, 8
- 4 / 4: (2, 1) 3, 1, 2, 2
- 4 + 2: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 8 - 6 = 2
Step 2: 4 + 2 = 6
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = (4 + 2) * 4 = (4 + (8 - 6)) * 4 = 24.
answer: (4 + (8 - 6)) * 4 = 24.
User: 13 10 9 4

Assistant:
Trying a promising first operation:
1. 13 - 4: (10, 9, 9)
- 10 + 9: (19, 9) 28, 10, 171, fractional
- 10 - 9: (9, 1) 10, 8, 9, 9
- 10 * 9: (90, 9) 99, 81, 810, fractional
- 10 / 9: fractional
- 9 + 9: (18, 10) 28, 8, 180, fractional
- 9 - 9: (10, 0) 10, 10, 0, undefined
- 9 * 9: (81, 10) 91, 71, 810, fractional
- 9 / 9: (10, 1) 11, 9, 10, 10
Trying another promising first operation:
2. 10 - 4: (13, 9, 6)
- 13 + 9: (22, 6) 28, 16, 132, fractional
- 13 - 9: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 10 - 4 = 6
Step 2: 13 - 9 = 4
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = 6 * (13 - 9) = (10 - 4) * (13 - 9) = 24.
answer: (10 - 4) * (13 - 9) = 24.
User: 8 8 5 4

Assistant:
Trying a promising first operation:
1. 8 - 4: (8, 5, 4)
- 8 + 5: (13, 4) 17, 9, 52, fractional
- 8 - 5: (4, 3) 7, 1, 12, fractional
- 8 * 5: (40, 4) 44, 36, 160, fractional
- 8 / 5: fractional
- 8 + 4: (12, 5) 17, 7, 60, fractional
- 8 - 4: (5, 4) 9, 1, 20, fractional
- 8 * 4: (32, 5) 37, 27, 160, fractional
- 8 / 4: (5, 2) 7, 3, 10, 2.5
- 5 + 4: (9, 8) 17, 1, 72, fractional
- 5 - 4: (8, 1) 9, 7, 8, 8
- 5 * 4: (20, 8) 28, 12, 160, fractional
- 5 / 4: fractional
Trying another promising first operation:
2. 8 / 4: (8, 5, 2)
- 8 + 5: (13, 2) 15, 11, 26, fractional
- 8 - 5: (3, 2) 5, 1, 6, 1.5
- 8 * 5: (40, 2) 42, 38, 80, 20
- 8 / 5: fractional
- 8 + 2: (10, 5) 15, 5, 50, 2
- 8 - 2: (6, 5) 11, 1, 30, fractional
- 8 * 2: (16, 5) 21, 11, 80, fractional
- 8 / 2: (5, 4) 9, 1, 20, fractional
- 5 + 2: (7, 8) 15, 1, 56, fractional
- 5 - 2: (8, 3) 11, 5, 24 = 8 * 3 -> found it!
Backtracking the solution:
Step 1: 8 / 4 = 2
Step 2: 5 - 2 = 3
Step 3: 8 * 3 = 24
Considering these steps: 24 = 8 * 3 = 8 * (5 - 2) = 8 * (5 - (8 / 4)) = 24.
answer: 8 * (5 - (8 / 4)) = 24.
User: 11 11 1 1

Assistant:
Trying a promising first operation:
1. 11 - 1: (11, 10, 1)
- 11 + 10: (21, 1) 22, 20, 21, 21
- 11 - 10: (1, 1) 2, 0, 1, 1
- 11 * 10: (110, 1) 111, 109, 110, 110
- 11 / 10: fractional
- 11 + 1: (12, 10) 22, 2, 120, 1.2
- 11 - 1: (10, 10) 20, 0, 100, 1
- 11 * 1: (11, 10) 21, 1, 110, 1.1
- 11 / 1: (11, 10) 21, 1, 110, fractional
- 10 + 1: (11, 11) 22, 0, 121, 1
- 10 - 1: (11, 9) 20, 2, 99, 1.1
- 10 * 1: (11, 10) 21, 1, 110, 1.1
- 10 / 1: (11, 10) 21, 1, 110, 1.1
Trying another promising first operation:
2. 11 + 1: (12, 11, 1)
- 12 + 11: (23, 1) 24 = 23 + 1 -> found it!
Backtracking the solution:
Step 1: 11 + 1 = 12
Step 2: 12 + 11 = 23
Step 3: 23 + 1 = 24
Considering these steps: 24 = 23 + 1 = (12 + 11) + 1 = ((11 + 1) + 11) + 1 = 24.
answer: ((11 + 1) + 11) + 1 = 24.
User: 11 7 4 1

Assistant:
Trying a promising first operation:
1. 7 * 4: (28, 11, 1)
- 28 + 11: (39, 1) 40, 38, 39, 39
- 28 - 11: (17, 1) 18, 16, 17, 17
- 28 * 11: (308, 1) 309, 307, 308, 308
- 28 / 11: fractional
- 28 + 1: (29, 11) 40, 18, 319, fractional
- 28 - 1: (27, 11) 38, 16, 297, fractional
- 28 * 1: (28, 11) 39, 17, 308, fractional
- 28 / 1: (28, 11) 39, 17, 308, fractional
- 11 + 1: (29, 28) 57, 1, 812, fractional
- 11 - 1: (28, 10) 38, 18, 280, fractional
- 11 * 1: (28, 11) 39, 17, 308, fractional
- 11 / 1: (28, 11) 39, 17, 308, fractional
Trying another promising first operation:
2. 7 + 1: (11 8 4)
- 11 + 8: (19, 4) 23, 15, 76, fractional
- 11 - 8: (4, 3) 7, 1, 12, fractional
- 11 * 8: (88, 4) 92, 84, 352, fractional
- 11 / 8: fractional
- 11 + 4: (15, 8) 23, 7, 120, fractional
- 11 - 4: (7, 8) 15, -1, 56, fractional
- 11 * 4: (44, 8) 52, 36, 352, fractional
- 11 / 4: fractional
- 8 + 4: (12, 11) 23, -1, 132, fractional
- 8 - 4: (11, 4) 15, 7, 44, fractional
- 8 * 4: (32, 11) 43, 21, 352, fractional
- 8 / 4: (11, 2) 13, 9, 22, fractional
Trying another promising first operation:
3. 4 + 1: (11 7 5)
- 11 + 7: (18, 5) 23, 13, 90, fractional
- 11 - 7: (5, 4) 9, 1, 20, fractional
- 11 * 7: (77, 5) 82, 72, 385, fractional
- 11 / 7: fractional
- 11 + 5: (16, 7) 23, 9, 112, fractional
- 11 - 5: (7, 6) 13, 1, 42, fractional
- 11 * 5: (55, 7) 62, 48, 385, fractional
- 11 / 5: fractional
- 7 + 5: (12, 11) 23, 1, 132, fractional
- 7 - 5: (11, 2) 13, 9, 22, fractional
- 7 * 5: (35, 11) 46, 24 = 35 - 11 -> found it!
Backtracking the solution:
Step 1: 4 + 1 = 5
Step 2: 7 * 5 = 35
Step 3: 35 - 11 = 24
Considering these steps: 24 = 35 - 11 = (7 * 5) - 11 = (7 * (4 + 1)) - 11 = 24.
answer: (7 * (4 + 1)) - 11 = 24.
User: 11 5 4 3

Assistant:
Trying a promising first operation:
1. 5 * 4: (20, 11, 3)
- 20 + 11: (31, 3) 34, 28, 93, fractional
- 20 - 11: (9, 3) 12, 6, 27, 3
- 20 * 11: (220, 3) 223, 217, 660, fractional
- 20 / 11: fractional
- 20 + 3: (23, 11) 34, 12, 253, fractional
- 20 - 3: (17, 11) 28, 6, 187, fractional
- 20 * 3: (60, 11) 71, 49, 660, fractional
- 20 / 3: fractional
- 11 + 3: (14, 20) 34, -6, 280, fractional
- 11 - 3: (8, 20) 28, -12, 160, fractional
- 11 * 3: (33, 20) 53, 13, 660, fractional
- 11 / 3: fractional
Trying another promising first operation:
2. 11 - 3: (8, 5, 4)
- 8 + 5: (13, 4) 17, 9, 52, fractional
- 8 - 5: (4, 3) 7, 1, 12, fractional
- 8 * 5: (40, 4) 44, 36, 160, fractional
- 8 / 5: fractional
- 8 + 4: (12, 5) 17, 7, 60, fractional
- 8 - 4: (5, 4) 9, 1, 20, fractional
- 8 * 4: (32, 5) 37, 27, 160, fractional
- 8 / 4: (5, 2) 7, 3, 10, 2.5
- 5 + 4: (9, 8) 17, 1, 72, fractional
- 5 - 4: (8, 1) 9, 7, 8, 8
- 5 * 4: (20, 8) 28, 12, 160, fractional
- 5 / 4: fractional
Trying another promising first operation:
3. 11 * 3: (33, 5, 4)
- 33 + 5: (38, 4) 42, 34, 152, fractional
- 33 - 5: (28, 4) 32, 24 = 28 - 4 -> found it!
Backtracking the solution:
Step 1: 11 * 3 = 33
Step 2: 33 - 5 = 28
Step 3: 28 - 4 = 24
Considering these steps: 24 = 28 - 4 = (33 - 5) - 4 = ((11 * 3) - 5) - 4 = 24.
answer: ((11 * 3) - 5) - 4 = 24.

User: 13 12 5 2

Assistant:
Trying a promising first operation:
1. 12 + 2: (14, 13, 5)
- 14 + 13: (27, 5) 32, 22, 135, fractional
- 14 - 13: (5, 1) 6, 4, 5, 5
- 14 * 13: (182, 5) 187, 177, 910, fractional
- 14 / 13: fractional
- 14 + 5: (19, 13) 32, 6, 247, fractional
- 14 - 5: (13, 9) 22, 4, 117, fractional
- 14 * 5: (70, 13) 83, 57, 910, fractional
- 14 / 5: fractional
- 13 + 5: (18, 14) 32, 4, 252, fractional
- 13 - 5: (14, 8) 22, 6, 112, fractional
- 13 * 5: (65, 14) 79, 51, 910, fractional
- 13 / 5: fractional
Trying another promising first operation:
2. 13 + 5: (18, 12, 2)
- 18 + 12: (30, 2) 32, 28, 60, fractional
- 18 - 12: (6, 2) 8, 4, 12, 3
- 18 * 12: (216, 2) 218, 214, 432, fractional
- 18 / 12: fractional
- 18 + 2: (20, 12) 32, 8, 240, fractional
- 18 - 2: (16, 12) 28, 4, 192, fractional
- 18 * 2: (36, 12) 48, 24 = 36 - 12 -> found it!
Backtracking the solution:
Step 1: 13 + 5 = 18
Step 2: 18 * 2 = 36
Step 3: 36 - 12 = 24
Considering these steps: 24 = 36 - 12 = (18 * 2) - 12 = ((13 + 5) * 2) - 12 = 24.
answer: ((13 + 5) * 2) - 12 = 24.
User: 9 8 2 1

Assistant:
Trying a promising first operation:
1. 9 - 1: (8, 8, 2)
- 8 + 8: (16, 2) 18, 14, 32, 8
- 8 - 8: (2, 0) 2, 2, 0, undefined
- 8 * 8: (64, 2) 66, 62, 128, 32
- 8 / 8: (2, 1) 3, 1, 2, 2
- 8 + 2: (10, 8) 18, 2, 80, 4
- 8 - 2: (6, 8) 14, -2, 48, fractional
- 8 * 2: (16, 8) 24 = 16 + 8 -> found it!
Backtracking the solution:
Step 1: 9 - 1 = 8
Step 2: 8 * 2 = 16
Step 3: 16 + 8 = 24
Considering these steps: 24 = 16 + 8 = (8 * 2) + 8 = ((9 - 1) * 2) + 8 = 24.
answer: ((9 - 1) * 2) + 8 = 24.
# AoT (Long)

System:
Use numbers and basic arithmetic operations (+ - * /) to obtain 24. When considering the next steps, do not choose operations that will result in a negative or fractional number. In order to help with the calculations, the numbers in the parenthesis represent the numbers that are left after the operations and they are in descending order. Another thing we do is when there are only two numbers left in the parenthesis, we check whether we can arrive at 24 only by using basic arithmetic operations (+ - * /). Some examples regarding this idea:
(21 2) no since 21 + 2 = 23, 21 - 2 = 19, 21 * 2 = 42, 21 / 2 = 10.5, none of which is equal to 24.
(30 6) 30 - 6 = 24 yes
(8 3) 8 * 3 = 24 yes
(12 8) no
(48 2) 48 / 2 = 24 yes
Most importantly, do not give up, all the numbers that will be given has indeed a solution.

User: 14 8 8 2

Assistant:
Trying a promising first operation:
1. 8 + 2: (14, 10, 8)
- 14 + 10: (24, 8) 32, 16, 192, 3
- 14 - 10: (8, 4) 12, 8, 32, 2
- 14 * 10: (140, 8) 148, 132, 1120, fractional
- 14 / 10: (8, 1.4) fractional
- 14 + 8: (22, 10) 32, 12, 220, fractional
- 14 - 8: (10, 6) 16, 4, 60, fractional
- 14 * 8: (112, 10) 122, 102, 1120, fractional
- 14 / 8: fractional
- 10 + 8: (18, 14) 32, 4, 252, fractional
- 10 - 8: (14, 2) 16, 12, 28, 7
- 10 * 8: (80, 14) 94, 66, big, fractional
- 10 / 8: fractional
Trying another promising first operation:
2. 14 + 8: (22, 8, 2)
- 22 + 8: (30, 2) 32, 28, 60, 15
- 22 - 8: (14, 2) 16, 12, 28, 7
- 22 * 8: (176, 2) 178, 174, 88
- 22 / 8: (2.75, 2) fractional
- 22 + 2: (24, 8) 32, 16, 192, 3
- 22 - 2: (20, 8) 28, 12, 160, fractional
- 22 * 2: (44, 8) 52, 36, 352, fractional
- 22 / 2: (11, 8) 19, 3, 88, fractional
- 8 + 2: (22, 10) 32, 12, 220, fractional
- 8 - 2: (22, 6) 28, 16, 132, fractional
- 8 * 2: (22, 16) 38, 6, 352, fractional
- 8 / 2: (22, 4) 26, 18, 88, fractional
Trying another promising first operation:
3. 14 + 2: (16, 8, 8)
- 16 + 8: (24, 8) 32, 16, 192, 3
- 16 - 8: (8, 8) 16, 0, 64, 1
- 16 * 8: (128, 8) 136, 120, 1024, 16
- 16 / 8: (8, 2) 10, 6, 16, 4
- 8 + 8: (16, 16) 32, 0, 256, 1
- 8 - 8: (16, 0) 16, 16, 0, undefined
- 8 * 8: (64, 16) 80, 48, 1024, 4
- 8 / 8: (16, 1) 17, 15, 16, 16
Trying another promising first operation:
4. 8 - 2: (14, 8, 6)
- 14 + 8: (22, 14) 36, 8, 308, fractional
- 14 - 8: (6, 6) 12, 0, 36, 1
- 14 * 8: (112, 6) 118, 106, 672, fractional
- 14 / 8: (6, 1.75) fractional
- 14 + 6: (20, 8) 22, 12, 160, fractional
- 14 - 6: (8, 8) 16, 0, 64, 1
- 14 * 6: (84, 8) 92, 76, 672, fractional
- 14 / 6: (8, 2.3) fractional
- 8 + 6: (14, 14) 28, 0, 196, 1
- 8 - 6: (14, 2) 16, 12, 28, 7
- 8 * 6: (48, 14) 62, 34, 672, fractional
- 8 / 6: (14, 1.3) fractional
Trying another promising first operation:
5. 8 * 2: (16, 14, 8)
- 16 + 14: (30, 8) 38, 22, 240, fractional
- 16 - 14: (8, 2) 10, 6, 16, 4
- 16 * 14: (224, 8) 232, 216, 1792, 28
- 16 / 14: (8, 1.1) fractional
- 16 + 8: (24, 14) 38, 10, 336, fractional
- 16 - 8: (14, 8) 22, 6, 112, fractional
- 16 * 8: (128, 14) 142, 112, 1792, fractional
- 16 / 8: (14, 2) 16, 12, 28, 7
- 14 + 8: (22, 16) 38, 6, 352, fractional
- 14 - 8: (16, 6) 22, 10, 96, fractional
- 14 * 8: (112, 16) 128, 96, 1792, 7
- 14 / 8: (16, 1.7) fractional
Trying another promising first operation:
6. 14 * 2: (28, 8, 8)
- 28 + 8: (36, 8) 44, 28, 288, fractional
- 28 - 8: (20, 8) 28, 12, 160, fractional
- 28 * 8: (224, 8) 232, 216, 1792, 28
- 28 / 8: (8, 3.5) fractional, fractional, 28, fractional
- 8 + 8: (16, 16) 32, 0, 256, 1
- 8 - 8: (16, 0) 16, 16, 0, undefined
- 8 * 8: (64, 16) 80, 48, 1024, 4
- 8 / 8: (16, 1) 17, 15, 16, 16
Trying another promising first operation:
7. 8 / 2: (14, 8, 4)
- 14 + 8: (22, 4) 26, 18, 88, fractional
- 14 - 8: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 8 / 2 = 4
Step 2: 14 - 8 = 6
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = (14 - 8) * 4 = (14 - 8) * (8 / 2) = 24.
answer: (14 - 8) * (8 / 2) = 24.