Dataset columns (value types and string-length / value ranges as reported by the dataset viewer):

| Column | Type | Length / range |
|---|---|---|
| doi | string | 10 |
| chunk-id | int64 | 0–936 |
| chunk | string | 401–2.02k |
| id | string | 12–14 |
| title | string | 8–162 |
| summary | string | 228–1.92k |
| source | string | 31 |
| authors | string | 7–6.97k |
| categories | string | 5–107 |
| comment | string (nullable ⌀) | 4–398 |
| journal_ref | string (nullable ⌀) | 8–194 |
| primary_category | string | 5–17 |
| published | string | 8 |
| updated | string | 8 |
| references | list | |
2308.10379 | 36 | How does the search step count within the algorithmic example modulate AoT's behavior? We begin with the standard AoT prompt and modify the subtree explorations. In AoT (Short), each in-context example uses one or two steps to reach a solution, while AoT (Long) incorporates three to five extra subtree explorations. The impact on total search steps is illustrated in Fig. 6. Our observations highlight longer generations for AoT (Long) and shorter ones for AoT (Short) relative to the original AoT. This suggests that the search step count introduces an implicit bias on the LLM's search velocity. Notably, even when navigating incorrect steps, it is essential to emphasize the exploration of promising directions.

| Method | Success | Avg. Queries |
|---|---|---|
| CoT | 4% | 1 |
| CoT-SC (k=100) | 9% | 100 |
| ToT | 69% | 109.1 |
| AoT (DFS) | 71% | 1 |
| AoT (BFS) | 48% | 1 |
| AoT (Random) | 20% | 1 |

Table 5: Comparative success rates and average LLM query counts for AoT variations templated by distinct algorithms.
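To make the templating idea concrete, below is a minimal sketch (our own illustration, not the authors' released code) of the kind of depth-first search over game-of-24 states whose trace an AoT prompt would spell out in its in-context examples; AoT (Short) would keep only one or two steps of such a trace per example, while AoT (Long) would include several extra subtree explorations. The function name and trace format are assumptions made for illustration.

```python
from itertools import combinations

# Illustrative DFS over game-of-24 states; its step trace is the kind of
# "algorithmic example" that AoT writes into the prompt.
def solve_24(numbers, trace=()):
    if len(numbers) == 1:
        return list(trace) if abs(numbers[0] - 24) < 1e-6 else None
    for (i, a), (j, b) in combinations(list(enumerate(numbers)), 2):
        rest = [n for k, n in enumerate(numbers) if k not in (i, j)]
        for x, y in ((a, b), (b, a)):
            for op, value in (("+", x + y), ("-", x - y), ("*", x * y),
                              ("/", x / y if y else None)):
                # Mirror the prompt's rule: no negative or fractional intermediates.
                if value is None or value < 0 or abs(value - round(value)) > 1e-6:
                    continue
                value = int(round(value))
                step = f"{x:g} {op} {y:g} = {value} (left: {sorted(rest + [value], reverse=True)})"
                found = solve_24(rest + [value], trace + (step,))
                if found is not None:
                    return found
    return None

print("\n".join(solve_24([8, 6, 4, 4])))
```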
[Figure 6 plot: number of games solved (0–100) versus number of visited nodes (0–400) for AoT (Short), AoT, and AoT (Long).] | 2308.10379#36 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
resorts to an external modus operandi involving halting, modifying, and then
resuming the generation process to boost Large Language Models' (LLMs)
reasoning capacities. This mode escalates the number of query requests, leading
to increased costs, memory, and computational overheads. Addressing this, we
propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through
algorithmic reasoning pathways, pioneering a new mode of in-context learning.
By employing algorithmic examples, we exploit the innate recurrence dynamics of
LLMs, expanding their idea exploration with merely one or a few queries. Our
technique outperforms earlier single-query methods and stands on par with a
recent multi-query strategy that employs an extensive tree search algorithm.
Intriguingly, our results suggest that instructing an LLM using an algorithm
can lead to performance surpassing that of the algorithm itself, hinting at
LLM's inherent ability to weave its intuition into optimized searches. We probe
into the underpinnings of our method's efficacy and its nuances in application. | http://arxiv.org/pdf/2308.10379 | Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Ruoxi Jia, Ming Jin | cs.CL, cs.AI | null | null | cs.CL | 20230820 | 20230928 | [
{
"id": "2204.02311"
},
{
"id": "2206.07682"
},
{
"id": "2305.10601"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "2108.07732"
},
{
"id": "2305.04388"
},
{
"id": "2211.09110"
},
{
"id": "2207.05221"
},
{
"id": "2212.10403"
},
{
"id": "2305.08291"
},
{
"id": "2307.13702"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2305.05176"
},
{
"id": "2304.01852"
},
{
"id": "2201.08239"
},
{
"id": "2302.07842"
}
] |
2308.10379 | 37 | [Figure 6 plot: number of games solved versus number of visited nodes for AoT (Short), AoT, and AoT (Long); caption below.]
Figure 6: Comparison of AoT prompted with shorter and longer in-context examples: cumulative number of games solved versus the number of visited nodes.
Limitations. While AoT substantially cuts down on the number of queries relative to ToT, its resource demands exceed those of standard prompting and CoT, a consequence of its extensive exploration of ideas via token generation. Crafting token-efficient algorithmic examples is one avenue, but there is also potential in judiciously tapping into or unlocking the LLM's "tunnel vision". Our research primarily spotlighted certain algorithms, with a keen focus on tree-search tasks. It is pertinent to highlight that we conducted our tests exclusively with GPT-4. Though more costly than other LLMs, GPT-4's advanced capabilities appear pivotal for AoT's optimal functioning; models of lesser caliber might not yield comparable performance boosts from AoT. | 2308.10379#37 |
2308.10379 | 38 | Conclusion. This paper presents the Algorithm of Thoughts, a pioneering prompting strategy to navigate reasoning pathways in LLMs using minimal queries. Our findings reveal that this method not only substantially surpasses prior single-query techniques but also rivals external tree-search implementations. Such an approach augments the potential to streamline idea discovery in LLMs, balancing both cost and computational demands. Future work includes designing token-efficient algorithmic examples, developing adaptive mechanisms for "tunnel vision" activation to expedite the search, and deepening the understanding of this fresh mode of in-context learning from theoretical angles. | 2308.10379#38 |
2308.10379 | 39 | References. Aminabadi, R. Y.; Rajbhandari, S.; Awan, A. A.; Li, C.; Li, D.; Zheng, E.; Ruwase, O.; Smith, S.; Zhang, M.; Rasley, J.; et al. 2022. DeepSpeed-inference: enabling efficient inference of transformer models at unprecedented scale. In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, 1–15. IEEE. Austin, J.; Odena, A.; Nye, M.; Bosma, M.; Michalewski, H.; Dohan, D.; Jiang, E.; Cai, C.; Terry, M.; Le, Q.; et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732. Baddeley, A. 2003. Working memory: looking back and looking forward. Nature Reviews Neuroscience, 4(10): 829–839. Bai, Y.; Kadavath, S.; Kundu, S.; Askell, A.; Kernion, J.; Jones, A.; Chen, A.; Goldie, A.; Mirhoseini, A.; McKinnon, C.; Chen, C.; Olsson, C.; Olah, | 2308.10379#39 |
2308.10379 | 40 | Jones, A.; Chen, A.; Goldie, A.; Mirhoseini, A.; McKinnon, C.; Chen, C.; Olsson, C.; Olah, C.; Hernandez, D.; Drain, D.; Ganguli, D.; Li, D.; Tran-Johnson, E.; Perez, E.; Kerr, J.; Mueller, J.; Ladish, J.; Landau, J.; Ndousse, K.; Lukosuite, K.; Lovitt, L.; Sellitto, M.; Elhage, N.; Schiefer, N.; Mercado, N.; DasSarma, N.; Lasenby, R.; Larson, R.; Ringer, S.; Johnston, S.; Kravec, S.; Showk, S. E.; Fort, S.; Lanham, T.; Telleen-Lawton, T.; Conerly, T.; Henighan, T.; Hume, T.; Bowman, S. R.; Hatfield-Dodds, Z.; Mann, B.; Amodei, D.; Joseph, N.; McCandlish, S.; Brown, T.; and Kaplan, J. 2022. Constitutional AI: Harmlessness from AI Feedback. ArXiv:2212.08073 [cs]. Banerjee, S.; | 2308.10379#40 |
2308.10379 | 41 | and Kaplan, J. 2022. Constitutional AI: Harmlessness from AI Feedback. ArXiv:2212.08073 [cs]. Banerjee, S.; Bringsjord, S.; Giancola, M.; and Govindarajulu, N. S. 2022. Qualitative Mechanical Problem-Solving by Artificial Agents: Further Progress, Under Psychometric AI. In The International FLAIRS Conference Proceedings, volume 35. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. Advances in Neural Information | 2308.10379#41 |
2308.10379 | 42 | S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33: 1877–1901. Chen, L.; Zaharia, M.; and Zou, J. 2023. FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance. arXiv preprint arXiv:2305.05176. Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H. P. d. O.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. | 2308.10379#42 |
2308.10379 | 43 | Dhar, P. 2020. The carbon impact of artificial intelligence. Nat. Mach. Intell., 2(8): 423–425. Drozdov, A.; Schärli, N.; Akyürek, E.; Scales, N.; Song, X.; Chen, X.; Bousquet, O.; and Zhou, D. 2022. Compositional Semantic Parsing with Large Language Models. Helie, S.; and Pizlo, Z. 2022. When is psychology research useful in artificial intelligence? A case for reducing computational complexity in problem solving. Topics in Cognitive Science, 14(4): 687–701. Holyoak, K. J.; and Morrison, R. G. 2005. The Cambridge Handbook of Thinking and Reasoning. Cambridge University Press. Huang, J.; and Chang, K. C.-C. 2022. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403. Kadavath, S.; Conerly, T.; Askell, A.; Henighan, T.; Drain, D.; Perez, E.; Schiefer, N.; Hatfield-Dodds, Z.; DasSarma, N.; Tran-Johnson, E.; et | 2308.10379#43 |
2308.10379 | 44 | D.; Perez, E.; Schiefer, N.; Hatfield-Dodds, Z.; DasSarma, N.; Tran-Johnson, E.; et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221. Kahneman, D. 2011. Thinking, Fast and Slow. Macmillan. Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa, Y. 2022. Large Language Models are Zero-Shot Reasoners. Advances in Neural Information Processing Systems, 35: 22199–22213. Lanham, T.; Chen, A.; Radhakrishnan, A.; Steiner, B.; Denison, C.; Hernandez, D.; Li, D.; Durmus, E.; Hubinger, E.; Kernion, J.; et al. 2023. Measuring Faithfulness in Chain-of-Thought Reasoning. arXiv preprint arXiv:2307.13702. Liang, P.; Bommasani, R.; Lee, T.; Tsipras, D.; Soylu, D.; Yasunaga, M.; Zhang, Y.; Narayanan, D.; Wu, Y.; Kumar, A.; et | 2308.10379#44 |
2308.10379 | 45 | D.; Soylu, D.; Yasunaga, M.; Zhang, Y.; Narayanan, D.; Wu, Y.; Kumar, A.; et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110. Libby, M. E.; Weiss, J. S.; Bancroft, S.; and Ahearn, W. H. 2008. A comparison of most-to-least and least-to-most prompting on the acquisition of solitary play skills. Behavior Analysis in Practice, 1: 37–43. Liu, Y.; Han, T.; Ma, S.; Zhang, J.; Yang, Y.; Tian, J.; He, H.; Li, A.; He, M.; Liu, Z.; et al. 2023. Summary of ChatGPT/GPT-4 research and perspective towards the future of large language models. arXiv preprint arXiv:2304.01852. Long, J. 2023. Large Language Model Guided Tree-of-Thought. arXiv preprint arXiv:2305.08291. Lyu, Q.; Havaldar, S.; Stein, A.; Zhang, L.; Rao, D.; | 2308.10379#45 |
2308.10379 | 46 | preprint arXiv:2305.08291. Lyu, Q.; Havaldar, S.; Stein, A.; Zhang, L.; Rao, D.; Wong, E.; Apidianaki, M.; and Callison-Burch, C. 2023. Faithful Chain-of-Thought Reasoning. ArXiv:2301.13379 [cs]. Mialon, G.; Dessì, R.; Lomeli, M.; Nalmpantis, C.; Pasunuru, R.; Raileanu, R.; Rozière, B.; Schick, T.; Dwivedi-Yu, J.; Celikyilmaz, A.; et al. 2023. Augmented language models: a survey. arXiv preprint arXiv:2302.07842. Monsell, S. 2003. Task switching. Trends in Cognitive Sciences, 7(3): 134–140. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing | 2308.10379#46 |
2308.10379 | 48 | Robinson, J.; and Wingate, D. 2022. Leveraging Large Language Models for Multiple Choice Question Answering. Shao, Z.; Gong, Y.; Shen, Y.; Huang, M.; Duan, N.; and Chen, W. 2023. Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models. Sloman, S. A. 1996. The empirical case for two systems of reasoning. Psychological Bulletin, 119(1): 3. Srivastava, A.; Rastogi, A.; Rao, A.; Shoeb, A. A. M.; Abid, A.; Fisch, A.; Brown, A. R.; Santoro, A.; Gupta, A.; Garriga-Alonso, A.; et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615. Suzgun, M.; Scales, N.; Schärli, N.; Gehrmann, S.; Tay, Y.; Chung, H. W.; Chowdhery, A.; Le, Q. V.; Chi, E. H.; Zhou, D.; and Wei, | 2308.10379#48 |
2308.10379 | 49 | Y.; Chung, H. W.; Chowdhery, A.; Le, Q. V.; Chi, E. H.; Zhou, D.; and Wei, J. 2022. Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them. ArXiv:2210.09261 [cs]. Thoppilan, R.; De Freitas, D.; Hall, J.; Shazeer, N.; Kulshreshtha, A.; Cheng, H.-T.; Jin, A.; Bos, T.; Baker, L.; Du, Y.; et al. 2022. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239. Turpin, M.; Michael, J.; Perez, E.; and Bowman, S. R. 2023. Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting. arXiv preprint arXiv:2305.04388. Wang, X.; Wei, J.; Schuurmans, D.; Le, Q. V.; Chi, E. H.; Narang, S.; Chowdhery, A.; and Zhou, D. 2022. Self- | 2308.10379#49 |
2308.10379 | 50 | D.; Le, Q. V.; Chi, E. H.; Narang, S.; Chowdhery, A.; and Zhou, D. 2022. Self-Consistency Improves Chain of Thought Reasoning in Language Models. Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; Chi, E. H.; Hashimoto, T.; Vinyals, O.; Liang, P.; Dean, J.; and Fedus, W. 2022a. Emergent Abilities of Large Language Models. ArXiv:2206.07682 [cs]. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Ichter, B.; Xia, F.; Chi, E.; Le, Q. V.; and Zhou, D. 2022b. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Advances in Neural Information Processing Systems, 35: 24824–24837. Wu, C.-J.; Raghavendra, R.; Gupta, U.; Acun, | 2308.10379#50 |
2308.10379 | 51 | Neural Information Processing Systems, 35: 24824–24837. Wu, C.-J.; Raghavendra, R.; Gupta, U.; Acun, B.; Ardalani, N.; Maeng, K.; Chang, G.; Aga, F.; Huang, J.; Bai, C.; et al. 2022. Sustainable AI: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4: 795–813. Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. ArXiv:2305.10601 [cs]. Zelikman, E.; Wu, Y.; Mu, J.; and Goodman, N. 2022. STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35: 15476–15488. Zhang, Z.; Zhang, A.; Li, M.; and Smola, A. 2022. Automatic Chain of Thought Prompting in Large Language Models. Zhou, D.; Schärli, N.; Hou, | 2308.10379#51 |
2308.10379 | 53 | Chi, E. H. 2022. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.
# Game of 24 - Additional Details
In order to avoid confusion in our analysis of AoT in the game of 24, we give additional details about the terminology we use and its direct implications for the performance figures. An illustration is given in Fig. 7.
[Figure 7 diagram: search tree for the input 8 6 4 4, showing first operations (e.g., 8 - 6 = 2), a subtree exploration of second operations (e.g., 4 + 2 = 6, 4 / 4 = 1), third operations (e.g., 6 * 4 = 24, 6 + 4 = 10), and the visited nodes with the numbers left at each state.]
Figure 7: An illustration of the terminology we use for the game of 24. The yellow nodes represent the first operations and the states they lead to; the green node marks the state where the solution is found; all other nodes are shown in pink.
First operations / first iterations. After choosing the first two numbers in the game of 24, this refers to the first operation applied to them: adding, subtracting, multiplying, or dividing.
Subtree Exploration. This denotes searching all or most of the nodes branching from the same state, typically states with fewer than four numbers left. | 2308.10379#53 |
2308.10379 | 54 | Subtree Exploration. This denotes searching all or most of the nodes branching from the same state, typically states with fewer than four numbers left.
Number of nodes visited. This is the number of states that the method has visited in the game of 24. Each state is the set of numbers we are left with after the operations applied so far. For example, after the first operation we might be left with the numbers "8 3 1". This set of numbers represents a state, as does the state "8 3" that we would be left with after a further operation such as "8 × 1 = 8".
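As a hedged illustration of these terms (our own sketch, not the paper's measurement code), the helper below counts every state a depth-first search enters as one visited node for a given game-of-24 input:

```python
from itertools import combinations

def count_visited_nodes(numbers):
    """Count the states ("visited nodes") a DFS enters for a game-of-24 input."""
    visited = 0

    def dfs(state):
        nonlocal visited
        visited += 1                      # each state entered counts as a visited node
        if len(state) == 1:
            return abs(state[0] - 24) < 1e-6
        for (i, a), (j, b) in combinations(list(enumerate(state)), 2):
            rest = [n for k, n in enumerate(state) if k not in (i, j)]
            results = {a + b, a * b, abs(a - b)}
            if b: results.add(a / b)
            if a: results.add(b / a)
            for value in results:
                # Skip negative or fractional intermediates, as in the prompt.
                if value < 0 or abs(value - round(value)) > 1e-6:
                    continue
                if dfs(rest + [value]):
                    return True           # stop at the first solving subtree
        return False

    dfs(list(numbers))
    return visited

print(count_visited_nodes([8, 6, 4, 4]))  # the Figure 7 example input
```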
# Creative Writing
We use the creative writing task, also used by (Yao et al. 2023), where the LLM is provided with four arbitrary sentences. The objective is to craft a cohesive narrative divided into four paragraphs, with each paragraph culminating in one of the given sentences. This exercise not only fosters creativity but also emphasizes strategic deliberation.
# Task Setup | 2308.10379#54 |
2308.10379 | 55 | # Task Setup
Sentences are randomly sourced from randomwordgenerator.com, resulting in 100 distinct sets of inputs. Given the absence of predetermined correct answers, the primary focus lies in evaluating the coherence of the responses. We have noted that GPT-4 consistently aligns with these input guidelines. Evaluation centers on assessing passage coherence using a GPT-4 zero-shot prompt, where each output is rated on a scale of 1 to 10. Each task response undergoes five such evaluations, and the scores are then averaged.
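A minimal sketch of this scoring loop, assuming the `openai` Python client; the wording of the zero-shot scoring prompt below is our paraphrase (not the exact prompt used), and error handling of the parsed score is omitted.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical paraphrase of the zero-shot coherence-scoring prompt.
SCORING_PROMPT = ("Rate the coherence of the following passage on a scale of "
                  "1 to 10. Respond with a single number.\n\n{passage}")

def coherence_score(passage: str, n_evals: int = 5) -> float:
    scores = []
    for _ in range(n_evals):              # five independent ratings, then averaged
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": SCORING_PROMPT.format(passage=passage)}],
        )
        scores.append(float(reply.choices[0].message.content.strip()))
    return sum(scores) / len(scores)
```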
# Baselines
For this task, both standard and CoT prompts are employed without preliminary training. While the standard prompt directly guides the LLM to fashion a cohesive narrative based on stipulated parameters, the CoT prompt obliges the model to initially outline a succinct plan prior to drafting the narrative, serving as an intermediate cognitive bridge. For each task iteration, ten samples are generated using both the standard and CoT methods. Results of the ToT approach are presented without modification.
AoT Setup. Mirroring ToT's methodology, the task is tackled in a zero-shot setting. Our prompt instructs the model to first formulate five distinct plans. The model then selects the most promising plan, shapes it into a narrative, and refines the narrative for optimal coherence. The exact prompts used for this zero-shot approach are provided in the subsequent section. | 2308.10379#55 |
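For concreteness, here is a sketch of how such a single-query prompt could be assembled; the wording is our paraphrase of the procedure just described (five plans, pick the most promising, draft, then refine), not the prompt released with the paper.

```python
# Hypothetical paraphrase of the single-query AoT creative-writing prompt.
AOT_WRITING_PROMPT = """\
You are given four sentences. Write a coherent four-paragraph passage in which
each paragraph ends with one of the given sentences, in order.

1. First, devise five distinct plans for the passage.
2. Pick the most promising plan and briefly say why.
3. Write the full passage following that plan.
4. Revise the passage once more for coherence and output the final version.

Sentences:
{sentences}
"""

def build_aot_writing_messages(sentences):
    listed = "\n".join(f"- {s}" for s in sentences)
    return [{"role": "user", "content": AOT_WRITING_PROMPT.format(sentences=listed)}]
```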
2308.10379 | 56 | Results. As depicted in Fig. 8, AoT outpaces other single-query prompting techniques such as standard prompting and CoT in terms of performance. It also exhibits a marked improvement over ToT, although the difference is not statistically significant. Comprehensive scores, along with the average query count needed by each method, are consolidated in Table 6. Notably, AoT necessitates fewer queries than ToT.
[Figure 8 plot: distribution of coherence scores (0–10) for Standard, CoT, ToT, and AoT.]
Figure 8: Comparison of the standard prompting, CoT, ToT and AoT on the creative writing task.
| Method | Score | Avg. Queries |
|---|---|---|
| Standard Prompting | 6.19 | 1 |
| CoT | 6.93 | 1 |
| ToT | 7.56 | 20 |
| AoT | 7.58 | 1 |
Table 6: Performance of the methods as evaluated by GPT-4. | 2308.10379#56 |
2308.10379 | 57 | Table 6: Performance of the methods determined by GPT-4.
CoT vs. Single-Iteration AoT in the Game of 24. To demonstrate that the tree-search mechanism is fundamentally distinct from CoT prompting, even in scenarios where AoT's in-context examples include only a single initial operation in the game of 24, we draw a comparison between AoT (Short) and CoT. In this setup, AoT (Short) determines the first operation and subsequently conducts a tree search on the remaining three numbers. Interestingly, AoT (Short) achieves a success rate of 48%, while CoT lags significantly, securing only 4%. These results underscore the notion that even a rudimentary search mechanism can lead to significant performance enhancements.
Detailed Analysis of the Effect of Prompt Length. In this section, we delve deeper into Fig. 6 by presenting histograms of the successful, unsuccessful, and total games of "24" with respect to the number of initial steps for AoT (Short), AoT, and AoT (Long). These are displayed in Figs. 9–11.
From these figures, it becomes evident that the length of the prompts, measured by the number of initial steps included in the in-context examples, correlates with the length of the solutions produced on test examples. This trend is consistent across all three cases, suggesting that AoT's strategy for determining the number of initial steps is influenced by its in-context examples. | 2308.10379#57 |
2308.10379 | 58 | Interestingly, when AoT is provided a well-balanced set of initial steps that emphasize the most promising operations, it excels at solving the majority of games in earlier iterations. This indicates AoT's capacity to prioritize swift problem-solving without sacrificing performance. This tendency is also observed in AoT (Long), albeit with a somewhat reduced success rate, as illustrated in Fig. 9.
Figure 9: Histogram of the number of successful games with respect to the number of first steps for AoT (Short), AoT and AoT (Long).
Figure 10: Histogram of the number of unsuccessful games with respect to the number of first steps for AoT (Short), AoT and AoT (Long).
# Prompts
Game of 24 Below, we present the specific prompts employed for the various methods detailed in the experiments section. Note that the terms "System", "User", and "Assistant" denote the roles within the OpenAI API when operating in chat-completion mode. The line breaks show the transitions between the user and assistant interactions within the API.
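For reference, this is roughly how such a prompt is packaged into chat-completion messages (a minimal sketch of our own, not code released with the paper; the system text and few-shot turns are abbreviated here, and any endpoint or model settings are left to the caller):

```python
# Assemble the AoT (DFS) prompt for one Game of 24 instance as chat-completion
# messages. Sending these messages to a chat-completion endpoint (e.g. via the
# OpenAI SDK) is all the method needs: a single query per game.
SYSTEM_TEXT = "Use numbers and basic arithmetic operations (+ - * /) to obtain 24. ..."

FEW_SHOT = [
    ("14 8 8 2", "Trying a promising first operation: ... answer: (14 - 8) * (8 / 2) = 24."),
    ("9 5 5 5",  "Trying a promising first operation: ... answer: ((5 + 5) + 9) + 5 = 24."),
]

def build_messages(puzzle: str):
    messages = [{"role": "system", "content": SYSTEM_TEXT}]
    for user_turn, assistant_turn in FEW_SHOT:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": puzzle})  # the new game to solve
    return messages

print(build_messages("11 5 4 3")[0]["role"])  # 'system'
```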
Figure 11: Histogram of the number of all games with respect to the number of first steps for AoT (Short), AoT and AoT (Long).

# AoT (DFS)

System:
Use numbers and basic arithmetic operations (+ - * /) to obtain 24. When considering the next steps, do not choose operations that will result in a negative or fractional number. In order to help with the calculations, the numbers in the parenthesis represent the numbers that are left after the operations and they are in descending order.
Another thing we do is when there are only two numbers left in the parenthesis, we check whether we can arrive at 24 only by using basic arithmetic operations (+ - * /). Some examples regarding this idea:
(21 2) no since 21 + 2 = 23, 21 - 2 = 19, 21 * 2 = 42, 21 / 2 = 10.5, none of which is equal to 24.
(30 6) 30 - 6 = 24 yes
(8 3) 8 * 3 = 24 yes
(12 8) no
(48 2) 48 / 2 = 24 yes
Most importantly, do not give up, all the numbers that will be given has indeed a solution.

--------------------------------------------------------------------------------

User:
14 8 8 2

Assistant:
Trying a promising first operation:
1. 8 + 2: (14, 10, 8)
- 14 + 10: (24, 8) 32, 16, 192, 3
- 14 - 10: (8, 4) 12, 8, 32, 2
- 14 * 10: (140, 8) 148, 132, 1120, fractional
- 14 / 10: (8, 1.4) fractional
- 14 + 8: (22, 10) 32, 12, 220, fractional
- 14 - 8: (10, 6) 16, 4, 60, fractional
- 14 * 8: (112, 10) 122, 102, 1120, fractional
- 14 / 8: fractional
- 10 + 8: (18, 14) 32, 4, 252, fractional
- 10 - 8: (14, 2) 16, 12, 28, 7
- 10 * 8: (80, 14) 94, 66, big, fractional
- 10 / 8: fractional
Trying another promising first operation:
2. 8 / 2: (14, 8, 4)
- 14 + 8: (22, 4) 26, 18, 88, fractional
- 14 - 8: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 8 / 2 = 4
Step 2: 14 - 8 = 6
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = (14 - 8) * 4 = (14 - 8) * (8 / 2) = 24.
answer: (14 - 8) * (8 / 2) = 24.

--------------------------------------------------------------------------------

User:
9 5 5 5

Assistant:
Trying a promising first operation:
1. 9 - 5: (5, 5, 4)
- 5 + 5: (10, 4) 14, 6, 40, fractional
- 5 - 5: (4, 0) 4, 4, 0, undefined
- 5 * 5: (25, 4) 29, 21, 100, fractional
- 5 / 5: (4, 1) 5, 3, 4, 4
- 5 + 4: (9, 5) 14, 4, 45, fractional
- 5 - 4: (5, 1) 6, 4, 5, 0.2
- 5 * 4: (20, 5) 25, 15, 100, fractional
- 5 / 4: fractional
Trying another promising first operation:
2. 5 + 5: (10, 9, 5)
- 10 + 9: (19, 5) 24 = 19 + 5 -> found it!
Backtracking the solution:
Step 1: 5 + 5 = 10
Step 2: 10 + 9 = 19
Step 3: 19 + 5 = 24
Considering these steps: 24 = 19 + 5 = (10 + 9) + 5 = ((5 + 5) + 9) + 5 = 24.
answer: ((5 + 5) + 9) + 5 = 24.
--------------------------------------------------------------------------------

User:
8 6 4 4

Assistant:

Trying a promising first operation:
1. 8 - 6: (4, 4, 2)
- 4 + 4: (8, 2) 10, 6, 16, 4
- 4 - 4: (2, 0) 2, 2, 0, undefined
- 4 * 4: (16, 2) 18, 14, 32, 8
- 4 / 4: (2, 1) 3, 1, 2, 2
- 4 + 2: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 8 - 6 = 2
Step 2: 4 + 2 = 6
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = (4 + 2) * 4 = (4 + (8 - 6)) * 4 = 24.
answer: (4 + (8 - 6)) * 4 = 24.
--------------------------------------------------------------------------------

User:
13 10 9 4
Assistant:
Trying a promising first operation:
1. 13 - 4: (10, 9, 9)
- 10 + 9: (19, 9) 28, 10, 171, fractional
- 10 - 9: (9, 1) 10, 8, 9, 9
- 10 * 9: (90, 9) 99, 81, 810, fractional
- 10 / 9: fractional
- 9 + 9: (18, 10) 28, 8, 180, fractional
- 9 - 9: (10, 0) 10, 10, 0, undefined
- 9 * 9: (81, 10) 91, 71, 810, fractional
- 9 / 9: (10, 1) 11, 9, 10, 10
Trying another promising first operation:
2. 10 - 4: (13, 9, 6)
- 13 + 9: (22, 6) 28, 16, 132, fractional
- 13 - 9: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 10 - 4 = 6
Step 2: 13 - 9 = 4
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = 6 * (13 - 9) = (10 - 4) * (13 - 9) = 24.
answer: (10 - 4) * (13 - 9) = 24.
--------------------------------------------------------------------------------

User:
8 8 5 4

Assistant:

Trying a promising first operation:
1. 8 - 4: (8, 5, 4)
- 8 + 5: (13, 4) 17, 9, 52, fractional
- 8 - 5: (4, 3) 7, 1, 12, fractional
- 8 * 5: (40, 4) 44, 36, 160, fractional
- 8 / 5: fractional
- 8 + 4: (12, 5) 17, 7, 60, fractional
- 8 - 4: (5, 4) 9, 1, 20, fractional
- 8 * 4: (32, 5) 37, 27, 160, fractional
- 8 / 4: (5, 2) 7, 3, 10, 2.5
- 5 + 4: (9, 8) 17, 1, 72, fractional
- 5 - 4: (8, 1) 9, 7, 8, 8
- 5 * 4: (20, 8) 28, 12, 160, fractional
- 5 / 4: fractional
Trying another promising first operation:
2. 8 / 4: (8, 5, 2)
- 8 + 5: (13, 2) 15, 11, 26, fractional
- 8 - 5: (3, 2) 5, 1, 6, 1.5
- 8 * 5: (40, 2) 42, 38, 80, 20
- 8 / 5: fractional
- 8 + 2: (10, 5) 15, 5, 50, 2
- 8 - 2: (6, 5) 11, 1, 30, fractional
- 8 * 2: (16, 5) 21, 11, 80, fractional
- 8 / 2: (5, 4) 9, 1, 20, fractional
- 5 + 2: (7, 8) 15, 1, 56, fractional
- 5 - 2: (8, 3) 11, 5, 24 = 8 * 3 -> found it!
Backtracking the solution:
Step 1: 8 / 4 = 2
Step 2: 5 - 2 = 3
Step 3: 8 * 3 = 24
Considering these steps: 24 = 8 * 3 = 8 * (5 - 2) = 8 * (5 - (8 / 4)) = 24.
answer: 8 * (5 - (8 / 4)) = 24.
2308.10379 | 72 | + 11: (23, 1) 24 = 23 + 1 -> found it! Backtracking the solution: Step 1: 11 + 1 = 12 Step 2: 12 + 11 = 23 Step 3: 23 + 1 Considering these steps: 24 = 23 + 1 = (12 + 11) + 1 = ((11 + 1) + 11) + 1 = 24. answer: ((11 + 1) + 11) + 1 = 24. ËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËË User: 11 7 4 1 Assistant: Trying a promising first operation: 1. 7 * 4: (28, 11, 1) - 28 + 11: (39, 1) 40, 38, 39, 39 - 28 - 11: (17, 1) 18, 16, 17, 17 - 28 * 11: (308, 1) 309, 307, 308, 308 - 28 / 11: fractional - 28 + 1: (29, 11) 40, 18, 319, fractional - 28 - 1: (27, 11) 38, 16, | 2308.10379#72 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 74 | # User:
# Assistant:
Trying another promising first operation: 2. 7 + 1: (11 8 4) - 11 + 8: (19, 4) 23, 15, 76, fractional - 11 - 8: (4, 3) 7, 1, 12, fractional - 11 * 8: (88, 4) 92, 84, 352, fractional - 11 / 8: fractional - 11 + 4: (15, 8) 23, 7, 120, fractional - 11 - 4: (7, 8) 15, -1, 56, fractional - 11 * 4: (44, 8) 52, 36, 352, fractional - 11 / 4: fractional - 8 + 4: (12, 11) 23, -1, 132, fractional - 8 - 4: (11, 4) 15, 7, 44, fractional - 8 * 4: (32, 11) 43, 21, 352, fractional - 8 / 4: (11, 2) 13, 9, 22, fractional
Trying another promising first operation: | 2308.10379#74 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 75 | 3. 4 + 1: (11 7 5) - 11 + 7: (18, 5) 23, 13, 90, fractional - 11 - 7: (5, 4) 9, 1, 20, fractional - 11 * 7: (77, 5) 82, 72, 385, fractional - 11 / 7: fractional - 11 + 5: (16, 7) 23, 9, 112, fractional - 11 - 5: (7, 6) 13, 1, 42, fractional - 11 * 5: (55, 7) 62, 48, 385, fractional - 11 / 5: fractional - 7 + 5: (12, 11) 23, 1, 132, fractional - 7 - 5: (11, 2) 13, 9, 22, fractional - 7 * 5: (35, 11) 46, 24 = 35 - 11 -> found it! Step 1: 4 + 1 = 5 Step 2: 7 * 5 = 35 Step 3: 35 - 11 = 24 Considering these steps: Backtracking the solution: 24 = 35 - 11 = (7 * 5) - 11 = (7 * (4 + 1)) - 11 = 24. answer: (7 * (4 + 1)) - 11 = 24. | 2308.10379#75 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 76 | 35 - 11 = (7 * 5) - 11 = (7 * (4 + 1)) - 11 = 24. answer: (7 * (4 + 1)) - 11 = 24. ËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËË User: 11 5 4 3 Assistant: Trying a promising first operation: 1. 5 * 4: (20, 11, 3) - 20 + 11: (31, 3) 34, 28, 93, fractional - 20 - 11: (9, 3) 12, 6, 27, 3 - 20 * 11: (220, 3) 223, 217, 660, fractional - 20 / 11: fractional - 20 + 3: (23, 11) 34, 12, 253, fractional - 20 - 3: (17, 11) 28, 6, 187, fractional - 20 * 3: (60, 11) 71, 49, 660, fractional - 20 / 3: fractional - 11 + 3: (14, 20) 34, -6, | 2308.10379#76 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 77 | * 3: (60, 11) 71, 49, 660, fractional - 20 / 3: fractional - 11 + 3: (14, 20) 34, -6, 280, fractional - 11 - 3: (8, 20) 28, -12, 160, fractional - 11 * 3: (33, 20) 53, 13, 660, fractional - 11 / 3: fractional Trying another promising first operation: 2. 11 - 3: (8, 5, 4) - 8 + 5: (13, 4) 17, 9, 52, fractional - 8 - 5: (4, 3) 7, 1, 12, fractional - 8 * 5: (40, 4) 44, 36, 160, fractional - 8 / 5: fractional - 8 + 4: (12, 5) 17, 7, 60, fractional - 8 - 4: (5, 4) 9, 1, 20, fractional - 8 * 4: (32, 5) 37, 27, 160, fractional - 8 / 4: (5, 2) 7, 3, 10, 2.5 - 5 + 4: (9, 8) 17, 1, 72, fractional - 5 - 4: (8, 1) 9, 7, 8, 8 - 5 * 4: (20, | 2308.10379#77 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 79 | Trying another promising first operation: 3. 11 * 3: (33, 5, 4) - 33 + 5: (38, 4) 42, 34, 152, fractional - 33 - 5: (28, 4) 32, 24 = 28 - 4 -> found it! Backtracking the solution: Step 1: 11 * 3 = 33 Step 2: 33 - 5 = 28 Step 3:
13 12 5 2 Trying a promising first operation: 1. 12 + 2: (14, 13, 5) - 14 + 13: (27, 5) 32, 22, 135, fractional - 14 - 13: (5, 1) 6, 4, 5, 5 - 14 * 13: (182, 5) 187, 177, 910, fractional - 14 / 13: fractional - 14 + 5: (19, 13) 32, 6, 247, fractional - 14 - 5: (13, 9) 22, 4, 117, fractional - 14 * 5: (70, 13) 83, 57, 910, fractional - 14 / 5: fractional - 13 + 5: (18, 14) 32, 4, 252, fractional - 13 - 5: (14, 8) 22, 6, 112, fractional - 13 * 5: (65, 14) 79, 51, 910, fractional - 13 / 5: fractional 9 8 2 1 | 2308.10379#79 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 81 | # Assistant:
Trying another promising first operation: 2. 13 + 5: (18, 12, 2) - 18 + 12: (30, 2) 32, 28, 60, fractional - 18 - 12: (6, 2) 8, 4, 12, 3 - 18 * 12: (216, 2) 218, 214, 432, fractional - 18 / 12: fractional - 18 + 2: (20, 12) 32, 8, 240, fractional - 18 - 2: (16, 12) 28, 4, 192, fractional - 18 * 2: (36, 12) 48, 24 = 36 - 12 -> found it! Backtracking the solution: Step 1: 13 + 5 = 18 Step 2: 18 * 2 = 36 Step 3: 36 - 12 = 24 Considering these steps: 24 = 36 - 12 = (18 * 2) - 12 = ((13 + 5) * 2) - 12 = 24. answer: ((13 + 5) * 2) - 12 = 24. | 2308.10379#81 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 83 | # Assistant:
Trying a promising first operation: 1. 9 - 1: (8, 8, 2) - 8 + 8: (16, 2) 18, 14, 32, 8 - 8 - 8: (2, 0) 2, 2, 0, undefined - 8 * 8: (64, 2) 66, 62, 128, 32 - 8 / 8: (2, 1) 3, 1, 2, 2 - 8 + 2: (10, 8) 18, 2, 80, 4 - 8 - 2: (6, 8) 14, -2, 48, fractional - 8 * 2: (16, 8) 24 = 16 + 8 -> found it! Backtracking the solution: Step 1: 9 - 1 = 8 Step 2: 8 * 2 = 16 Step 3: 16 + 8 = 24 Considering these steps: 24 = 16 + 8 = (8 * 2) + 8 = ((9 - 1) * 2) + 8 = 24. answer: ((9 - 1) * 2) + 8 = 24. | 2308.10379#83 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 84 | ËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËËË
# AoT (Long)
# System:
Use numbers and basic arithmetic operations (+ - * /) to obtain 24. When considering the next steps, do not choose operations that will result in a negative or fractional number. In order to help with the calculations, the numbers in the parenthesis represent the numbers that are left after the operations and they are in descending order.
Another thing we do is when there are only two numbers left in the parenthesis, we
check whether we can arrive at 24 only by using basic arithmetic operations (+ - * /). Some examples regarding this idea:
(21 2) no since 21 + 2 = 23, 21 - 2 = 19, 21 * 2 = 42, 21 / 2 = 10.5, none of which is equal
to 24.
(30 6) 30 - 6 = 24 yes (8 3) 8 * 3 = 24 yes (12 8) no (48 2) 48 / 2 = 24 yes | 2308.10379#84 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 85 | solution. 14 8 8 2 Trying a promising first operation: 1. 8 + 2: (14, 10, 8) - 14 + 10: (24, 8) 32, 16, 192, 3 - 14 - 10: (8, 4) 12, 8, 32, 2 - 14 * 10: (140, 8) 148, 132, 1120, fractional - 14 / 10: (8, 1.4) fractional - 14 + 8: (22, 10) 32, 12, 220, fractional - 14 - 8: (10, 6) 16, 4, 60, fractional - 14 * 8: (112, 10) 122, 102, 1120, fractional - 14 / 8: fractional - 10 + 8: (18, 14) 32, 4, 252, fractional - 10 - 8: (14, 2) 16, 12, 28, 7 - 10 * 8: (80, 14) 94, 66, big, fractional - 10 / 8: fractional Trying another promising first operation: 2. 14 + 8: (22, 8, 2) - 22 + 8: (30, 2) 32, 28, 60, 15 - 22 - 8: (14, 2) 16, 12, 28, 7 - 22 * 8: (176, 2) 178, 174, 88 - 22 / | 2308.10379#85 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 86 | 28, 60, 15 - 22 - 8: (14, 2) 16, 12, 28, 7 - 22 * 8: (176, 2) 178, 174, 88 - 22 / 8: (2.75, 2) fractional - 22 + 2: (24, 8) 32, 16, 192, 3 - 22 - 2: (20, 8) 28, 12, 160, fractional - 22 * 2: (44, 8) 52, 36, 352, fractional - 22 / 2: (11, 8) 19, 3, 88, fractional - 8 + 2: (22, 10) 32, 12, 220, fractional - 8 - 2: (22, 6) 28, 16, 132, fractional - 8 * 2: (22, 16) 38, 6, 352, fractional - 8 / 2: (22, 4) 26, 18, 88, fractional Trying another promising first operation: 3. 14 + 2: (16, 8, 8) - 16 + 8: (24, 8) 32, 16, 192, 3 - 16 - 8: (8, 8) 16, 0, 64, 1 - 16 * 8: (128, 8) 136, 120, 1024, 16 - 16 / 8: (8, 2) 10, 6, 16, 4 - 8 + 8: (16, | 2308.10379#86 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
2308.10379 | 89 | 4. 8 - 2: (14, 8, 6) - 14 + 8: (22, 14) 36, 8, 308, fractional - 14 - 8: (6, 6) 12, 0, 36, 1 - 14 * 8: (112, 6) 118, 106, 672, fractional - 14 / 8: (6, 1.75) fractional - 14 + 6: (20, 8) 22, 12, 160, fractional - 14 - 6: (8, 8) 16, 0, 64, 1 - 14 * 6: (84, 8) 92, 76, 672, fractional - 14 / 6: (8, 2.3) fractional - 8 + 6: (14, 14) 28, 0, 196, 1 - 8 - 6: (14, 2) 16, 12, 28, 7 - 8 * 6: (48, 14) 62, 34, 672, fractional - 8 / 6: (14, 1.3) fractional Trying another promising first operation: 5. 8 * 2: (16, 14, 8) - 16 + 14: (30, 8) 38, 22, 240, fractional - 16 - 14: (8, 2) 10, 6, 16, 4 - 16 * 14: (224, 8) 232, 216, 1792, 28 - 16 / | 2308.10379#89 | Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Current literature, aiming to surpass the "Chain-of-Thought" approach, often
- 16 - 14: (8, 2) 10, 6, 16, 4
- 16 * 14: (224, 8) 232, 216, 1792, 28
- 16 / 14: (8, 1.1) fractional
- 16 + 8: (24, 14) 38, 10, 336, fractional
- 16 - 8: (14, 8) 22, 6, 112, fractional
- 16 * 8: (128, 14) 142, 112, 1792, fractional
- 16 / 8: (14, 2) 16, 12, 28, 7
- 14 + 8: (22, 16) 38, 6, 352, fractional
- 14 - 8: (16, 6) 22, 10, 96, fractional
- 14 * 8: (112, 16) 128, 96, 1792, 7
- 14 / 8: (16, 1.7) fractional
Trying another promising first operation: 6. 14 * 2: (28, 8, 8)
- 28 + 8: (36, 8) 44, 28, 288, fractional
- 28 - 8: (20, 8) 28, 12, 160, fractional
- 28 * 8: (224, 8) 232, 216, 1792, 28
- 28 / 8: (8, 3.5) fractional, fractional, 28, fractional
Trying another promising first operation: 7. 8 / 2: (14, 8, 4)
- 14 + 8: (22, 4) 26, 18, 88, fractional
- 14 - 8: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 8 / 2 = 4
Step 2: 14 - 8 = 6
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = (14 - 8) * 4 = (14 - 8) * (8 / 2) = 24.
answer: (14 - 8) * (8 / 2) = 24.
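The bookkeeping on each line of the trace above, e.g. "- 14 - 8: (6, 4) 10, 2, 24 = 6 * 4 -> found it!", is simply the list of results from combining the two remaining numbers. A minimal sketch of that check, assuming Python; the names `pair_outcomes` and `reaches_24` are illustrative and not part of the prompt or the paper's code:

```python
from fractions import Fraction

def pair_outcomes(a: Fraction, b: Fraction):
    """All results of a+b, a-b, a*b, a/b, mirroring the numbers printed
    after each two-number state in the trace (e.g. '(6, 4) 10, 2, 24')."""
    outcomes = [a + b, a - b, a * b]
    if b != 0:
        outcomes.append(a / b)
    return outcomes

def reaches_24(a, b):
    """True when one basic operation on the final pair yields 24,
    which is where the trace prints '-> found it!'."""
    return any(v == 24 for v in pair_outcomes(Fraction(a), Fraction(b)))

print(reaches_24(6, 4))    # True: 6 * 4 = 24
print(reaches_24(14, 2))   # False: 16, 12, 28, 7
```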
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User:
9 5 5 5
# Assistant:
- 5 * 4: (20, 5) 25, 15, 100, fractional
- 5 / 4: fractional
Trying another promising first operation: 2. 5 * 5: (25, 9, 5)
- 25 + 9: (34, 5) 39, 29, 170, fractional
- 25 - 9: (16, 5) 21, 11, 80, fractional
- 25 * 9: (225, 5) 230, 220, 1125, 45
- 25 / 9: (5, 2.7) fractional
- 25 + 5: (30, 9) 39, 21, 270, fractional
- 25 - 5: (20, 9) 29, 11, 180, fractional
- 25 * 5: (75, 9) 84, 66, 675, fractional
- 25 / 5: (9, 5) 14, 4, 45, fractional
- 9 + 5: (25, 14) 39, 11, 350, fractional
- 9 - 5: (25, 4) 29, 21, 100, fractional
- 9 * 5: (45, 25) 70, 20, 1125, fractional
- 9 / 5: (25, 1.8) fractional, fractional, 45, fractional
Trying another promising first operation: 3. 5 - 5: (9, 5, 0)
- 9 + 5: (25, 14) 39, 11, 350, fractional
- 9 - 5: (25, 4) 29, 21, 100, fractional
- 9 * 5: (45, 25) 70, 20, 1125, fractional
- 9 / 5: (25, 1.8) fractional, fractional, 45, fractional
- 9 + 0: (9, 5) 14, 4, 45, fractional
- 9 - 0: (9, 5) 14, 4, 45, fractional
- 9 * 0: (5, 0) 5, 5, 0, undefined
- 9 / 0: undefined
- 5 + 0: (9, 5) 14, 4, 45, fractional
- 5 - 0: (9, 5) 14, 4, 45, fractional
- 5 * 0: (9, 0) 9, 9, 0, undefined
- 5 / 0: undefined
Trying another promising first operation: 4. 5 / 5: (9, 5, 1)
- 9 + 5: (25, 14) 39, 11, 350, fractional
- 9 - 5: (25, 4) 29, 21, 100, fractional
- 9 * 5: (45, 25) 70, 20, 1125, fractional
- 9 / 5: (25, 1.8) fractional, fractional, 45, fractional
- 9 + 1: (10, 5) 15, 5, 50, 2
- 9 - 1: (8, 5) 13, 3, 40, fractional
- 9 * 1: (9, 5) 14, 4, 45, fractional
- 9 / 1: (9, 5) 14, 4, 45, fractional
- 5 + 1: (9, 6) 15, 3, 54, fractional
- 5 - 1: (9, 4) 13, 5, 36, fractional
- 5 * 1: (9, 5) 14, 4, 45, fractional
- 5 / 1: (9, 5) 14, 4, 45, fractional
Trying another promising first operation: 5. 9 * 5: (45, 5, 5)
- 45 + 5: (50, 5) 55, 45, 250, 10
- 45 - 5: (40, 5) 45, 35, 200, 8
- 45
- 5 * 4: (20, 5) 25, 15, 100, fractional
- 5 / 4: fractional
Trying another promising first operation: 6. 5 + 5: (10, 9, 5)
- 10 + 9: (19, 5) 24 = 19 + 5 -> found it!
Backtracking the solution:
Step 1: 5 + 5 = 10
Step 2: 10 + 9 = 19
Step 3: 19 + 5 = 24
Considering these steps: 24 = 19 + 5 = (10 + 9) + 5 = ((5 + 5) + 9) + 5 = 24.
answer: ((5 + 5) + 9) + 5 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User:
8 6 4 4
Trying a promising first operation: 1. 8 * 6: (48, 4, 4)
- 48 + 4: (52, 4) 56, 48, 208, 13
- 48 - 4: (44, 4) 48, 40, 176, 11
- 48 * 4: (192, 4) 196, 188, 768, 48
- 48 / 4: (12, 4) 16, 8, 48, 3
- 4 + 4: (48, 8) 56, 40, 384, 6
- 4 - 4: (48, 0) 48, 48, 0, undefined
- 4 * 4: (48, 16) 64, 32, 768, 3
- 4 / 4: (48, 1) 49, 47, 48, 48
Trying another promising first operation: 2. 4 - 4: (8, 6, 0)
- 8 + 6: (14, 0) 14, 14, 0, undefined
- 8 - 6: (2, 0) 2, 2, 0, undefined
- 8 * 6: (48, 0) 48, 48, 0, undefined
- 8 / 6: (1.3, 0) fractional
- 8 + 0: (8, 6) 14, 2, 48, fractional
- 8 - 0: (8, 6) 14, 2, 48, fractional
- 8 * 0: (6, 0) 6, 6, 0, undefined
- 8 / 0: undefined
- 6 + 0: (8, 6) 14, 2, 48, fractional
- 6 - 0: (8, 6) 14, 2, 48, fractional
- 6 * 0: (8, 0) 8, 8, 0, undefined
- 6 / 0: undefined
Trying another promising first operation: 3. 4 / 4: (8, 6, 1)
- 8 + 6: (14, 1) 15, 13, 14, 14
- 8 - 6: (2, 1) 3, 1, 2, 2
- 8 * 6: (48, 1) 49, 47, 48, 48
- 8 / 6: (1.3, 1) fractional
- 8 + 1: (9, 6) 15, 3, 54, fractional
- 8 - 1: (7, 6) 13, 1, 42, fractional
- 8 * 1: (8, 6) 14, 2, 48, fractional
- 8 / 1: (8, 6) 14, 2, 48, fractional
- 6 + 1: (8, 7) 15, 1, 56,
# Assistant:
Trying another promising first operation: 4. 8 - 6: (4, 4, 2)
- 4 + 4: (8, 2) 10, 6, 16, 4
- 4 - 4: (2, 0) 2, 2, 0, undefined
- 4 * 4: (16, 2) 18, 14, 32, 8
- 4 / 4: (2, 1) 3, 1, 2, 2
- 4 + 2: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 8 - 6 = 2
Step 2: 4 + 2 = 6
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = (4 + 2) * 4 = (4 + (8 - 6)) * 4 = 24.
answer: (4 + (8 - 6)) * 4 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User:
13 10 9 4
Assistant:
Trying a promising first operation: 1. 13 - 4: (10, 9, 9)
- 10 + 9: (19, 9) 28, 10, 171, fractional
- 10 - 9: (9, 1) 10, 8, 9, 9
- 10 * 9: (90, 9) 99, 81, 810, fractional
- 10 / 9: fractional
- 9 + 9: (18, 10) 28, 8, 180, fractional
- 9 - 9: (10, 0) 10, 10, 0, undefined
- 9 * 9: (81, 10) 91, 71, 810, fractional
- 9 / 9: (10, 1) 11, 9, 10, 10
Trying another promising first operation: 2. 13 / 10: (9, 4, 1.3)
- 9 + 4: (13, 1.3) fractional, fractional, fractional, 10
- 9 - 4: (5, 1.3) fractional
- 9 * 4: (36, 1.3) fractional
- 9 / 4: (2.3, 1.3) fractional, 1, fractional, fractional
- 9 + 1.3: (10.3, 4) fractional
- 9 - 1.3: (7.7, 4) fractional
- 9 * 1.3: (11.7, 4) fractional
- 9 / 1.3: (6.9, 4) fractional
- 4 + 1.3: (9, 5.3) fractional
- 4 - 1.3: (9, 2.7) fractional
- 4 * 1.3: (9, 5.2) fractional
- 4 / 1.3: (9, 3.1) fractional
Trying another promising first operation: 3. 9 / 4: (13, 10, 2.3)
- 13 + 10: (23, 2.3) fractional, fractional, fractional, 10
- 13 - 10: (3, 2.3) fractional
- 13 * 10: (130, 2.3) fractional
- 13 / 10: (2.3, 1.3) fractional, 1, fractional, fractional
- 13 + 2.3: (15.3, 10) fractional, fractional, 153, fractional
- 13 - 2.3: (11.7, 10) fractional, fractional, 117, fractional
- 13 * 2.3: (29.9, 10) fractional, fractional, 299, fractional
- 13 / 2.3: (10, 5.6) fractional, fractional, 560, fractional
- 10 + 2.3: (13, 12.3) fractional
- 10 - 2.3: (13, 7.7) fractional
- 10 * 2.3: (23, 13) 36, 10, 299, fractional
- 10 / 2.3: (13, 4.3) fractional
Trying another promising first operation: 4. 13 / 4: (10, 9, 3.3)
- 10 + 9: (19, 3.3) fractional
- 10 - 9: (3.3, 1) fractional
- 10 * 9: (90, 3.3) fractional
- 10 / 9: (3.3, 1.1) fractional, fractional, fractional, 3
- 10 + 3.3: (13.3, 9) fractional
- 10 - 3.3: (9, 6.7) fractional
- 10 * 3.3: (33, 9) 42, 24, 297, fractional
- 10 / 3.3: (3.1, 9) fractional
- 9 + 3.3: (12.3, 10) fractional, fractional, 123, fractional
- 9 - 3.3: (10, 5.7) fractional, fractional, 57, fractional
- 9 * 3.3: (29.7, 10) fractional, fractional, 297, fractional
- 9 / 3.3: (10, 2.7) fractional, fractional, 27, fractional
Trying another promising first operation: 5. 13 / 9: (10, 9, 1.4)
- 10 - 1.4: (9, 8.6) fractional
- 10 * 1.4: (140, 9) 149, 131, 1260, fractional
- 10 / 1.4: (9, 7.1) fractional
- 9 + 1.4: (10.4, 10) fractional, fractional, 104, fractional
- 9 - 1.4: (10, 7.6) fractional, fractional, 76, fractional
- 9 * 1.4: (12.6, 10) fractional, fractional, 126, fractional
- 9 / 1.4: (10, 6.4) fractional, fractional, 64, fractional
Trying another promising first operation: 6. 10 - 4: (13, 9, 6)
- 13 + 9: (22, 6) 28, 16, 132, fractional
- 13 - 9: (6, 4) 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 10 - 4 = 6
Step 2: 13 - 9 = 4
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = 6 * (13 - 9) = (10 - 4) * (13 - 9) = 24.
answer: (10 - 4) * (13 - 9) = 24.
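For reference, the depth-first procedure narrated by the examples above can also be written out directly. The sketch below is an illustrative Python implementation added here for clarity, not code from the paper; the names `dfs_24` and `solve` are assumptions, and the pruning of negative and fractional intermediate values follows the rule stated in the prompt.

```python
from fractions import Fraction
from itertools import permutations

# The four basic operations used throughout the prompt.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else None,
}

def dfs_24(nums, exprs):
    """Depth-first search mirroring the trace: combine two remaining numbers,
    recurse on the reduced list, and backtrack when a branch fails.
    Returns one expression that evaluates to 24, or None."""
    if len(nums) == 1:
        return exprs[0] if nums[0] == 24 else None
    for i, j in permutations(range(len(nums)), 2):
        keep = [k for k in range(len(nums)) if k not in (i, j)]
        for sym, fn in OPS.items():
            val = fn(nums[i], nums[j])
            # Prune undefined, negative and fractional intermediate values,
            # as the system prompt instructs.
            if val is None or val < 0 or val.denominator != 1:
                continue
            hit = dfs_24(
                [nums[k] for k in keep] + [val],
                [exprs[k] for k in keep] + [f"({exprs[i]} {sym} {exprs[j]})"],
            )
            if hit is not None:
                return hit
    return None

def solve(numbers):
    return dfs_24([Fraction(n) for n in numbers], [str(n) for n in numbers])

# Each call prints one expression that evaluates to 24, fully parenthesized.
print(solve([14, 8, 8, 2]))
print(solve([13, 10, 9, 4]))
```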
number.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User: 4 4 6 8.
Assistant:
Trying a promising first operation: (4 + 4) * 6 - 8 = 40.
Trying another promising first operation: (8 - 6) * 4 + 4 = 12.
Trying another promising first operation: (6 - 4) * (4 + 8) = 24.
answer: (6 - 4) * (4 + 8) = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User: 2 9 10 12.
Assistant:
Trying a promising first operation: ((12 - 10) / 2) + 9 = 10.
Trying another promising first operation: (10 + 9) + (12 / 2) = 25.
Trying another promising first operation: (12 * 2) * (10 - 9) = 24.
answer: (12 * 2) * (10 - 9) = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User: 4 9 10 13.
Assistant:
Trying a promising first operation: ((13 - 10) / 4) + 9 = 10.
Trying another promising first operation: 4 * (9 - (13 - 10)) = 24.
answer: 4 * (9 - (13 - 10)) = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User: 1 4 8 8.
Assistant:
Trying a promising first operation: (8 + 1) + 4 * 8 + 32.
Trying another promising first operation: (1 + 8 / 4) * 8 = 24.
answer: (1 + 8 / 4) * 8 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User: 5 5 5 9.
Assistant:
Trying a promising first operation: ((5 + 5) + 5) + 9 = 24.
answer: ((5 + 5) + 5) + 9 = 24.
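For these short-form completions, what ultimately gets scored is the final "answer:" expression. A minimal checker of the kind one might use for that evaluation, assuming Python; `check_answer` is an illustrative name and this is not the paper's evaluation code:

```python
import re

def check_answer(expr: str, numbers) -> bool:
    """Verify a proposed 'answer' expression: it must use exactly the given
    numbers (as a multiset) and evaluate to 24."""
    used = sorted(int(tok) for tok in re.findall(r"\d+", expr))
    if used != sorted(numbers):
        return False
    if not re.fullmatch(r"[\d\s()+\-*/]+", expr):
        return False  # only a bare arithmetic expression is evaluated
    return abs(eval(expr) - 24) < 1e-9

print(check_answer("(6 - 4) * (4 + 8)", [4, 4, 6, 8]))  # True
print(check_answer("(8 - 6) * 4 + 4", [4, 4, 6, 8]))    # False, equals 12
```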
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# AoT (BFS)
# System:
Use numbers and basic arithmetic operations (+ - * /) to obtain 24. When considering the next steps, do not choose operations that will result in a negative or fractional number. In order to help with the calculations, the numbers in the parenthesis represent the numbers that are left after the operations and they are in descending order.
Another thing we do is when there are only two numbers left in the parenthesis, we check whether we can arrive at 24 only by using basic arithmetic operations (+ - * /). Some examples regarding this idea:
(21, 2): 23, 19, 42, fractional
(30, 6): 36, 24 = 30 - 6 -> found it!
(8, 3): 11, 5, 24 = 8 * 3 yes
(12, 8): 20, 4, 72, fractional
In the second steps, when there are three numbers left, we choose the most promising operations so that when we are left with two numbers, we will be able to get to 24.
So, when we reach 24 in the third steps, that means we found the solution, we directly backtrack to write the answer.
In each step, we want to choose the best operations so that the numbers left, will have a good chance to reach 24.
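As an illustration of the two-number check this system message describes (not part of the prompt itself), the same bookkeeping can be written as a small helper; the name `describe_pair` and the exact output format are assumptions made here:

```python
from fractions import Fraction

def describe_pair(a: int, b: int) -> str:
    """Reproduce the '(a, b): ...' check from the system prompt: list the
    results of a+b, a-b, a*b, a/b and flag a hit when one of them is 24."""
    results = []
    hit = None
    for sym, val in (("+", a + b), ("-", a - b), ("*", a * b),
                     ("/", Fraction(a, b) if b else None)):
        if val is None:
            results.append("undefined")
        elif isinstance(val, Fraction) and val.denominator != 1:
            results.append("fractional")
        else:
            results.append(str(int(val)))
        if val == 24 and hit is None:
            hit = f"24 = {a} {sym} {b} -> found it!"
    return f"({a}, {b}): " + ", ".join(results) + (f" ({hit})" if hit else "")

print(describe_pair(21, 2))  # (21, 2): 23, 19, 42, fractional
print(describe_pair(30, 6))  # (30, 6): 36, 24, 180, 5 (24 = 30 - 6 -> found it!)
print(describe_pair(8, 3))   # (8, 3): 11, 5, 24, fractional (24 = 8 * 3 -> found it!)
```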
# User:
14 8 8 2
Let's consider the most promising possible first steps:
1. 8 + 8: (16, 14, 2)
2. 14 - 8: (8, 6, 2)
3. 14 + 2: (16, 8, 8)
4. 8 / 2: (14, 8, 4)
Let's consider the most promising second steps:
1. (16, 14, 2)
- 16 - 14: (2, 2)
- 16 / 2: (8, 14)
- 14 * 2: (28, 16)
- 14 / 2: (7, 16)
2. (8, 6, 2)
- 8 - 6: (2, 2)
- 8 / 2: (4, 6)
- 6 * 2: (12, 8)
- 6 / 2: (3, 8)
# Assistant:
3. (16, 8, 8)
- 16 - 8: (8, 8)
- 16 / 8: (2, 8)
- 8 * 8: (64, 16)
- 8 / 8: (1, 16)
4. (14, 8, 4)
- 14 - 8: (6, 4)
- 14 / 4: (3.5, 8)
- 8 * 4: (32, 14)
- 8 / 4: (2, 14)
Let's consider the most promising third steps:
1. 8 + 8 = 16
- 16 - 14: (2, 2): 4, 0, 4, 1
- 16 / 2: (8, 14): 22, 6, 112, fractional
- 14 * 2: (28, 16): 44, 12, 448, fractional
- 14 / 2: (7, 16): 23, 9, 112, fractional
2. 14 - 8 = 6
- 8 - 6: (2, 2): 4, 0, 4, 1
- 8 / 2: (6, 4): 10, 2, 24 = 6 * 4 -> found it!
Backtracking the solution:
Step 1: 14 - 8 = 6
Step 2: 8 / 2 = 4
Step 3: 6 * 4 = 24
Considering these steps: 24 = 6 * 4 = (14 - 8) * (8 / 2) = 24.
answer: (14 - 8) * (8 / 2) = 24.
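The "Considering these steps" line above rewrites the final equation by substituting each intermediate result with the expression that produced it. A minimal sketch of that substitution, assuming Python; `backtrack_expression` is an illustrative helper, not something from the paper:

```python
def backtrack_expression(steps):
    """Substitute earlier step results into later steps, turning a list like
    ['14 - 8 = 6', '8 / 2 = 4', '6 * 4 = 24'] into '(14 - 8) * (8 / 2) = 24'."""
    produced = {}  # result string -> expression that produced it
    final = None
    for step in steps:
        lhs, result = [s.strip() for s in step.split("=")]
        a, op, b = lhs.split()
        a = f"({produced[a]})" if a in produced else a
        b = f"({produced[b]})" if b in produced else b
        produced[result] = f"{a} {op} {b}"
        final = f"{produced[result]} = {result}"
    return final

print(backtrack_expression(["14 - 8 = 6", "8 / 2 = 4", "6 * 4 = 24"]))
# (14 - 8) * (8 / 2) = 24
```

This string-level substitution is only meant to mirror the rewriting pattern in the trace; it becomes ambiguous when an intermediate result collides with one of the original numbers.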
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User:
9 5 5 5
Assistant:
Let's consider the most promising first steps:
1. 9 - 5: (5, 5, 4)
2. 5 + 5: (9, 10, 5)
3. 9 + 5: (14, 5, 5)
4. 5 - 5: (9, 5, 0)
Let's consider the most promising second steps:
1. (5, 5, 4)
- 5 + 5: (10, 4)
- 5 * 4: (20, 5)
- 5 - 4: (5, 1)
- 5 / 4: (5, 1.25)
2. (9, 10, 5)
- 10 + 9: (19, 5)
- 10 - 5: (9, 5)
- 10 * 5:
3. (14, 5, 5)
- 14 - 5: (9, 5)
- 14 + 5: (19, 5)
- 5 * 5: (25, 14)
- 5 / 5: (14, 1)
4. (9, 5, 0)
- 9 + 5: (14, 0)
- 9 - 5: (4, 0)
- 5 * 0: (9, 0)
- 5 / 0: (9, infinity)
Let's consider the most promising third steps:
1. 9 - 5 = 4
- 5 + 5: (10, 4): 14, 6, 40, fractional
- 5 * 4: (20, 5): 25, 15, 100, 4
- 5 - 4: (5, 1): 6, 4, 5, 5
- 5 / 4: (5, 1.25): 6.25, 3.75, 6.25, 4
2. 5 + 5 = 10
- 10 + 9: (19, 5): 24 = 19 + 5 -> found it!
Backtracking the solution:
Step 1: 5 + 5 = 10
Step 2: 10 + 9 = 19
Step 3: 19 + 5 = 24
Considering these steps: 24 = 19 + 5 = (10 + 9) + 5 = ((5 + 5) + 9) + 5 = 24.
answer: ((5 + 5) + 9) + 5 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User:
8 6 4 4
Let's consider the most promising first steps:
1. 8 * 6: (48, 4, 4)
2. 8 + 4: (12, 6, 4)
3. 8 - 6: (4, 4, 2)
4. 6 - 4: (8, 4, 2)
Let's consider the most promising second steps:
1. (48, 4, 4)
- 4 * 4: (48, 16)
- 48 / 4: (12, 4)
- 4 + 4: (48, 8)
- 48 - 4: (44, 4)
2. (12, 6, 4)
- 12 + 6: (18, 4)
- 6 * 4: (24, 12)
- 6 + 4: (12, 10)
- 12 / 4: (6, 3)
3. (4, 4, 2)
- 4 * 4: (16, 2)
- 4 + 2: (6, 4)
- 4 + 4: (8, 2)
- 4 * 2: (8, 4)
4. (8, 4, 2)
- 8 * 4: (32, 2)
- 4 * 2: (8, 8)
- 8 + 4: (12, 2)
- 8 / 4: (4, 2)
Let's consider the most promising third steps:
1. 8 * 6 = 48
- 4 *
2308.10379 | 124 | 13 10 9 4 Let's consider the most promising first steps: 1. 13 - 4: (10, 9, 9) 2. 10 - 4: (13, 9, 6) 3. 13 + 9: (22, 10, 4) 4. 10 - 9: (13, 4, 1) Let's consider the most promising second steps: 1. (10, 9, 9) - 10 + 9: (19, 9) - 10 - 9: (9, 1) - 9 + 9: (18, 10) - 9 / 9: (9, 1) 2. (13, 9, 6) - 9 + 6: (15, 13) - 9 * 6: (54, 13) - 13 - 9: (6, 4) - 13 - 6: (9, 7) 3. (22, 10, 4) - 22 - 10: (12, 4) - 22 - 4: (18, 10) - 10 * 4: (40, 22) - 10 / 4: (22, 5.5) 4. (13, 4, 1) - 13 - 4: (9, 1) - 13 * 4: (52, 1) - 4 - 1: (13, 3) - 13 - 1: (12, 4) Let's consider the most promising third steps: 1. 13 - 4 = 9 - 10
2308.10379 | 125 | 4 - 1: (13, 3) - 13 - 1: (12, 4) Let's consider the most promising third steps: 1. 13 - 4 = 9 - 10 + 9: (19, 9): 28, 10, 171, fractional - 10 - 9: (9, 1): 10, 8, 9, 9 - 9 + 9: (18, 10): 28, 8, 180, fractional - 9 / 9: (9, 1): 10, 8, 9, 9 2. 10 - 4 = 6 - 9 + 6: (15, 13): 28, 2, 195, fractional - 9 * 6: (54, 13): 67, 41, 702, fractional - 13 - 9: (6, 4): 10, 2, 24 = 6 * 4 -> found it! Backtracking the solution: Step 1: 13 - 9 = 4 Step 2: 10 - 4 = 6 Step 3:
2308.10379 | 127 | 6 * 4 = 24 Considering these steps: 24 = 6 * 4 = (10 - 4) * (13 - 9) = 24. answer: (10 - 4) * (13 - 9) = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User: 8 8 5 4 Assistant: Let's consider the most promising first steps: 1. 5 + 4: (9, 8, 8) 2. 8 / 4: (8, 5, 2) 3. 8 - 5: (8, 4, 3) 4. 8 / 8: (5, 4, 1) Let's consider the most promising second steps: 1. (9, 8, 8) - 9 + 8: (17, 8) - 8 + 8: (16, 9) - 9 * 8: (72, 8) - 8 / 8: (9, 1) 2. (8, 5, 2) - 5 - 2: (8, 3) - 8 - 2: (6, 5) - 8 + 5:
2308.10379 | 128 | (9, 1) 2. (8, 5, 2) - 5 - 2: (8, 3) - 8 - 2: (6, 5) - 8 + 5: (13, 2) - 5 * 2: (10, 8) 3. (8, 4, 3) - 8 - 4: (4, 3) - 8 - 3: (5, 4) - 4 * 3: (12, 8) - 4 + 3: (8, 7) 4. (5, 4, 1) - 5 + 4: (9, 1) - 5 - 4: (1, 1) - 4 - 1: (5, 3) - 5 + 1: (6, 4) Let's consider the most promising third steps: 1. 5 + 4 = 9 - 9 + 8: (17, 8): 25, 9, 136, fractional - 8 + 8: (16, 9): 25, 7, 144, fractional - 9 * 8: (72, 8): 80, 64, 576, 9 - 8 / 8: (9, 1): 10, 8, 9, 9 2. 8 / 4 = 2 - 5 - 2: (8, 3): 11, 5, 24 = 8 * 3 -> found it! Backtracking the solution: Step 1: 8 / 4 = 2 Step 2: 5 - 2 = 3 Step 3: 8
2308.10379 | 130 | 6 * 4 = 24 Considering these steps: 24 = 6 * 4 = (10 - 4) * (13 - 9) = 24. answer:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User:
11 11 1 1
# Assistant:
Let's consider the most promising first steps: 1. 11 - 1: (11, 10, 1) 2. 11 + 1: (12, 11, 1) 3. 11 + 11: (22, 1, 1) 4. 1 + 1: (11, 11, 2)
2308.10379 | 131 | Let's consider the most promising second steps: 1. (11, 10, 1) - 11 + 10: (21, 1) - 10 + 1: (11, 11) - 10 / 1: (11, 10) - 11 / 1: (11, 10) 2. (12, 11, 1) - 12 + 11: (23, 1) - 11 + 1: (12, 11) - 11 - 1: (12, 10) - 12 + 1: (13, 11) 3. (22, 1, 1) - 22 + 1: (23, 1) - 1 + 1: (22, 2) - 22 / 1: (22, 1) - 1 / 1: (22, 1) 4. (11, 11, 2) - 11 + 11: (22, 2) - 11 * 2: (22, 11) - 11 + 2: (13, 11) - 2 * 11: (22, 11) Let's consider the most promising third steps: 1. 11 - 1 = 10 - 11 + 10: (21, 1): 22, 20, 21, 20 - 10 + 1: (11, 11): 22, 10, 121, 0 - 10 / 1: (11, 10): 21, 1, 110, 0 - 11 / 1: (11, 10): 21, 1, 110, 0 11 7 4 1
2308.10379 | 132 | 2. 11 + 1 = 12 - 12 + 11: (23, 1): 24 = 23 + 1 -> found it! Backtracking the solution: Step 1: 11 + 1 = 12 Step 2: 12 + 11 = 23 Step 3: 23 + 1 = 24 Considering these steps: 24 = 23 + 1 = (12 + 11) + 1 = ((11 + 1) + 11) + 1 = 24. answer: ((11 + 1) + 11) + 1 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User:
# Assistant:
Let's consider the most promising first steps: 1. 7 * 4: (28, 11, 1) 2. 7 + 1: (11, 8, 4) 3. 4 + 1: (11, 7, 5) 4. 11 - 4: (7, 3, 1)
2308.10379 | 134 | 8 - 4: (7, 11)
3. (11, 7, 5) - 11 - 5: (7, 6) - 7 - 5: (11, 2) - 7 * 5: (35, 11) - 11 + 5: (16, 7) 4. (7, 3, 1) - 7 - 3: (4, 1) - 7 * 3: (21, 1) - 3 + 1: (7, 4) - 7 - 1: (6, 3) Let's consider the most promising third steps: 1. 7 * 4 = 28 - 28 - 11: (17, 1): 18, 16, 17, 17 - 28 - 1: (27, 11): 38, 16, 297, 2.45 - 11 + 1: (29, 28): 57, 1, 812, 1.03 - 11 - 1: (28, 10): 38, 18, 280, 2.8 2. 7 + 1 = 8 - 11 + 8: (19, 4): 23, 15, 76, 4.75 - 8 + 4: (12, 11): 23, 7, 132, 3 - 11 - 8: (4, 3): 7, 1, 12, 1.33 - 8 - 4: (7, 11): 18, 4, 77, 1.75 11 5 4 3
2308.10379 | 135 | 3. 4 + 1 = 5 - 11 - 5: (7, 6): 13, 1, 42, 1.17 - 7 - 5: (11, 2): 13, 9, 22, 5.5 - 7 * 5: (35, 11): 46, 24 = 35 - 11 -> found it! Backtracking the solution: Step 1: 4 + 1 = 5 Step 2: 7 * 5 = 35 Step 3: 35 - 11 = 24 Considering these steps: 24 = 35 - 11 = (7 * 5) - 11 = (7 * (4 + 1)) - 11 = 24. answer: (7 * (4 + 1)) - 11 = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
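Each "most promising ... steps" list is one level of the same expansion: choose two of the remaining numbers, apply one operation, and carry the rest forward, skipping negative and fractional results as the system prompt instructs. The sketch below (an illustration assuming integer inputs, not the paper's implementation) enumerates those successor states; ranking which of them are "most promising" is left to the model.

```python
from itertools import combinations

def expand(numbers):
    """Yield (step description, remaining numbers) for one arithmetic step."""
    numbers = list(numbers)
    for i, j in combinations(range(len(numbers)), 2):
        a, b = max(numbers[i], numbers[j]), min(numbers[i], numbers[j])
        rest = [n for k, n in enumerate(numbers) if k not in (i, j)]
        candidates = [(f"{a} + {b}", a + b), (f"{a} * {b}", a * b), (f"{a} - {b}", a - b)]
        if b != 0 and a % b == 0:
            candidates.append((f"{a} / {b}", a // b))  # keep only whole-number quotients
        for step, value in candidates:
            yield step, tuple(sorted(rest + [value], reverse=True))

for step, state in expand((11, 5, 4, 3)):
    print(f"{step}: {state}")
# Among the printed successors are "11 * 3: (33, 5, 4)", "5 + 4: (11, 9, 3)",
# "11 - 4: (7, 5, 3)" and "4 + 3: (11, 7, 5)", the four first steps listed
# for 11 5 4 3 in the next example.
```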
2308.10379 | 136 | 11 5 4 3 Let's consider the most promising first steps to reach 24: 1. 11 * 3: (33, 5, 4) 2. 5 + 4: (11, 9, 3) 3. 11 - 4: (7, 5, 3) 4. 4 + 3: (11, 7, 5) Let's consider the most promising second steps to reach 24: 1. (33, 5, 4) - 33 - 5: (28, 4) - 33 - 4: (29, 5) - 5 + 4: (33, 9) - 5 * 4: (20, 33) 2. (11, 9, 3) - 11 * 3: (33, 9) - 11 + 3: (14, 9) - 9 + 3: (12, 11) - 9 / 3: (11, 3)
# User:
# Assistant:
3. (7, 5, 3)
2308.10379 | 137 | - 7 - 3: (4, 5) - 7 + 5: (12, 3) - 5 + 3: (8, 7) - 5 * 3: (7, 6) 4. (11, 7, 5) - 11 + 7: (18, 5) - 11 - 5: (6, 7) - 7 + 5: (12, 11) - 7 - 5: (2, 11) Let's consider the most promising third steps to reach 24: 1. 11 * 3 = 33 - 33 - 5: (28, 4): 32, 24 = 28 - 4 -> found it! Backtracking the solution: Step 1: 11 * 3 = 33 Step 2: 33 - 5 = 28 Step 3: 28 - 4 = 24 Considering these steps: 24 = 28 - 4 = (33 - 5) - 4 = ((11 * 3) - 5) - 4 = 24. answer: ((11 * 3) - 5) - 4 = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User: 13 12 5
2308.10379 | 138 | User: 13 12 5 2 Assistant: Let's consider the most promising first steps to reach 24: 1. 12 / 2: (13, 6, 5) 2. 13 + 5: (18, 12, 2) 3. 12 - 5: (13, 7, 2) 4. 5 - 2: (13, 12, 3) Let's consider the most promising second steps to reach 24: 1. (13, 6, 5) - 13 + 6: (19, 5) - 13 - 5: (8, 6) - 6 * 5: (30, 13) - 6 - 5: (13, 1) 2. (18, 12, 2) - 18 * 2: (36, 12) - 12 / 2: (18, 6) - 18 - 2: (16, 12) - 12 * 2: (24, 18) 3. (13, 7, 2) - 13 + 7: (20, 2) - 13 - 2: (11, 7) - 7 * 2: (14, 13) - 7 - 2: (13, 5)
2308.10379 | 139 | 4. (13, 12, 3) - 13 * 3: (39, 12) - 13 - 3: (10, 12) - 12 * 3: (36, 13) - 12 / 3: (13, 4)
Let's consider the most promising third steps to reach 24: 1. 12 / 2 = 6 - 13 + 6: (19, 5): 24 = 19 + 5 -> found it! Backtracking the solution: Step 1: 12 / 2 = 6 Step 2:
13 + 6 = 19 Step 3: 19 + 5 = 24 Considering these steps: 24 = 19 + 5 = (13 + 6) + 5 = (13 + (12 / 2)) + 5 = 24. answer: (13 + (12 / 2)) + 5 = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2308.10379 | 140 | User: 9 8 2 1 Assistant: Let's consider the most promising first steps to reach 24: 1. 9 - 1: (8, 8, 2) 2. 8 + 2: (10, 9, 1) 3. 9 + 1: (10, 8, 2) 4. 2 + 1: (9, 8, 3) Let's consider the most promising second steps to reach 24: 1. (8, 8, 2) - 8 + 8: (16, 2) - 8 * 2: (16, 8) - 8 - 2: (6, 8) - 8 + 8: (16, 2) 2. (10, 9, 1) - 10 + 9: (19, 1) - 10 - 1: (9, 9) - 9 + 1: (10, 10) - 9 / 1: (8, 10) 3. (10, 8, 2) - 10 + 8: (18, 2) - 10 * 2: (8, 8) - 8 + 2: (10, 10) - 8 / 2: (6, 10) 4. (9, 8, 3) - 9 + 8: (17, 3) - 9 / 3: (6, 8) - 8 + 3: (11, 9) - 8 * 3: (24, 9) Let's consider the most promising third steps to reach
2308.10379 | 141 | 3: (6, 8) - 8 + 3: (11, 9) - 8 * 3: (24, 9) Let's consider the most promising third steps to reach 24: 1. 9 - 1 = 8 - 8 + 8: (16, 2): 18, 14, 32, 8 - 8 * 2: (16, 8): 24 = 16 + 8 -> found it! Backtracking the solution: Step 1: 9 - 1 = 8 Step 2: 8 * 2 = 16 Step 3: 16 + 8 = 24 Considering these steps: 24 = 16 + 8 = (8 * 2) + 8 = ((9 - 1) * 2) + 8 = 24. answer: ((9 - 1) * 2) + 8 = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
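Taken together, the long traces follow one recipe: expand the remaining numbers step by step, test the final pair against 24, and then backtrack the chosen operations into a single expression. The following compact depth-first sketch (our Python illustration, not the paper's code) mirrors that recipe; carrying an expression string next to each value makes the final backtracking trivial.

```python
from itertools import combinations

OPS = [("+", lambda a, b: a + b),
       ("*", lambda a, b: a * b),
       ("-", lambda a, b: a - b),
       ("/", lambda a, b: a / b if b != 0 else None)]

def solve(items):
    """items: list of (value, expression) pairs; return an expression worth 24, or None."""
    if len(items) == 1:
        value, expr = items[0]
        return expr if abs(value - 24) < 1e-9 else None
    for i, j in combinations(range(len(items)), 2):
        (a, ea), (b, eb) = items[i], items[j]
        rest = [it for k, it in enumerate(items) if k not in (i, j)]
        for sym, fn in OPS:
            for x, ex, y, ey in ((a, ea, b, eb), (b, eb, a, ea)):
                value = fn(x, y)
                if value is None or value < 0 or value != int(value):
                    continue  # the prompt forbids negative and fractional intermediates
                result = solve(rest + [(value, f"({ex} {sym} {ey})")])
                if result is not None:
                    return result
    return None

print(solve([(n, str(n)) for n in (9, 8, 2, 1)]))
# Prints one valid formula, e.g. an expression equivalent to ((9 - 1) * 2) + 8 = 24.
```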
2308.10379 | 142 | # User:
# Assistant:
# AoT (Short)
System:
Use numbers and basic arithmetic operations (+ - * /) to obtain 24. When considering the next steps, do not choose operations that will result in a negative or fractional number. In order to help with the calculations, the numbers in the parenthesis represent the numbers that are left after the operations and they are in descending order.
Another thing we do is when there are only two numbers left in the parenthesis, we check whether we can arrive at 24 only by using basic arithmetic operations (+ - * /). Some examples regarding this idea:
(21 2) no since 21 + 2 = 23, 21 - 2 = 19, 21 * 2 = 42, 21 / 2 = 10.5, none of which is equal
to 24.
(30 6) 30 - 6 = 24 yes (8 3) 8 * 3 = 24 yes (12 8) no (48 2) 48 / 2 = 24 yes
Most importantly, do not give up, all the numbers that will be given has indeed a
solution.
# User:
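The AoT (Short) listing above is a system message followed by alternating User and Assistant worked examples, with the new puzzle appended as the last user turn of a single query. The sketch below only assembles those turns; the message-dictionary format and the abbreviated placeholder strings are assumptions for illustration and are not specified in the text.

```python
# NOTE: SYSTEM_PROMPT and the example strings below are abbreviated placeholders;
# the full texts are the System / User / Assistant blocks shown above.
SYSTEM_PROMPT = "Use numbers and basic arithmetic operations (+ - * /) to obtain 24. ..."

FEW_SHOT = [
    ("14 8 8 2", "Trying a promising first operation: 1. 8 / 2: (14, 8, 4) ..."),
    ("9 5 5 5", "Trying a promising first operation: 1. 5 + 5: (10, 9, 5) ..."),
    # ... the remaining worked examples go here, verbatim
]

def build_messages(puzzle: str):
    """Assemble one single-query prompt: system text, worked examples, new puzzle."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for question, answer in FEW_SHOT:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": puzzle})
    return messages

print(len(build_messages("4 9 10 13")))  # system + 2 * len(FEW_SHOT) + 1 messages
```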
2308.10379 | 143 | User: 14 8 8 2 Assistant: Trying a promising first operation: 1. 8 / 2: (14, 8, 4) - 14 + 8: (22, 4) 26, 18, 88, fractional - 14 - 8: (6, 4) 10, 2, 24 = 6 * 4 -> found it! Backtracking the solution: Step 1: 8 / 2 = 4 Step 2: 14 - 8 = 6 Step 3: 6 * 4 = 24 Considering these steps: 24 = 6 * 4 = (14 - 8) * 4 = (14 - 8) * (8 / 2) = 24. answer: (14 - 8) * (8 / 2) = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User: 9 5 5 5 Assistant: Trying a promising first operation: 1. 5 + 5: (10, 9, 5) - 10 + 9: (19, 5) 24 = 19 + 5 -> found it! Backtracking the solution: Step 1: 5 + 5 = 10 Step
2308.10379 | 144 | 9, 5) - 10 + 9: (19, 5) 24 = 19 + 5 -> found it! Backtracking the solution: Step 1: 5 + 5 = 10 Step 2: 10 + 9 = 19 Step 3: 19 + 5 = 24 Considering these steps: 24 = 19 + 5 = (10 + 9) + 5 = ((5 + 5) + 9) + 5 = 24. answer: ((5 + 5) + 9) + 5 = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User: 8 6 4 4 Assistant: Trying a promising first operation: 1. 8 - 6: (4, 4, 2) - 4 + 4: (8, 2) 10, 6, 16, 4 - 4 - 4: (2, 0) 2, 2, 0, undefined - 4 * 4: (16, 2) 18, 14, 32, 8 - 4 / 4: (2, 1) 3, 1, 2, 2 - 4 + 2: (6, 4) 10, 2, 24 =
2308.10379 | 146 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User: 13 10 9 4 Assistant: Trying a promising first operation: 1. 10 - 4: (13, 9, 6) - 13 + 9: (22, 6) 28, 16, 132, fractional - 13 - 9: (6, 4) 10, 2, 24 = 6 * 4 -> found it! Backtracking the solution: Step 1: 10 - 4 = 6 Step 2: 13 - 9 = 4 Step 3: 6 * 4 = 24 Considering these steps: 24 = 6 * 4 = 6 * (13 - 9) = (10 - 4) * (13 - 9) = 24. answer: (10 - 4) * (13 - 9) = 24.
2308.10379 | 147 | 4 = 6 * (13 - 9) = (10 - 4) * (13 - 9) = 24. answer: (10 - 4) * (13 - 9) = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User: 8 8 5 4 Assistant: Trying a promising first operation: 1. 8 / 4: (8, 5, 2) - 8 + 5: (13, 2) 15, 11, 26, fractional - 8 - 5: (3, 2) 5, 1, 6, 1.5 - 8 * 5: (40, 2) 42, 38, 80, 20 - 8 / 5: fractional - 8 + 2: (10, 5) 15, 5, 50, 2 - 8 - 2: (6, 5) 11, 1, 30, fractional - 8 * 2: (16, 5) 21, 11, 80, fractional - 8 / 2: (5, 4) 9, 1, 20, fractional - 5 + 2:
2308.10379 | 148 | 2: (16, 5) 21, 11, 80, fractional - 8 / 2: (5, 4) 9, 1, 20, fractional - 5 + 2: (7, 8) 15, 1, 56, fractional - 5 - 2: (8, 3) 11, 5, 24 = 8 * 3 -> found it! Backtracking the solution: Step 1: 8 / 4 = 2 Step 2: 5 - 2 = 3 Step 3: 8 * 3 = 24 Considering these steps: 24 = 8 * 3 = 8 * (5 - 2) = 8 * (5 - (8 / 4)) = 24. answer: 8 * (5 - (8 / 4)) = 24. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User: 11 11 1 1 Assistant:
2308.10379 | 149 | Trying a promising first operation: 1. 11 + 1: (12, 11, 1) - 12 + 11: (23, 1) 24 = 23 + 1 -> found it! Backtracking the solution: Step 1: 11 + 1 = 12 Step 2: 12 + 11 = 23 Step 3: 23 + 1 Considering these steps: 24 = 23 + 1 = (12 + 11) + 1 = ((11 + 1) + 11) + 1 = 24. answer: ((11 + 1) + 11) + 1 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ User:
11 7 4 1
# Assistant:
Trying a promising first operation: 1. 4 + 1: (11 7 5) - 11 + 7: (18, 5) 23, 13, 90, fractional
2308.10379 | 150 | - 11 - 7: (5, 4) 9, 1, 20, fractional - 11 * 7: (77, 5) 82, 72, 385, fractional - 11 / 7: fractional - 11 + 5: (16, 7) 23, 9, 112, fractional - 11 - 5: (7, 6) 13, 1, 42, fractional - 11 * 5: (55, 7) 62, 48, 385, fractional - 11 / 5: fractional - 7 + 5: (12, 11) 23, 1, 132, fractional - 7 - 5: (11, 2) 13, 9, 22, fractional - 7 * 5: (35, 11) 46, 24 = 35 - 11 -> found it! Step 1: 4 + 1 = 5 Step 2: 7 * 5 = 35 Step 3: 35 - 11 = 24 Considering these steps: Backtracking the solution: 24 = 35 - 11 = (7 * 5) - 11 = (7 * (4 + 1)) - 11 = 24. answer: (7 * (4 + 1)) - 11 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

User: 11 5 4 3

Assistant:
Trying a promising first operation:
1. 11 * 3: (33, 5, 4)
- 33 + 5: (38, 4) 42, 34, 152, fractional
- 33 - 5: (28, 4) 32, 24 = 28 - 4 -> found it!
Backtracking the solution:
Step 1: 11 * 3 = 33
Step 2: 33 - 5 = 28
Step 3: 28 - 4 = 24
Considering these steps: 24 = 28 - 4 = (33 - 5) - 4 = ((11 * 3) - 5) - 4 = 24.
answer: ((11 * 3) - 5) - 4 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

User: 13 12 5 2

Assistant:
Trying a promising first operation:
1. 13 + 5: (18, 12, 2)
- 18 + 12: (30, 2) 32, 28, 60, fractional
- 18 - 12: (6, 2) 8, 4, 12, 3
- 18 * 12: (216, 2) 218, 214, 432, fractional
- 18 / 12: fractional
- 18 + 2: (20, 12) 32, 8, 240, fractional
- 18 - 2: (16, 12) 28, 4, 192, fractional
- 18 * 2: (36, 12) 48, 24 = 36 - 12 -> found it!
Backtracking the solution:
Step 1: 13 + 5 = 18
Step 2: 18 * 2 = 36
Step 3: 36 - 12 = 24
Considering these steps: 24 = 36 - 12 = (18 * 2) - 12 = ((13 + 5) * 2) - 12 = 24.
answer: ((13 + 5) * 2) - 12 = 24.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

User: 9 8 2 1

Assistant:
Trying a promising first operation:
1. 9 - 1: (8, 8, 2)
- 8 + 8: (16, 2) 18, 14, 32, 8
- 8 - 8: (2, 0) 2, 2, 0, undefined
- 8 * 8: (64, 2) 66, 62, 128, 32
- 8 / 8: (2, 1) 3, 1, 2, 2
- 8 + 2: (10, 8) 18, 2, 80, 4
- 8 - 2: (6, 8) 14, -2, 48, fractional
- 8 * 2: (16, 8) 24 = 16 + 8 -> found it!
Backtracking the solution:
Step 1: 9 - 1 = 8
Step 2: 8 * 2 = 16
Step 3: 16 + 8 = 24
Considering these steps: 24 = 16 + 8 = (8 * 2) + 8 = ((9 - 1) * 2) + 8 = 24.
answer: ((9 - 1) * 2) + 8 = 24.
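These in-context walkthroughs emulate a depth-first search over pairwise operations: combine two of the remaining numbers with +, -, *, or /, recurse on the reduced set, and backtrack when no branch reaches 24. The sketch below is a minimal illustration of that search under exact rational arithmetic; the function and variable names are ours, not code from the paper, and unlike the prompted LLM, which is nudged toward promising operations first, this sketch simply tries every pair exhaustively.

```python
from fractions import Fraction
from itertools import combinations

def solve24(nums, exprs=None):
    """Depth-first search for an arithmetic expression over `nums` that equals 24."""
    if exprs is None:
        nums = [Fraction(n) for n in nums]
        exprs = [str(n) for n in nums]
    if len(nums) == 1:
        return exprs[0] if nums[0] == 24 else None
    for i, j in combinations(range(len(nums)), 2):
        a, b = nums[i], nums[j]
        ea, eb = exprs[i], exprs[j]
        rest_n = [nums[k] for k in range(len(nums)) if k not in (i, j)]
        rest_e = [exprs[k] for k in range(len(nums)) if k not in (i, j)]
        # Candidate results of combining a and b (both subtraction/division orders).
        candidates = [
            (a + b, f"({ea} + {eb})"),
            (a - b, f"({ea} - {eb})"),
            (b - a, f"({eb} - {ea})"),
            (a * b, f"({ea} * {eb})"),
        ]
        if b != 0:
            candidates.append((a / b, f"({ea} / {eb})"))
        if a != 0:
            candidates.append((b / a, f"({eb} / {ea})"))
        for value, expr in candidates:
            solution = solve24(rest_n + [value], rest_e + [expr])
            if solution is not None:
                return solution  # a branch reached 24; propagate it up
        # otherwise: backtrack and try the next pair/operation
    return None

print(solve24([11, 7, 4, 1]))  # prints one valid expression built from 11, 7, 4, 1
```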
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 5 × 5 Mini Crosswords Prompts
In the following we give the exact prompts we used for the methods in the experiments section, together with a selection of the generations from the LLM. In the prompts, the keywords "System", "User", and "Assistant" are the roles in the OpenAI API in chat completion mode; the line breaks show the turns between the user and the assistant in the API.
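For concreteness, a transcript like the ones below is sent to the model as a list of role-tagged messages. The sketch here only illustrates that mapping, with truncated placeholder strings rather than the full prompts, and it stops short of the actual API call.

```python
# Each turn of the prompt becomes one role-tagged message; the worked AoT example is
# passed as a user/assistant pair, followed by the new instance to solve.
example_user = "h1. A lunar valley ... The words I already found are: h1. rille h3. tempt v2. ilebo v5. enter"
example_assistant = "There are two horizontal and two vertical words. ..."  # full search trace goes here
new_query = "h1. One who saws ... The words I already found are: h1. sawer h3. rater v1. surge v4. edema"

messages = [
    {"role": "user", "content": example_user},
    {"role": "assistant", "content": example_assistant},
    {"role": "user", "content": new_query},
]
# `messages` is then submitted to the chat-completion endpoint in a single query.
```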
# AoT
# User:
h1. A lunar valley
h2. A fatty oil
h3. To entice
h4. To lower; to reduce
h5. A solitary person
v1. According to the roster
v2. Another name for Port-Francqui
v3. An illicit lover; a European lake
v4. To lisp
v5. To come in
The words I already found are: h1. rille h3. tempt v2. ilebo v5. enter
# Assistant:
There are two horizontal and two vertical words. Since they are the amount, let's
consider the next unfound horizontal word, which is h2. Since we are considering the
second horizontal word, we need to look at the second letters of the vertical words we
found:
- What is the second letter of ilebo: e
- What is the second letter of enter: n
The word we are looking fits the clue "A fatty oil" and needs to have the letters e and
n in its (v2, two) second and (v5, five) fifth letters: _ e _ _ n. The candidates are:
oleic, olein, olive.
- oleic (o l e i c, _ e _ _ n) doesn't fit
- olein (o l e i n, _ e _ _ n) fits
- olive (o l i v e, _ e _ _ n) doesn't fit
We add the word olein for h2. The current words are:
h1. rille
h2. olein
h3. tempt
v2. ilebo
v5. enter
Now, let's consider the next unfound vertical word, which is v1. Since we are considering
the first vertical word, we need to look at the first letters of the horizontal words we
found:
- What is the first letter of rille: r
- What is the first letter of olein: o
- What is the first letter of tempt: t
The word we are looking fits the clue "According to the roster" and needs to have the
letters r, o, and t in its (h1, one) first, (h2, two) second, and (h3, three) third
letters: r o t _ _. The candidates are: none
Now, let's consider the next unfound vertical word, which is v3. Since we are considering
the third vertical word, we need to look at the third letters of the horizontal words we
found:
- What is the third letter of rille: l
- What is the third letter of olein: e
- What is the third letter of tempt: m
The word we are looking fits the clue "An illicit lover; a European lake" and needs to
have the letters l, e, and m in its (h1, one) first, (h2, two) second, and (h3, three)
third letters: l e m _ _. The candidates are: leman.
- leman (l e m a n, l e m _ _) fits
We add the word leman for v3. The current words are:
h1. rille
h2. olein
h3. tempt
v2. ilebo
v3. leman
v5. enter
Now, let's consider the next unfound horizontal word, which is h4. Since we are
considering the fourth horizontal word, we need to look at the fourth letters of the
vertical words we found:
- What is the fourth letter of ilebo: b
- What is the fourth letter of leman: a
- What is the fourth letter of enter: e
The word we are looking fits the clue "To lower; to reduce" and needs to have the letters
b, a, and e in its (v2, two) second, (v3, three) third, and (v5, five) fifth letters:
_ b a _ e. The candidates are: abase, abate.
- abase (a b a s e, _ b a _ e) fits
- abate (a b a t e, _ b a _ e) fits
Since both fit, let's consider another word. Now, let's consider the last unfound
vertical word, which is v4. Since we are considering the fourth vertical word, we need
to look at the fourth letters of the horizontal words we found:
- What is the fourth letter of rille: l
- What is the fourth letter of olein: i
- What is the fourth letter of tempt: p
The word we are looking fits the clue "To lisp" and needs to have the letters l, i, p,
and s in its (h1, one) first, (h2, two) second, (h3, three) third letters: l i p _ _.
The candidates are: lisps.
- lisps (l i s p s, l i p _ _) doesn't fit
We have considered all of the words for the horizontal words; we have two options, abase
and abate, so let's choose one of them.
answer:
h1. rille
h2. olein
h3. tempt
h4. abase
v2. ilebo
v3. leman
v5. enter
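Each step of the walkthrough above applies the same mechanical check: collect the crossing letters of the already-placed words into a positional pattern such as l e m _ _, then keep only candidates that agree with every fixed letter. Below is a minimal sketch of that check, with helper names of our own choosing rather than anything from the paper; proposing the candidate words themselves is left to the LLM.

```python
def pattern_from_crossing(found, index, direction):
    """Letter constraints for the `index`-th (1-based) word along `direction` ('h' or 'v').

    `found` maps labels like 'h1' or 'v2' to already-placed 5-letter words. For a
    vertical word v_i, the i-th letter of each found horizontal word h_j fixes
    position j of v_i (and symmetrically for horizontal words).
    """
    pattern = ["_"] * 5
    crossing = "v" if direction == "h" else "h"
    for j in range(1, 6):
        word = found.get(f"{crossing}{j}")
        if word is not None:
            pattern[j - 1] = word[index - 1]
    return "".join(pattern)

def fits(candidate, pattern):
    """True if `candidate` agrees with every fixed (non-underscore) letter of `pattern`."""
    return len(candidate) == len(pattern) and all(
        p == "_" or p == c for p, c in zip(pattern, candidate)
    )

# The v3 step of the walkthrough: third letters of rille, olein, tempt give "lem__".
found = {"h1": "rille", "h2": "olein", "h3": "tempt"}
pattern = pattern_from_crossing(found, index=3, direction="v")
print(pattern)                 # lem__
print(fits("leman", pattern))  # True
```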
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# User:

h1. One who saws
h2. A fungus genus
h3. An assessor
h4. Pasture land
h5. Receiving by the ear
v1. To swell; to increase
v2. The Brazilian macaw; an Australian bird
v3. A Timorese island
v4. Excessive fluid accumulation
v5. Dewy; roscid
The words I already found are: h1. sawer h3. rater v1. surge v4. edema
# Assistant:
There are two horizontal and two vertical words. Since they are the amount, let's
consider the next unfound horizontal word, which is h2. Since we are considering the
second horizontal word, we need to look at the second letters of the vertical words we
found:
- What is the second letter of surge: u
- What is the second letter of edema: d
The word we are looking fits the clue "A fungus genus" and needs to have the letters u
and d in its (v1, one) first and (v4, four) fourth letters: u _ _ d _. The candidates
are: uredo.
- uredo (u r e d o, u _ _ d _) fits
We add the word uredo for h2. The current words are:
h1. sawer
h2. uredo
h3. rater
v1. surge
v4. edema
Now, let's consider the next unfound vertical word, which is v2. Since we are