id: stringlengths 12–15
title: stringlengths 8–162
content: stringlengths 1–17.6k
prechunk_id: stringlengths 0–15
postchunk_id: stringlengths 0–15
arxiv_id: stringlengths 10–10
references: listlengths 1–1
2308.00675#31
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
[10] Aydar Bulatov, Yuri Kuratov, and Mikhail S Burtsev. Scaling transformer to 1m tokens and beyond with rmt. arXiv preprint arXiv:2304.11062, 2023. [11] Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint arXiv:2305.17126, 2023. [12] Jiuhai Chen, Lichang Chen, Chen Zhu, and Tianyi Zhou. How many demonstrations do you need for in-context learning? 2023. [13] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen.
2308.00675#30
2308.00675#32
2308.00675
[ "2302.13971" ]
2308.00675#32
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022. [14] Ho Kei Cheng and Alexander G Schwing. Xmem: Long-term video object segmentation with an atkinson-shiffrin memory model. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVIII, pages 640–658. Springer, 2022. [15] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. [16] Dídac Surís, Sachit Menon, and Carl Vondrick.
2308.00675#31
2308.00675#33
2308.00675
[ "2302.13971" ]
2308.00675#33
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Vipergpt: Visual inference via python execution for reasoning. arXiv preprint arXiv:2303.08128, 2023. [17] Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. [18] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022. [19] Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. arXiv preprint arXiv:2211.11559, 2022.
2308.00675#32
2308.00675#34
2308.00675
[ "2302.13971" ]
2308.00675#34
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
[20] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In International conference on machine learning, pages 3929–3938. PMLR, 2020. [21] Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147. PMLR, 2022. [22] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022. [23] IDEA-Research. Grounded-segment-anything. https://github.com/IDEA-Research/Grounded-Segment-Anything, 2023. Accessed: 05/15/2023. [24] Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398, 2023. [25] Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. Mapping language to code in programmatic context. arXiv preprint arXiv:1808.09588, 2018. [26] Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896–1907, Online, November 2020. Association for Computational Linguistics. [27] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict:
2308.00675#33
2308.00675#35
2308.00675
[ "2302.13971" ]
2308.00675#35
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Composing retrieval and language models for knowledge-intensive nlp. arXiv preprint arXiv:2212.14024, 2022. [28] Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. arXiv preprint arXiv:2210.02406, 2022. [29] Wonjae Kim, Bokyung Son, and Ildoo Kim.
2308.00675#34
2308.00675#36
2308.00675
[ "2302.13971" ]
2308.00675#36
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, pages 5583–5594. PMLR, 2021. [30] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. [31] Mojtaba Komeili, Kurt Shuster, and Jason Weston. Internet-augmented dialogue generation. arXiv preprint arXiv:2107.07566, 2021. [32] Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115, 2022. [33] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020. [34] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019. [35] Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. What makes good in-context examples for gpt-3? arXiv preprint arXiv:2101.06804, 2021. [36] Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou.
2308.00675#35
2308.00675#37
2308.00675
[ "2302.13971" ]
2308.00675#37
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Tapex: Table pre-training via learning a neural sql executor. arXiv preprint arXiv:2107.07653, 2021. [37] Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M Dai. Mind's eye: Grounded language model reasoning through simulation. arXiv preprint arXiv:2210.05359, 2022. [38] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023.
2308.00675#36
2308.00675#38
2308.00675
[ "2302.13971" ]
2308.00675#38
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
[39] Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS), 2022. [40] Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao.
2308.00675#37
2308.00675#39
2308.00675
[ "2302.13971" ]
2308.00675#39
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842, 2023. [41] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In International Conference on Learning Representations (ICLR), 2023. [42] Yujie Lu, Pan Lu, Zhiyu Chen, Wanrong Zhu, Xin Eric Wang, and William Yang Wang. Multimodal procedural planning via dual text-image prompting. 2023. [43] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. [44] Andrew Y Ng, Stuart Russell, et al. Algorithms for inverse reinforcement learning. In ICML, volume 1, page 2, 2000. [45] OpenAI. Gpt-4 technical report. 2023.
2308.00675#38
2308.00675#40
2308.00675
[ "2302.13971" ]
2308.00675#40
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
[46] Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Tulio Ribeiro. Art: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014, 2023. [47] Aaron Parisi, Yao Zhao, and Noah Fiedel. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255, 2022. [48] Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez.
2308.00675#39
2308.00675#41
2308.00675
[ "2302.13971" ]
2308.00675#41
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023. [49] Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances in neural information processing systems, 1, 1988. [50] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022. [51] Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023. [52] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. Toolllm: Facilitating large language models to master 16000+ real-world apis, 2023.
2308.00675#40
2308.00675#42
2308.00675
[ "2302.13971" ]
2308.00675#42
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
[53] Maxim Rabinovich, Mitchell Stern, and Dan Klein. Abstract syntax networks for code generation and semantic parsing. arXiv preprint arXiv:1704.07535, 2017. [54] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022. [55] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell.
2308.00675#41
2308.00675#43
2308.00675
[ "2302.13971" ]
2308.00675#43
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627–635. JMLR Workshop and Conference Proceedings, 2011. [56] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom.
2308.00675#42
2308.00675#44
2308.00675
[ "2302.13971" ]
2308.00675#44
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. [57] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023. [58] Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188, 2022. [59] Michael Sipser. Introduction to the theory of computation. ACM Sigact News, 27(1):27–29, 1996. [60] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi.
2308.00675#43
2308.00675#45
2308.00675
[ "2302.13971" ]
2308.00675#45
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6418–6428, Florence, Italy, July 2019. Association for Computational Linguistics. [61] Simeng Sun, Katherine Thai, and Mohit Iyyer. Chapterbreak: A challenge dataset for long-range language models. arXiv preprint arXiv:2204.10878, 2022. [62] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022.
2308.00675#44
2308.00675#46
2308.00675
[ "2302.13971" ]
2308.00675#46
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
[63] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [64] Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091, 2023. [65] Xingyao Wang, Sha Li, and Heng Ji.
2308.00675#45
2308.00675#47
2308.00675
[ "2302.13971" ]
2308.00675#47
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Code4struct: Code generation for few-shot structured prediction from natural language. arXiv preprint arXiv:2210.12810, 2022. [66] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022.
2308.00675#46
2308.00675#48
2308.00675
[ "2302.13971" ]
2308.00675#48
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
[67] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022. [68] Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, et al. Larger language models do in-context learning differently. arXiv preprint arXiv:2303.03846, 2023. [69] Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan.
2308.00675#47
2308.00675#49
2308.00675
[ "2302.13971" ]
2308.00675#49
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023. [70] Jinyu Yang, Mingqi Gao, Zhe Li, Shang Gao, Fangjing Wang, and Feng Zheng. Track anything: Segment anything meets videos, 2023. [71] Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, and Ying Shan. Gpt4tools: Teaching large language model to use tools via self-instruction. arXiv preprint arXiv:2305.18752, 2023. [72] Sherry Yang, Ofir Nachum, Yilun Du, Jason Wei, Pieter Abbeel, and Dale Schuurmans. Foundation models for decision making: Problems, methods, and opportunities. arXiv preprint arXiv:2303.04129, 2023. [73] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023. [74] Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan.
2308.00675#48
2308.00675#50
2308.00675
[ "2302.13971" ]
2308.00675#50
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Webshop: Towards scalable real-world web interaction with grounded language agents. arXiv preprint arXiv:2207.01206, 2022. [75] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022. [76] Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code generation. arXiv preprint arXiv:1704.01696, 2017. [77] Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. Generate rather than retrieve: Large language models are strong context generators. In The Eleventh International Conference on Learning Representations, 2023. [78] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao.
2308.00675#49
2308.00675#51
2308.00675
[ "2302.13971" ]
2308.00675#51
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Llama-adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199, 2023. [79] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
2308.00675#50
2308.00675#52
2308.00675
[ "2302.13971" ]
2308.00675#52
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
[80] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023. [81] Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697–
2308.00675#51
2308.00675#53
2308.00675
[ "2302.13971" ]
2308.00675#53
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
12706. PMLR, 2021. [82] Victor Zhong, Tim Rocktäschel, and Edward Grefenstette. Rtfm: Generalising to novel environment dynamics via reading. arXiv preprint arXiv:1910.08210, 2019. [83] Shuyan Zhou, Uri Alon, Frank F. Xu, Zhengbao Jiang, and Graham Neubig. Docprompting: Generating code by retrieving the docs.
2308.00675#52
2308.00675#54
2308.00675
[ "2302.13971" ]
2308.00675#54
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
In The Eleventh International Conference on Learning Representations, 2023.

# A Broader impacts and limitations

This work studies the importance of tool documentation in equipping LLMs with the ability to compose usages of a variety of tools to accomplish complex tasks. However, as discussed in [51], it is imperative to contemplate what tools should be made available to LLMs as well as how one should interpret and rely on the results obtained from the models. We envision tool documentation as a channel to guide LLMs in more safely using the tools, aligning with the original intended use of the tools.
2308.00675#53
2308.00675#55
2308.00675
[ "2302.13971" ]
2308.00675#55
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
# B Implementation details

In this section, we provide further implementation details on each task. We conduct all our experiments on Debian GNU/Linux 10 machines with 40GB A100 GPUs.

# B.1 ScienceQA

On ScienceQA [39], we closely follow the original setup¹ used in Chameleon [40], including the tool docs and few-shot demos (when used). We find, however, that the "Image Captioner" module used in the original work often provides less accurate captions on given images. In the documentation, we thus add a description of this observation for the "Image Captioner" module, as shown in Figure 8. The modules are defined as follows:

- Image_Captioner: This module generates a potentially inaccurate caption for the given image. Avoid using the module unless necessary. "Image_Captioner" can be considered when the question involves the semantic understanding of the image.
- Text_Detector: This module detects the text in the given image. Normally, we consider using "Text_Detector" when the question involves the unfolding of the text in the image, e.g., diagram, chart, table, map, etc., and the "has_image" field in the metadata is True.
- Knowledge_Retrieval: ...
2308.00675#54
2308.00675#56
2308.00675
[ "2302.13971" ]
2308.00675#56
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Figure 8: Documentation used in the ScienceQA dataset. We used the original tool docs in Chameleon [40] and added a description for "Image Captioner" noting that the generated captions may be inaccurate.

# B.2 TabMWP

On TabMWP [41], we strictly follow the original setup used in Chameleon [40]. We refer the readers to [40] and their open-sourced implementations for further details.

# B.3 NLVRv2

On NLVRv2, we follow the setup used in [19]. However, as tool docs are not used in [19], we create our own docs for the tools used. Figure 9 shows the tool docs we use for several available tools used in VisProg [19].

¹ https://github.com/lupantech/chameleon-llm
2308.00675#55
2308.00675#57
2308.00675
[ "2302.13971" ]
2308.00675#57
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Function: VQA
Description: The VQA function calls the BLIP model for visual question answering. The model consists of a vision encoder, a text encoder, and a text decoder. The vision encoder will encode the input image, the text encoder will encode the input question together with the encoding of the image, and the text decoder will output the answer to the question.
Syntax: VQA(image: IMAGE, question: TEXT) -> TEXT or INTEGER or FLOAT or BOOL
Parameters: image: An IMAGE type input representing the image to be analyzed. question: A TEXT type input representing the question to be answered about the image.
Returns: The function returns a TEXT, INTEGER, FLOAT or BOOL value, representing the answer to the input question about the image. The return variable would be INTEGER, FLOAT, or BOOL type when possible, otherwise it would be TEXT.
Use case: Use VQA when you want to answer a question related to an image.
Examples:
ANSWER1 = VQA(image=IMAGE1, question='What color is the car?')
ANSWER2 = VQA(image=IMAGE2, question='Does the image show a dog?')
ANSWER3 = VQA(image=IMAGE3, question='How many cups in the image?')

Function: EVAL
Description: The EVAL function calls the eval() function in Python. The expr argument is parsed and evaluated as a Python expression. Variables can be expressed in {}. When evaluating expressions involving the results of other functions, such as VQA, always use the EVAL function. The EVAL function also accepts the xor operator as the exclusive-or operator, which returns true only when exactly one argument is true.
Syntax: EVAL(expr: TEXT) -> TEXT or INTEGER or FLOAT or BOOL
Parameters: expr: A TEXT type input representing a Python expression to be evaluated. The expression can include normal operators in Python, as well as the additional xor operator for exclusive-or operations.
Returns: The function returns a TEXT, INTEGER, FLOAT or BOOL value, representing the result of the evaluated Python expression.
Use case: Use EVAL when you want to evaluate a Python expression, especially when the expression involves results from other functions.
Examples:
ANSWER0 = EVAL(expr="{X} + 4 * 2 > 1 == False")
ANSWER1 = EVAL(expr="{A} and {B} xor {C} or not {D}")
Important note: When evaluating expressions involving the results of other functions, always use the EVAL function.
2308.00675#56
2308.00675#58
2308.00675
[ "2302.13971" ]
2308.00675#58
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
For example:
# Correct usage
ANSWER = EVAL(expr="{ANS1}=='False' and {ANS2}=='False'")
FINAL_RESULT = RESULT(var=ANSWER)
# Incorrect usage
FINAL_RESULT = RESULT(var=AN...)

Figure 9: Example documentation used for tools in VisProg [19].

[Figure 10 reproduces screenshots of the GCP CLI reference pages for "gcloud compute scp" (copy files to and from Google Compute Engine virtual machines) and "gcloud compute ssh" (SSH into a virtual machine instance), each with its NAME, SYNOPSIS, and DESCRIPTION sections.]

Figure 10: The documentation examples from GCP CLI.
2308.00675#57
2308.00675#59
2308.00675
[ "2302.13971" ]
2308.00675#59
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
We crawl the website, remove the HTML tags, and apply the renaming procedure to produce the documentation of the created LLM-Cloud CLI.

# B.4 LLM-Cloud CLI

More examples. In Table 2, we show more examples of the created LLM-Cloud CLI dataset, based on GCP CLI.

Creating tool documentation. On the LLM-Cloud CLI dataset, we create tool documentation using the widely-used BeautifulSoup² library to scrape the GCP CLI documentation. We removed HTML tags and implemented the renaming procedure for the LLM-Cloud CLI documentation. We note that we purposely do not eliminate unrelated content such as terms and hyperlinks; this is to prevent excessive engineering of the documentation, so that we can better assess the robustness of LLMs' documentation-reading ability. An example documentation from GCP before our renaming procedure is shown in Figure 10.

Documentation retrieval details. Given the extensive number of command-line tools in our experiments (200 in total), the complete documentation cannot fit within a single prompt. Consequently, for each query, we employ a simple TF-IDF search to retrieve the top 10 relevant documentations. We then truncate the length to a maximum of 600 words. We note that the actual token count depends on the tokenizer used by each LLM and is typically more than 600.
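To make the retrieval step concrete, the sketch below scrapes a single documentation page with BeautifulSoup and then selects the top-10 documents for a query with TF-IDF, truncating each to 600 words. The helper names and the use of scikit-learn are illustrative assumptions rather than the paper's released code; only the overall procedure (strip HTML, simple TF-IDF search, 600-word cap) follows the description above.

```python
# Hedged sketch of the scraping and TF-IDF retrieval described above; the function
# names and the scikit-learn implementation are assumptions, not the authors' code.
import requests
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def scrape_doc(url: str) -> str:
    """Download one CLI reference page and strip HTML tags, keeping all visible text."""
    html = requests.get(url, timeout=30).text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ")

def retrieve_docs(query: str, docs: list[str], k: int = 10, max_words: int = 600) -> list[str]:
    """Return the k docs most similar to the query under TF-IDF cosine similarity,
    each truncated to max_words words before being placed in the prompt."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(docs)
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top_indices = scores.argsort()[::-1][:k]
    return [" ".join(docs[i].split()[:max_words]) for i in top_indices]
```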
2308.00675#58
2308.00675#60
2308.00675
[ "2302.13971" ]
2308.00675#60
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
² https://pypi.org/project/beautifulsoup4/

Table 2: More examples of the created LLM-Cloud CLI dataset.

Question: Show me how to deploy ocr-xer container and invoke it with a schedule every 2 hours on a project "test_proj" in sdk command lines. The ocr-xer container is located at "us-docker.pkg.dev/gcr-cleaner/ocr-xer/ocr-xer".
Commands in GCP:
- gcloud config set project test_proj
- gcloud run deploy ocr-xer --image=us-docker.pkg.dev/gcr-cleaner/ocr-xer/ocr-xer
- gcloud scheduler jobs create http NAME --schedule="0 */2 * * *"
Commands after renaming (Final Answer):
- llmcloud config set project test_proj
- llmcloud run deploy ocr-xer --image=us-docker.pkg.dev/gcr-cleaner/ocr-xer/ocr-xer
- llmcloud scheduler jobs make http NAME --schedule="0 */2 * * *"

Question: How to deploy a machine learning model model.pt saved in my local to cloud via sdk command line?
Commands in GCP:
- gsutil cp model.pt LOC/model.pt
- gcloud ai-platform versions create VERSION --model MODEL --origin gs://LOC/model.pt
Commands after renaming (Final Answer):
- llmutil cp model.pt LOC/model.pt
- llmcloud ai-platform versions create VERSION --model MODEL --origin gs://LOC/model.pt

Question: How to get transcript of a video test.mp4 at local via the cloud SDK?
Commands in GCP:
- ffmpeg -i test.mp4 -ac 2 -f wav output.wav
- gsutil cp test.wav LOC/test.wav
- gcloud ml speech recognize-long-running --uri LOC/test.wav
Commands after renaming (Final Answer):
- ffmpeg -i test.mp4 -ac 2 -f wav output.wav
- llmutil cp test.wav LOC/test.wav
- llmcloud ml speech recognize-long-running --uri LOC/test.wav

Question: How to create a composer environment with a private ip network?
Commands in GCP:
- gcloud composer environments create my_env
- gcloud compute networks subnets update default --enable-private-ip-google-access
Commands after renaming (Final Answer):
- llmcloud composer environments make my_env
2308.00675#59
2308.00675#61
2308.00675
[ "2302.13971" ]
2308.00675#61
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
- llmcloud compute networks subnets update default --enable-private-ip-google-access

Question: How to create a service account [email protected] with the name "AutoML" and the "BigQuery Data Editor" and "AutoML Recommendations Service Account" permissions?
Commands in GCP:
- gcloud iam service-accounts [email protected] --display-name AutoML
- gcloud projects add-iam-policy-binding PROJ_ID --member="[email protected]" --role "roles/bigquery.dataEditor"
- gcloud projects add-iam-policy-binding PROJ_ID --member --role
Commands after renaming (Final Answer):
- llmcloud iam service-accounts [email protected] --display-name AutoML
- llmcloud projects add-iam-policy-binding PROJ_ID --member="[email protected]" --role "roles/bigquery.dataEditor"
- llmcloud projects add-iam-policy-binding PROJ_ID --member --role
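The renamed commands in Table 2 suggest a simple word-level substitution (gcloud becomes llmcloud, gsutil becomes llmutil, and create becomes make). A minimal sketch of such a renaming step is given below; the exact substitution rules are not listed in this appendix, so the mapping here is inferred from the table and should be treated as an assumption.

```python
# Renaming sketch inferred from Table 2; the rule list is an assumption, not the
# authors' exact implementation.
RENAME_RULES = {"gcloud": "llmcloud", "gsutil": "llmutil", "create": "make"}

def rename_command(cmd: str) -> str:
    """Apply the word-level substitutions to a single command line."""
    return " ".join(RENAME_RULES.get(tok, tok) for tok in cmd.split())

# Reproduces the last command of the first row of Table 2.
print(rename_command('gcloud scheduler jobs create http NAME --schedule="0 */2 * * *"'))
# -> llmcloud scheduler jobs make http NAME --schedule="0 */2 * * *"
```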
2308.00675#60
2308.00675#62
2308.00675
[ "2302.13971" ]
2308.00675#62
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
# B.5 Image editing and video tracking

As discussed in Section 4.3, by providing tool documentation, we can easily add new tools to enable LLMs to solve novel tasks such as image editing and video tracking. Here, we leverage the recent advancements in vision models and expand the tool set used in VisProg [19] with three new tools: GroundingDINO [38], Segment Anything (SAM) [30], and XMem [14]. We provide their corresponding documentation in Figure 11.

Function: BETTERLOC
Description: The BETTERLOC function calls the GroundingDINO model to perform object localization. GroundingDINO is a zero-shot text-conditioned object detection model. It returns all bounding boxes of the queried object. To make multiple queries at one time, separate different object names with "
2308.00675#61
2308.00675#63
2308.00675
[ "2302.13971" ]
2308.00675#63
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
".
Syntax: BETTERLOC(image: IMAGE, object: TEXT) -> BOX
Parameters: image: An IMAGE type input representing the image to be analyzed. object: A TEXT type input representing the object to be localized in the image.
Returns: The function returns a BOX value, representing the bounding boxes of the queried object.
Use case: Use BETTERLOC when you want to locate an object in an image and retrieve its bounding box(es).
Example: BOX0 = BETTERLOC(image=IMAGE, object='cat')

Function: BETTERSEG
Description: The BETTERSEG function calls the Segment Anything Model (SAM) for image segmentation. It returns all objects detected in the image as a list of OBJECT instances. Each OBJECT instance contains its bounding box and mask.
Syntax: BETTERSEG(image: IMAGE, box: BOX) -> LIST[OBJECT]
Parameters: image: An IMAGE type input representing the image to be analyzed. box: The bounding boxes where we want to segment.
Returns: The function returns a LIST of OBJECT instances, each representing a detected object and including its bounding box and mask.
2308.00675#62
2308.00675#64
2308.00675
[ "2302.13971" ]
2308.00675#64
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Use case: Use BETTERSEG when you want to segment an object in a bounding box. Then the returned objects can be used by other functions such as REPLACE, COLORPOP, BGBLUR.
Example:
BOX0 = BETTERLOC(image=IMAGE, object='fish')
OBJ0 = BETTERSEG(image=IMAGE, box=BOX0)

Function: TRACK
Description: The TRACK function calls the XMem model for video object tracking. It takes an OBJECT instance from the first frame of the video as input, then returns all frames where the object is highlighted with a mask.
Syntax: TRACK(video: LIST[IMAGE], object: LIST[OBJECT]) -> LIST[IMAGE]
Parameters: video: A list of IMAGE type input representing the video to be analyzed. object: The bounding boxes and masks of the objects which we want to track in the first frame of the video.
Returns: The function returns a list of a list of OBJECT instances representing the bounding boxes and masks of tracked objects in each frame.
2308.00675#63
2308.00675#65
2308.00675
[ "2302.13971" ]
2308.00675#65
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Use case: Use TRACK when you want to track an object in a video. Then the returned list of objects can be used by other functions.
Example: VIDEO0 = TRACK(video=VIDEO, object=OBJ)
Important note: A video is a list of images. Use "IMAGE=EVAL(expr="{VIDEO}[i]")" in a separate line to get the i-th frame of the video.

Figure 11: Documentation of the new tools introduced in VisProg. BETTERLOC, BETTERSEG, and TRACK call GroundingDINO, Segment Anything, and XMem, respectively.
2308.00675#64
2308.00675#66
2308.00675
[ "2302.13971" ]
2308.00675#66
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
# C Experimental results

In this section, we show the experimental results on each task with comparisons to more baselines.

ScienceQA. In Table 3, we compare zero-shot prompting with tool documentation to other baseline methods. For performance reference, we include the following baseline methods that are finetuned on the ScienceQA training set: ViLT [29], VisualBERT [34], UnifiedQA CoT [39], MM-CoT [80], and LLaMA-Adapter [78]. We report the results obtained from [40] for the finetuned methods. For a fair comparison, we focus on the zero/few-shot settings and thus include Chain-of-Thought (CoT) [67] and Chameleon [40] as few-shot baselines. We see that with tool docs, we not only achieve better performance than the few-shot methods without any demos, but we also match or outperform several models specifically finetuned on the dataset.
2308.00675#65
2308.00675#67
2308.00675
[ "2302.13971" ]
2308.00675#67
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Table 3: Comparing zero-shot prompting with tool docs to existing baseline methods on ScienceQA. We see that zero-shot prompting with tool docs performs competitively, outperforming the two few-shot baselines and several finetuned models.

Benchmark: ScienceQA
- Finetuned methods: ViLT 61.14, VisualBERT 61.87, UnifiedQA CoT 74.11, MM-CoT 84.91, LLaMA-Adapter 85.19
- Few-shot methods: CoT 78.54, Chameleon 79.20
- Zero-shot methods: 0-shot with docs 79.91
2308.00675#66
2308.00675#68
2308.00675
[ "2302.13971" ]
2308.00675#68
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
TabMWP. Similarly, in Table 4, we compare zero-shot prompting with tool docs to various finetuned models and few-shot baselines, including UnifiedQA [26], TAPEX [36], Chain-of-Thought (CoT) [67], Program-of-Thought (PoT) [13], and Chameleon [40]. We report the results obtained from [40] for UnifiedQA, TAPEX, and CoT. We see that with tool docs, zero-shot prompting significantly outperforms the finetuned models and the baseline few-shot methods, CoT and PoT. When compared to Chameleon, which utilizes 16 few-shot tool-usage demos, tool docs enable the model to perform comparably without relying on any demos.
2308.00675#67
2308.00675#69
2308.00675
[ "2302.13971" ]
2308.00675#69
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Table 4: Comparing zero-shot prompting with tool docs to existing baseline methods on TabMWP. We see that with tool docs, even zero-shot prompting without any tool-usage demos achieves better performance than finetuned models and the few-shot CoT and PoT baselines. It also performs comparably to Chameleon, which employs 16-shot tool-usage demos.

Benchmark: TabMWP
- Finetuned methods: UnifiedQA 57.35, TAPEX 58.52
- Few-shot methods: CoT 82.03, PoT 89.28, Chameleon 93.88
- Zero-shot methods: 0-shot with docs 92.69

NLVRv2. In Table 5, we compare zero-shot prompting with tool docs to a finetuned model on NLVRv2 and various few-shot baselines. Specifically, we consider ViLT [29] as the finetuned baseline and VisProg [19] with varying numbers of tool-usage demos as the few-shot baselines. We report the result obtained from [19] for ViLT. Since VisProg does not utilize tool docs, we see that its performance is very sensitive to the number of demos used. In addition, we also observe large performance variances when we randomly select different demos for prompting, e.g., the standard deviation for 2-shot prompting reaches 16.1 percentage points. This indicates that the few-shot demos may require careful curation for the model to achieve good performance. On the other hand, with tool docs, zero-shot prompting can already achieve decent performance compared to only using few-shot demos.
2308.00675#68
2308.00675#70
2308.00675
[ "2302.13971" ]
2308.00675#70
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Table 5: Comparing zero-shot prompting with tool docs to existing baseline methods on NLVRv2.

Benchmark: NLVRv2
- ViLT (finetuned): 76.30
- VisProg (0-shot): 0
- VisProg (2-shot): 43.1 ± 16.1
- VisProg (4-shot): 66.5 ± 1.4
- VisProg (12-shot): 69.1 ± 0.1
- 0-shot with docs: 63.4
2308.00675#69
2308.00675#71
2308.00675
[ "2302.13971" ]
2308.00675#71
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
LLM-Cloud CLI. In Table 6, we present the results on LLM-Cloud CLI with different underlying LLM planners. For both text-davinci-002 and gpt-3.5-turbo, when there is a large number of tools, we see that documentation is much more important than few-shot demonstrations: zero-shot with docs achieves significantly better performance than few-shot without docs. Additionally, when provided with docs, the LLMs are able to figure out how to use the tools without the need for demonstrations.

Table 6: Results on the LLM-Cloud CLI.

| LLM | Number of demos | Documentation | F1 |
| text-davinci-002 | 0 | No | 0.02 |
| text-davinci-002 | 5 | No | 0.02 ± 0.02 (0.05) |
| text-davinci-002 | 10 | No | 0.05 ± 0.02 (0.11) |
| text-davinci-002 | 15 | No | 0.05 ± 0.05 (0.1) |
| text-davinci-002 | 5 | Yes | 0.35 ± 0.02 (0.37) |
| text-davinci-002 | 10 | Yes | 0.35 ± 0.02 (0.37) |
| text-davinci-002 | 15 | Yes | 0.35 ± 0.01 (0.37) |
| text-davinci-002 | 0 | Yes | 0.37 |
| gpt-3.5-turbo | 0 | No | 0.13 |
| gpt-3.5-turbo | 5 | No | 0.18 ± 0.06 (0.21) |
| gpt-3.5-turbo | 10 | No | 0.19 ± 0.06 (0.23) |
| gpt-3.5-turbo | 15 | No | 0.18 ± 0.06 (0.22) |
| gpt-3.5-turbo | 5 | Yes | 0.44 ± 0.02 (0.47) |
| gpt-3.5-turbo | 10 | Yes | 0.44 ± 0.02 (0.48) |
| gpt-3.5-turbo | 15 | Yes | 0.42 ± 0.04 (0.49) |
| gpt-3.5-turbo | 0 | Yes | 0.45 |
2308.00675#70
2308.00675#72
2308.00675
[ "2302.13971" ]
2308.00675#72
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
Image editing. We provide more image editing examples achieved by zero-shot prompting with tool docs in Figure 12. In particular, we show that with tool docs, we are able to reproduce the image editing examples achieved by VisProg [19] without using any few-shot demos, whereas VisProg relies on 10 task-specific few-shot demos.

Instruction: Replace man in black henley (person) with brick wall
Program:
OBJ0=SEG(image=IMAGE)
OBJ1=SELECT(image=IMAGE,object=OBJ0,query='man wearing a black henley',category='person')
IMAGE0=REPLACE(image=IMAGE,object=OBJ1,prompt='brick wall')
FINAL_ANSWER=RESULT(var=IMAGE0)

Instruction: Hide Salman and Aamir's faces with :p, Shahrukh's faces with 8) and Hritik's with ;)
Program:
OBJ0=FACEDET(image=IMAGE)
OBJ1=SELECT(image=IMAGE,object=OBJ0,query='Salman Khan,Aamir Khan,Shahrukh Khan,Hritik Roshan',category='face')
OBJ2=SELECT(image=IMAGE,object=OBJ1,query='Salman Khan,Aamir Khan',category='face')
OBJ3=SELECT(image=IMAGE,object=OBJ1,query='Shahrukh Khan',category='face')
OBJ4=SELECT(image=IMAGE,object=OBJ1,query='Hritik Roshan',category='face')
IMAGE0=EMOJI(image=IMAGE,object=OBJ2,emoji='face_with_tongue')
IMAGE1=EMOJI(image=IMAGE0,object=OBJ3,emoji='smiling_face_with_sunglasses')
IMAGE2=EMOJI(image=IMAGE1,object=OBJ4,emoji='winking_face')
FINAL_ANSWER=RESULT(var=IMAGE2)

Instruction: Create a colorpop of the man in black henley and also blur the background
Program:
OBJ0=SEG(image=IMAGE)
OBJ1=SELECT(image=IMAGE, object=OBJ0, query='man wearing black henley', category='person'
2308.00675#71
2308.00675#73
2308.00675
[ "2302.13971" ]
2308.00675#73
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models
) IMAGEO=COLORPOP(image=IMAGE, object=OBJ1) IMAGE1=BGBLUR(image=IMAGEO, object=OBJ1) FINAL_ANSWER=RESULT(var=IMAGE1) Figure 12: Image editing examples by zero-shot prompting gpt-3.5-turbo with tool docs. Zero- shot prompting with docs is able to reproduce the results achieved by VisProg using few-shot demos [19]. 23
2308.00675#72
2308.00675
[ "2302.13971" ]
2308.00436#0
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
arXiv:2308.00436v3 [cs.AI] 5 Oct 2023

# SELFCHECK: USING LLMS TO ZERO-SHOT CHECK THEIR OWN STEP-BY-STEP REASONING

# Ning Miao1*, Yee Whye Teh1, Tom Rainforth1

ABSTRACT

The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
2308.00436#1
2308.00436
[ "2206.02336" ]
2308.00436#1
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
# INTRODUCTION

Recent years have witnessed dramatic changes in the areas of NLP and AI brought on by significant advances in LLMs. From GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022), Llama (Touvron et al., 2023) and Falcon (Almazrouei et al., 2023) to GPT-4 (OpenAI, 2023) and PaLM-2 (Google, 2023), the increasing model sizes and exploding amount of training data have empowered LLMs to achieve human-level performance on a large range of tasks, including summarization, translation, and question answering. The invention of Chain-of-Thought prompting (CoT, Wei et al. (2022)) has further enhanced LLMs' ability to solve complex problems by generating step-by-step solutions.

However, the performance of even the largest LLMs is still unsatisfactory on more difficult reasoning problems. For example, GPT-4 with CoT prompting only correctly answers 42.5% of problems in the MATH dataset (Bubeck et al., 2023; Hendrycks et al., 2021), which is far below human level. Such problems require careful and extensive multi-step reasoning to solve, and LLMs are consequently prone to make mistakes: even though their error rate on individual steps may be low, the probability of generating at least one erroneous step can still be quite high, undermining the final answer.

Recent works have tried to overcome this limitation by checking for errors in these step-by-step solutions (Cobbe et al., 2021; Li et al., 2022; Ling et al., 2023). Such checks can then be used to provide confidence scores in answers and select between different possible alternatives. This checking has typically been performed either by using an external verification model (Cobbe et al., 2021; Lyu et al., 2023; Peng et al., 2023), or through few-shot in-context learning (Brown et al., 2020) of an LLM (Weng et al., 2022; Ling et al., 2023). Unfortunately, existing methods generally require extra training data and/or domain-specific exemplars, which often makes them inconvenient to use in practice and restricts them to specific domains or data formats.
2308.00436#0
2308.00436#2
2308.00436
[ "2206.02336" ]
2308.00436#2
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The aim of our work is thus to instead provide a general-purpose, zero-shot approach to checking that relies only on the original LLM, without the need for additional external resources. To this end, we introduce SelfCheck, a zero-shot step-by-step checker for self-identifying errors in LLM reasoning chains. SelfCheck uses the LLM to individually check the conditional correctness of each step in the chain based on the preceding steps, in a manner similar to a human going back to check their working. The results of these individual checks are then integrated to form an overall correctness estimation for the whole reasoning chain. Key to SelfCheck's success is a novel mechanism for performing the checking of individual steps. As we will show, the naive approach of directly asking the LLM to check a step is typically ineffective. Instead, we introduce a multi-stage approach that breaks the problem down into a series of simpler
2308.00436#1
2308.00436#3
2308.00436
[ "2206.02336" ]
2308.00436#3
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
1Department of Statistics, University of Oxford. *Email: <[email protected]>.

tasks, leverages the generative strengths of the LLM, and decorrelates errors between the original generation and checking. Specifically, using separate calls to the LLM we first extract the target and relevant context for the step, then regenerate an independent alternative step from these, and finally compare the two. The original step is then deemed to pass the check if it matches the regeneration.

Besides providing an estimation of correctness for each solution, SelfCheck can also boost final answer accuracies for the original questions by weighted voting. Namely, given multiple solutions to a question, it uses confidence scores as weights to vote among the answers, which provides a soft way to focus on more accurate solutions.

We evaluate SelfCheck on three math tasks, namely GSM8K (Cobbe et al., 2021), MathQA (Amini et al., 2019), and MATH (Hendrycks et al., 2021). For all datasets, we find that using SelfCheck achieves a significant increase in final answer accuracies compared with simple majority voting and other baselines. We also see that SelfCheck provides an accurate confidence estimation for the LLM's solutions, which decreases the proportion of incorrect solutions by 9%, 22.8%, and 16.2% on the three datasets respectively when filtering out solutions with low confidence scores. We further perform a number of ablations to justify some of our key design choices in the SelfCheck approach.

To summarize, we introduce SelfCheck as a novel and effective zero-shot schema for self-checking step-by-step reasoning in LLMs. Unlike previous methods, SelfCheck does not need any finetuning or example crafting, so it can be directly applied to reasoning tasks in different domains. Our experiments confirm that it can, in turn, be used to improve the final predictive performance of LLMs. Our code is available at https://github.com/NingMiao/SelfCheck.

# 2 RELATED WORK

How to automatically check the correctness of a sequence of reasoning steps is a long-standing question. We now discuss how previous methods have tried to tackle this in an LLM context. We note that none of these works are able to work in the zero-shot setting covered by SelfCheck, requiring either problem-specific examples, an external model, and/or finetuning.
2308.00436#2
2308.00436#4
2308.00436
[ "2206.02336" ]
2308.00436#4
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Few-shot verification. Though our focus will be on zero-shot checking, for some problems one may have hand-crafted exemplars available that are specifically designed for that particular question-answering task. Previous methods have been designed to perform checking of LLMs' generated solutions in this few-shot checking scenario. For example, the Self-Verification (SV) approach of Weng et al. (2022) verifies the whole solution by backward prediction. That is, it uses the conclusion from CoT reasoning to predict a masked condition in the question. However, it only supports single-step checking and is based on the assumption that every piece of information in the question can be recovered using a correct solution of it, which is often not the case. Consequently, it is only applicable to simpler tasks, such as GSM8K. The Deductive Verification (DV) approach of Ling et al. (2023) instead looks to verify independent sub-tasks, as per SelfCheck. However, its verifier only supports checking reasoning chains in a special format called Natural Programs. As a result, it can only work with a specific specialised generator, without serving as a general verifier for multi-step reasoning.

Verification with external resources. In some cases, there might be external resources available to verify the logical correctness or faithfulness of LLM outputs. Lyu et al. (2023) translate a question into a symbolic reasoning chain using an LLM and solve the problem by a symbolic logic solver. Peng et al. (2023) introduced an external database to check for incorrect knowledge in LLM outputs. These methods are limited by the availability of external resources and are typically restricted to checking for certain types of errors.

Training/finetuning a verifier. A few other methods train or finetune a separate verifier model to check reasoning chains. Cobbe et al. (2021) finetuned a GPT-3 model on GSM8K to predict the correctness of a solution as a whole. Li et al. (2022) trained a binary deberta-v3-large (He et al., 2020) classifier on each domain to predict step correctness. More recently, Lightman et al. (2023) built a large dataset, which contains step-wise correctness labels from human labelers, and finetuned a GPT-4 model on it. Unlike SelfCheck, all of these methods require extra data and external computational resources, restricting their applicability and ease of use.
2308.00436#3
2308.00436#5
2308.00436
[ "2206.02336" ]
2308.00436#5
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Figure 1 (panel text; see caption below):
Step checking
Question: Let S be the set of complex numbers $z$ such that the real part of $\frac{1}{z}$ is equal to $\frac{1}{6}$. This set forms a curve. Find the area of the region inside the curve.
Multi-step reasoning:
Step 0: Let $z=x+yi$ be a complex number, where $x$ and $y$ are real numbers.
Step 1: We are given that the real part of $\frac{1}{z}$ is equal to $\frac{1}{6}$.
Stage 1 (Target extraction): The step completes the square to rewrite the equation in standard form of a circle.
Stage 2 (Information collection): The next step "..." directly follows from Step 4.
Stage 3 (Step regeneration): To rewrite the equation in standard form of a circle, we need to complete the square for both $x$ and $y$. ... Simplifying: $(x-3)^2 + y^2 = 9$. This is now in the standard form of a circle with center $(3,0)$ and radius $3$.
2308.00436#4
2308.00436#6
2308.00436
[ "2206.02336" ]
2308.00436#6
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Stage 4 (Result comparison): In both solutions, ... However, in Solution 1, the completion of the square results in the center of the circle being $(3,0)$ and a radius of $3$. In Solution 2, the completion of the square results in the center of the circle being $(3/2,0)$ and a radius of $3/2$. Therefore, Solution 1 contradicts the conclusion in Solution 2 as the centers and radii of the circles are different.
Step 4: Cross-multiplying and rearranging, we get the equation $6x = x^2 + y^2$.
Step 5: Completing the square, we obtain $\left(x-\frac{3}{2}\right)^2 + y^2 = \frac{9}{4}.$
Step checking results: 1, 1, 1, 1, 1, -1, 1, 1
2308.00436#5
2308.00436#7
2308.00436
[ "2206.02336" ]
2308.00436#7
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Integration function: Confidence score: w Figure 1: Example of using SelfCheck, focusing on the checking of a particular step (Step 5). To check the correctness of the step, SelfCheck goes through 4 stages. First, in the target extraction stage, it figures out that the main purpose of Step 5 is to complete the square. In the information collection stage, it then establishes that Step 5 only directly relies on Step 4. Next, the step regeneration stage instructs the LLM to complete the square independently, only using Step 4 as context. The regeneration result shows that the center and radius of the circle are (3, 0) and 3, which is different from what is implied by the original Step 5. Consequently, the result comparison stage concludes that Step 5 is likely to be wrong. After checking all the steps, SelfCheck integrates the results to form an overall confidence score, w. See Appendix A for a complete version of the example. # 3 SELFCHECK: USING LLMS TO CHECK THEIR OWN REASONING Rather than relying on external resources or problem-specific data like the aforementioned approaches, it would be highly beneficial if we could develop self-contained checking schemes that require only the original LLM itself. In other words, we would like to use the LLM to identify errors in its own step-by-step reasoning, analogously to how a human might go back to check their working. Unfortunately, directly asking the LLM to check its own reasoning is largely ineffective: it almost invariably declares that the original answer is correct, with Ling et al. (2023) finding answers checked in this way are deemed correct more than 90% of the time regardless of whether they actually are. As we will show in Section 5, individually prompting the LLM to check each step in the CoT reasoning fares slightly better, but is still only able to offer marginal gains compared to not checking at all. A more nuanced method to perform this checking is thus required. To this end, we introduce SelfCheck, a general-purpose, zero-shot, checking schema for self-identifying errors in LLM CoT reasoning. Given a question, q, and its step-by-step solution, s, produced by some generator (which will generally be an LLM with appropriate CoT prompting), SelfCheck considers each step of s in turn and tries to establish its individual correctness based on the preceding steps.
2308.00436#6
2308.00436#8
2308.00436
[ "2206.02336" ]
2308.00436#8
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
This checking is done by leveraging an LLM (which can either be the same LLM used to generate s or a separate one), but rather than directly asking the LLM to perform the check, we instead introduce a novel step checking method (see Section 3.1) that exploits their generative modeling strengths. The results of the checks on individual steps are then combined into a single confidence score, w â [0, 1], for the whole solution. These confidence scores, in turn, allow us to improve predictive performance, by using them to perform weighted voting on multiple solutions to the same question.
2308.00436#7
2308.00436#9
2308.00436
[ "2206.02336" ]
2308.00436#9
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
# 3.1 STEP CHECKING

To check individual steps of the reasoning process, the first thing we should note is that the correctness of each step is highly dependent on its context, namely the question and previous steps in the solution. For example, we usually need to refer to previous steps for the definition of variables and the meaning of specific numbers. If each step is conditionally correct based on the provided context and the last step provides an answer in the required format, then the overall reasoning will itself be correct. The target of the step checking is thus simply to check the conditional correctness of each step based on the provided context. That is, we only care about catching errors at the current step, and can assume all information from its context to be correct.

A simple idea to try and achieve this would be to feed the current step as well as all its context to an LLM and directly ask it to "check the correctness of the step". However, in practice, we find that this task is too difficult for the LLM to do effectively, even with careful prompting that exemplifies how to do the checking in detail (see Section 5). This difficulty comes first from the fact that there are multiple aspects to the checking problem that the checker must deal with simultaneously: it needs to understand the key content in the step and then collect all related information from the context, before actually checking for its correctness.
2308.00436#8
2308.00436#10
2308.00436
[ "2206.02336" ]
2308.00436#10
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Second, "checking" is a less common task in the training corpus of most LLMs, such that it is a problem that does not necessarily play to their strengths. Finally, there are likely to be strong correlations between the errors such a checker will make with the errors made in the original generation, undermining its usefulness. To address these difficulties, SelfCheck instead decomposes the checking task for each step into four stages: target extraction, information collection, step regeneration, and result comparison. The LLM is used to execute each stage successively, with the outcome of the result comparison providing the correctness prediction. The idea behind this decomposition is to make the LLM focus on an easier task at each stage and ensure the individual tasks carried out are more closely aligned to the LLM'
2308.00436#9
2308.00436#11
2308.00436
[ "2206.02336" ]
2308.00436#11
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
s strengths. Moreover, by focusing on regenerating and then comparing, we hope to reduce the correlations between the errors of the checking and the original generation. At a high level, the stages work by first prompting the LLM to figure out the target of the current step and what information it uses to achieve the target; we find that the LLM is usually able to perform these tasks extremely accurately. Then we ask the LLM to re-achieve the target using only the collected information, providing an alternative to the original step that maintains the same purpose in the overall reasoning process. Here the clear description of the target and the simplified context we provide make the regeneration stage less challenging. As a result, we hope its output will be more reliable and thus serve as a useful reference. Even if this is not the case, it will still hopefully provide a viable alternative, with a distinct generation, that can be used for comparison. The last stage then uses the LLM to compare the original step with the regenerated output. If their main conclusions match/mismatch, this provides evidence that the original step was correct/incorrect. A worked example of this step-checking process is provided in Figure 1. In the following, we describe each of the subtasks in detail and provide our specific instructions to the LLM. We note here that the different LLM queries are made independently, rather than keeping the queries and answers from previous stages in context. Thus, for example, when the LLM is called to carry out the step regeneration, it does not have access to the original generation. The same prompts are used across LLMs and datasets, thereby providing a general-purpose approach. Target extraction To check a step (for example, Step 5 in Figure 1), we first need to figure out what the step is trying to achieve. Without a specific target, the regeneration stage would proceed in a random direction, making it impossible to serve as a reference to the original step. We thus use the LLM itself to extract the target of a step using the question and all previous steps (Steps 0-4 in Figure 1) with the following prompt (we omit some line breaks due to space limitations): The following is a part of the solution to the problem [Question]: [Step 0,..., Step i]. What specific action does the step [Step i] take? Please give a brief answer using a single sentence and do not copy the steps.
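To show how the bracketed slots of this prompt are populated in practice, here is a small illustrative helper; the template is abbreviated relative to the prompt above and the function name is our own, not part of the paper.

```python
# Sketch of slot filling for the target-extraction prompt. The template string
# is abbreviated; see the full prompt quoted above for the exact wording.
TARGET_PROMPT = (
    "The following is a part of the solution to the problem [Question]: [Steps]. "
    "What specific action does the step [Step i] take? Please give a brief answer "
    "using a single sentence and do not copy the steps."
)

def fill_target_prompt(question: str, steps: list[str], i: int) -> str:
    steps_text = " ".join(f"Step {j}: {s}" for j, s in enumerate(steps[: i + 1]))
    return (TARGET_PROMPT
            .replace("[Question]", question)
            .replace("[Steps]", steps_text)
            .replace("[Step i]", steps[i]))
```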
2308.00436#10
2308.00436#12
2308.00436
[ "2206.02336" ]
2308.00436#12
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
During execution, we copy the question and steps into [Question] and [Step 0, ..., Step i] to form the actual input to the LLM. The reason for requesting a brief answer is to try and keep the amount of information retained to the minimum needed, thereby avoiding unnecessary influence on the regeneration and hopefully reducing correlations in errors in turn. Information collection To reduce the difficulty of the regeneration stage and avoid unrelated information from affecting the result, we filter out information that is not directly related to the current step. Specifically, we ask the LLM to select useful items from the question and all previous steps with the following prompt, where [Information j] is simply the j-th sentence in the question: This is a math question: [Question]. The following is information extracted from the question: Information 0: [Information 0] The following are the first a few steps in a solution to the problem: Step 0: [Step 0] Which previous steps or information does the next step [Step i] directly follow from? After retrieving the free-text response from the LLM, we extract step or information ids by regular expression. For example in Figure 1, the current step requires Step 4 and no information from the question as context. The selected steps and information are then fed into the regeneration stage. Step regeneration Given the target and necessary information of the step, we can now ask the LLM to achieve the target independently with only the collected information, without seeing the original step. Because the step is usually a small jump from previous conclusions, and the information collection stage has already filtered out irrelevant information, we can usually trust the regeneration results. The prompt for this stage is: We are in the process of solving a math problem. We have some information from the problem: Information 0: [Information I0] The following are some previous steps: Step 0: [Step S0] The target for the next step is: [Target] Please try to achieve the target with the information from the problem or previous steps. Here [Target] is the output from the target extraction stage. [Information Ii] and [Step Si] correspond to the specific items selected by the information collection stage. In Figure 1, only Step 4 and no information from the question is directly related to the current step, so SelfCheck simply copies the content of Step 4 into [Step S0] and removes the block containing [Information Ii].
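The regular-expression extraction of the referenced ids can be as simple as the following sketch; this is our own illustration rather than the authors' exact implementation.

```python
import re

# Pull referenced step / information ids out of a free-text response such as
# "The next step directly follows from Step 4." (illustrative sketch only).
def extract_ids(response: str) -> tuple[list[int], list[int]]:
    step_ids = [int(m) for m in re.findall(r"Step\s+(\d+)", response)]
    info_ids = [int(m) for m in re.findall(r"Information\s+(\d+)", response)]
    return sorted(set(step_ids)), sorted(set(info_ids))

# Example: extract_ids("The next step directly follows from Step 4.") -> ([4], [])
```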
2308.00436#11
2308.00436#13
2308.00436
[ "2206.02336" ]
2308.00436#13
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Result comparison The last step is to compare results from the regeneration stage and the original step with the following prompt: The following are 2 solutions to a math problem. Solution 1: [Regenerated step] Solution 2: [Step i] Compare the key points from both solutions step by step and then check whether Solution 1 "supports", "contradicts" or "is not directly related to" the conclusion in Solution 2. Pay special attention to the difference in numbers. If the regeneration output "supports"/"contradicts"
2308.00436#12
2308.00436#14
2308.00436
[ "2206.02336" ]
2308.00436#14
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
the original step, we can conclude that the original step is likely correct/incorrect respectively. Sometimes, the correctness of the original step cannot be directly inferred from the regeneration output. For example, when the target is to simplify an equation, then there may be multiple valid solutions. In such cases, we are not sure about the correctness of the original step, which makes "is not directly related to" the third possible outcome of the check. 3.2 RESULTS INTEGRATION After running step-checking and getting a checking result for each step, we need an integration function φ to give a confidence score, w ∈ [0, 1], for the overall correctness of the solution. The input of φ should be a vector in the form of [r0, r1, ..., rn], where each item ri represents the step checking result for Step i. We will use ri = −1, 0, and 1 to represent the step-checking results "
2308.00436#13
2308.00436#15
2308.00436
[ "2206.02336" ]
2308.00436#15
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
contradict", "is not directly related to" and "support" respectively. We find that the following simple integration function works well in practice: $w = \phi([r_0, r_1, \ldots, r_n]) = 2\,\mathrm{Sigmoid}\!\left(-\lambda_{-1}\sum_{i=0}^{n} \mathbb{1}_{r_i=-1} - \lambda_0 \sum_{i=0}^{n} \mathbb{1}_{r_i=0}\right)$, (1) where $\lambda_{-1}$ and $\lambda_0$ are two non-negative hyperparameters with $\lambda_{-1} > \lambda_0$; we fix $\lambda_{-1} = 1$ and $\lambda_0 = 0.3$ in our experiments. The rationale of this setup is that the more failed checks we see, the more likely the overall reasoning process, and thus final solution, are wrong. Note here that, because the checks are themselves imperfect, we do not necessarily want to immediately reject the whole solution from a single step-check failure, especially for $r_i = 0$ cases. This is why we take a "soft"
2308.00436#14
2308.00436#16
2308.00436
[ "2206.02336" ]
2308.00436#16
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
approach to the verification with a confidence score. The number of successful checks, i.e. $\sum_{i=0}^{n} \mathbb{1}_{r_i=1}$, is deliberately not included in our integration function as an increased number of successful checks does not actually increase our confidence in the overall solution: shorter reasoning chains are generally preferable to longer ones for a given question and LLM. Once calculated, the resulting confidence score can be directly used as a weight for voting between different possible solutions. We can thus use SelfCheck to increase the accuracy of an LLM's answers by generating multiple possible solutions, calculating confidence scores for each, and then choosing our final answer through weighted voting. # 4 EXPERIMENTS We now run experiments on three math-reasoning datasets to evaluate SelfCheck's effectiveness in checking multi-step reasoning and improving final answer accuracies. Note here that our focus on math-reasoning problems is due to ease of performance evaluation and dataset availability; SelfCheck is directly applicable to other question-answering problems with nominal changes to our prompts. Datasets GSM8K (Cobbe et al., 2021), MathQA (Amini et al., 2019), and MATH (Hendrycks et al., 2021) consist of math problems at primary school, middle school, and competition levels, containing 1319, 2985, and 5000 test samples, respectively. For GSM8K and MathQA, we evaluate SelfCheck on the whole test sets. Due to limited resources, we use a subset of the MATH test set taken from Ling et al. (2023).1 Besides the levels of difficulty, the three datasets differ from each other in the following aspects. Firstly, MathQA provides 5 options to choose from for each problem, while GSM8K and MATH have no options. Secondly, GSM8K only has arithmetic problems, while MathQA and MATH contain more diverse problems in geometry, physics, probability, and algebra. LLMs We use GPT-3.5 (gpt-3.5-0301) and GPT-4 (gpt-4-0613) as our LLMs, focusing in particular on the former due to budget restrictions. Note that the same prompts are used for all datasets with both LLMs during evaluation; no dataset-specific customization or tuning has been performed.
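Putting the pieces together, the sketch below implements the verdict-to-score mapping described in Section 3.1 and Appendix A, the integration function of Eq. (1) with the hyperparameters above, and confidence-weighted voting. It is an illustration under our own naming, not the authors' code, and the example answers and verdicts are made up.

```python
import math
from collections import defaultdict

LAMBDA_NEG1, LAMBDA_0 = 1.0, 0.3  # hyperparameter values stated in Section 3.2

def verdict_to_score(comparison_output: str) -> int:
    """Map the result-comparison verdict to r_i in {-1, 0, 1} via its last line."""
    lines = comparison_output.strip().splitlines() or [""]
    last = lines[-1].lower()
    if "contradict" in last:   # checked first so outputs mentioning both words count as -1
        return -1
    if "support" in last:
        return 1
    return 0  # "is not directly related to", or anything inconclusive

def integrate(step_results: list[int]) -> float:
    """Eq. (1): solution-level confidence w in [0, 1] from per-step results."""
    n_fail = sum(1 for r in step_results if r == -1)
    n_unsure = sum(1 for r in step_results if r == 0)
    s = -LAMBDA_NEG1 * n_fail - LAMBDA_0 * n_unsure
    return 2.0 / (1.0 + math.exp(-s))  # 2 * Sigmoid(s)

def weighted_vote(solutions: list[tuple[str, float]]) -> str:
    """Choose a final answer by summing confidences per distinct answer."""
    totals = defaultdict(float)
    for answer, confidence in solutions:
        totals[answer] += confidence
    return max(totals, key=totals.get)

# Example: one fully supported solution outweighs two partially contradicted ones.
w_good = integrate([1, 1, 1, 1])   # = 1.0
w_bad = integrate([1, 0, -1, 1])   # ~ 0.43
print(weighted_vote([("9*pi", w_good), ("9*pi/4", w_bad), ("9*pi/4", w_bad)]))  # -> "9*pi"
```

Majority voting is recovered as the special case where every confidence is set to 1.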
2308.00436#15
2308.00436#17
2308.00436
[ "2206.02336" ]
2308.00436#17
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
When devising the prompts, a small number of training samples from the MathQA dataset were utilized. Baselines We use majority voting (also known as Self-Consistency Decoding (Wang et al., 2022) in the context of CoT reasoning) as our main baseline following Ling et al. (2023) and Lightman et al. (2023). Despite its simplicity, this is still quite a strong baseline in the current literature. In particular, most existing few-shot methods report similar results compared with it (Weng et al., 2022; Ling et al., 2023). We also compare with previously quoted results from Self-Verification (SV, Weng et al. (2022)) and Deductive Verification (DV, Ling et al. (2023)) when possible. We note though that these approaches are not directly comparable to SelfCheck in general, as they require additional exemplars which will often not be available in practice. Despite this, we will find that SelfCheck outperforms them when comparisons are possible. We omit results from Faithful-CoT (Lyu et al., 2023), because it has already been shown to decrease the accuracies on GSM8K and MATH by 11.8% and 4.2%, respectively, compared to majority voting (Ling et al., 2023). It is also impossible for us to compare with training/finetuning based methods such as Lightman et al. (2023), because we have neither access to their finetuned models nor the computational resources to repeat their training/finetuning. The significant extra data and resources they require also mean their contributions are somewhat tangential to SelfCheck regardless.
2308.00436#16
2308.00436#18
2308.00436
[ "2206.02336" ]
2308.00436#18
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
4.1 FINAL ANSWER CORRECTNESS Figure 2 shows the performance gains using the confidence scores from SelfCheck to do weighted voting compared with baseline methods. The upper plots show that accuracies of both SelfCheck and majority voting have the same increasing tendency as the number of generated solutions per question increases, which is a result of the variance reduction provided by averaging over more solutions. The bottom plots show the difference in accuracy between the two including the standard error in the estimate. We can see that by allocating higher weights to correct solutions, SelfCheck achieves significantly higher accuracies than majority voting for all solution numbers per question. We also find the improvements of SelfCheck (compared with majority voting) to be higher than Deductive Verification and Self-Verification in their reported settings, despite the use of in-context learning 1https://github.com/lz1oceani/verify_cot/tree/main/results/chatgpt3.5/ natural_program/MATH_np.json
2308.00436#17
2308.00436#19
2308.00436
[ "2206.02336" ]
2308.00436#19
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
(a) GSM8K (b) MathQA (c) MATH* Figure 2: The upper plots show the accuracies of SelfCheck and majority voting for different numbers of generated solutions per question with GPT-3.5. The lower plots show the accuracy gaps between each method and majority voting, where DV and SV stand for Deductive Verification (Ling et al., 2023) and Self-Verification (Weng et al., 2022), respectively. It is difficult to compare with DV and SV with respect to absolute accuracies because they are using different generator models. However, we can see that SelfCheck achieves higher relative performance gains than both in their reported settings.
2308.00436#18
2308.00436#20
2308.00436
[ "2206.02336" ]
2308.00436#20
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Table 1: SelfCheck significantly increases final answer accuracies with both GPT-3.5 and GPT-4, even when we only have 2 candidate solutions for each question. ΔAcc is the performance gain of SelfCheck compared with majority voting (MV), with the ± indicating the standard error. ✗✗, ✗✓ and ✓✓ represent the proportions of questions with 0, 1 or 2 correct solutions. We see that the gains from SelfCheck are typically larger in cases where it is common for only one of the solutions to be correct, as these are the cases where weighted voting can influence the final answer.
2308.00436#19
2308.00436#21
2308.00436
[ "2206.02336" ]
2308.00436#21
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
| Dataset | Generator | Checker | ✗✗ (%) | ✗✓ (%) | ✓✓ (%) | Acc(MV, %) | Acc(SelfCheck, %) | ΔAcc (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GSM8K | GPT-3.5 | GPT-3.5 | 16.8 | 23.0 | 60.2 | 71.7 | 74.3 | 2.8±0.9 |
| GSM8K | GPT-4 | GPT-4 | 8.8 | 8.2 | 83.0 | 87.1 | 86.9 | -0.2±0.2 |
| GSM8K | GPT-4 | GPT-3.5 | 8.8 | 8.2 | 83.0 | 87.1 | 88.1 | 1.0±0.3 |
| MathQA | GPT-3.5 | GPT-3.5 | 27.6 | 26.4 | 46.0 | 59.2 | 64.6 | 5.4±1.1 |
| MathQA | GPT-4 | GPT-4 | 16.2 | 11.0 | 72.8 | 78.3 | 80.9 | 2.6±0.4 |
| MathQA | GPT-4 | GPT-3.5 | 16.2 | 11.0 | 72.8 | 78.3 | 81.2 | 3.0±0.4 |
| MATH* | GPT-3.5 | GPT-3.5 | 52.6 | 23.2 | 24.2 | 35.8 | 38.0 | 2.2±0.7 |
| MATH* | GPT-4 | GPT-4 | 42.0 | 20.2 | 37.8 | 47.9 | 51.3 | 3.4±0.6 |
| MATH* | GPT-4 | GPT-3.5 | 42.0 | 20.2 | 37.8 | 47.9 | 48.9 | 1.0±0.8 |
2308.00436#20
2308.00436#22
2308.00436
[ "2206.02336" ]
2308.00436#22
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
from additional examples. We will perform additional ablations on how performance changes when ensembling over a larger number of solutions in Section 5.1. To investigate the effect of using more powerful LLMs, and of using a different LLM for the generation and checking, we further conducted experiments with GPT-4 and a mix of GPT-4 and GPT-3.5. Because of the high cost of calling the GPT-4 API, we randomly sample 500 questions from each dataset to form the test sets and generate 2 (instead of 10) answers to each question. In Table 1, we see that SelfCheck significantly outperforms majority voting with both GPT-3.5 and GPT-4. We also notice that using GPT-3.5 to check GPT-4 generated answers yields surprisingly good results, actually outperforming checking with GPT-4 on the simpler GSM8K and MathQA tasks. This is likely because using different LLMs helps to further decorrelate the errors of the generator and the checker, and shows that using a cheaper LLM can still often be sufficient for the checking. For the more difficult problems in MATH, using GPT-4 as checker always produces better results, but even here the checking from GPT-3.5 is beneficial compared to doing no checking at all. 4.2 VERIFICATION PERFORMANCE Besides serving as a confidence score calculator to improve the performance of voting, SelfCheck can also predict the correctness of a single solution. To do so, we simply set a threshold t on the confidence score, where solutions with confidence scores w ≥ t are classified as correct. (a) GSM8K (b) MathQA (c) MATH* Figure 3: When raising the classification threshold t, the proportions of real correct solutions in predicted correct solutions (Real + in Pred +) increase for GSM8K (67.5%→76.5%), MathQA (59.4%→82.2%) and MATH (34.6%→50.8%). Figure 4 shows the ROC curves for each dataset.
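A minimal sketch of this thresholding, with hypothetical confidence values and ground-truth labels, is shown below; the helper name and variable names are ours.

```python
# Classify single solutions as correct when their confidence clears threshold t,
# and compute true/false positive rates against ground-truth correctness labels.
def tp_fp_rates(confidences: list[float], labels: list[bool], t: float) -> tuple[float, float]:
    preds = [w >= t for w in confidences]
    tp = sum(1 for p, y in zip(preds, labels) if p and y)
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / max(pos, 1), fp / max(neg, 1)

# Raising t trades recall of correct solutions for a higher precision
# ("Real + in Pred +"), as illustrated in Figure 3.
```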
2308.00436#21
2308.00436#23
2308.00436
[ "2206.02336" ]
2308.00436#23
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
As a comparison, directly prompting GPT-3.5 to verify whole reasoning chains leads to no meaningful control on the false and true positive rates (FP and TP): they are always both 100% on MATH and 98% on GSM8K, as observed by Ling et al. (2023). In other words, the checker always predicts the answer as correct, providing no useful information. As well as verification accuracies, we may also care about the solution quality after filtering out solutions with low confidence scores w. Figure 3 shows that by increasing the threshold t, SelfCheck can filter out more incorrect solutions, such that a higher proportion of the solutions that pass the check are indeed correct (Real + in Pred +). Though this is at the cost of misclassifying more of the real correct solutions as incorrect, this can be a useful feature in cases where the risk of choosing an incorrect solution is higher than rejecting a correct one. Figure 4: True positive rates (TP) vs. false positive rates (FP) as the classification threshold, t, is varied. # 5 ANALYSIS We now perform some ablations to justify some of the key design choices made by SelfCheck and provide insights on its behavior. Limited by budget and time, all experiments in this section are performed on a subset of the MathQA test set with 100 randomly selected questions.
2308.00436#22
2308.00436#24
2308.00436
[ "2206.02336" ]
2308.00436#24
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
[Figure 5 plot: accuracy vs. #Solutions per question for Majority Voting and SelfCheck; caption below.] 5.1 MORE SOLUTIONS PER QUESTION? Serving as a method to reduce variance, majority voting increased final answer accuracies on different datasets when we increased from 2 to 10 solutions in Figure 2. In cases where we only care about final predictive performance, one might thus question whether it is better to simply use our computational resources to keep increasing the size of this ensemble, rather than relying on a checking scheme. Figure 5: SelfCheck achieves significantly higher final answer accuracies than majority voting for large ensembles of solutions. However, as shown in Figure 5, this effect saturates for larger solution ensembles, with the accuracy of majority voting never going above that achieved when n = 9, thereby never reaching the performance we already achieved by SelfCheck for the smaller ensemble. Moreover, the performance of SelfCheck continues to increase as the ensemble grows. By lowering the weights (confidence) of incorrect solutions, SelfCheck increases the chance of selecting the correct answers, even when their generation probabilities in the generator LLM are low. Therefore, with SelfCheck, LLMs can effectively rectify their own biased beliefs by themselves.
2308.00436#23
2308.00436#25
2308.00436
[ "2206.02336" ]
2308.00436#25
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
5.2 ABLATION STUDIES In order to pick apart the effect of several critical design choices for SelfCheck, we compare SelfCheck with some of its variants with respect to final answer and verification accuracies on MathQA. Global vs. step-by-step checking The first question is whether we can simply ask an LLM to check the whole solution without taking steps into consideration. To answer it, we prompt the LLM to perform global checking with the following instruction: The following is a question and a solution to it from a student. Carefully check whether the solution is correct step by step. End your response with your conclusion that starts with "Correct", "Wrong" or "Not Sure". Question: [Question] Solution: [Step 0, Step 1,..., Step n] Similar to the findings of Ling et al. (2023), we find that the global checker outputs "correct" most of the time and rarely recognizes an error. Consequently, its final answer accuracies are very close to majority voting (in Figure 6) and its verification accuracy (55.0%) is only marginally above random guessing (50.0%). This inability to deal with the difficulty of global checking is what makes step checking necessary. Single-stage vs. multiple-stage step checking Next, we ask whether we really need to decompose the step checking into several stages. To answer this, we design the following prompt to use the LLM directly: Figure 6: Generation accuracies for variants of SelfCheck on MathQA with GPT-3.5 (accuracy vs. #Solutions per question). The following is a question and the first a few steps in its solution. Question: [Question] Solution: [Step 0, Step 1,..., Step i-1] Check the correctness of the next step: [Step i] Please consider the information it relies on and check step by step. Please end your response with your conclusion that starts with "Correct", "Wrong" or "Not Sure". Figure 6 and Table 2 show that although this is better than global checking, it is still significantly worse than SelfCheck with its multi-stage checking. This indicates that checking a step in a single stage is still too challenging for the LLM, so it is necessary to further decompose step checking into a pipeline of easier sub-tasks.
2308.00436#24
2308.00436#26
2308.00436
[ "2206.02336" ]
2308.00436#26
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Table 2: Verification accuracies for variants of SelfCheck on MathQA with GPT-3.5. The reported verification accuracy is the average of the true positive and true negative rates.

| Method | Accuracy (%) |
| --- | --- |
| SelfCheck | 66.7 |
| Global Check | 55.0 |
| Single-stage Check | 57.2 |
| Error Check (0-shot) | 63.1 |
| Error Check (1-shot) | 64.2 |

Error check vs. regenerate and compare We now justify the choice to perform step regeneration and comparison instead of direct error checking for each step. To do so, we replace our regeneration stage and comparison stage with a single error-checking stage. We first compare with a zero-shot version of the variant with the following prompt: Given the following information: Information 0: [Information I0] Step 0: [Step S0] Step 1: [Step S1] Check the correctness of the next step [Step i] Please check for grounding errors, reasoning errors and calculation errors step by step. Please end your response with your conclusion that starts with "Correct", "Wrong" or "Not Sure". ...
2308.00436#25
2308.00436#27
2308.00436
[ "2206.02336" ]
2308.00436#27
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
We then add an exemplar from Ling et al. (2023) (see Appendix B) to make a more powerful one-shot error checker. However, results in Figure 6 and Table 2 show that even with a very detailed and instructive example, direct error checking still performs worse than our regenerate-and-compare approach, which supports our previous argument that LLMs are better at generation than checking. # 6 CONCLUSIONS In this paper, we have introduced SelfCheck, a general-purpose, zero-shot, step-by-step checking scheme for LLMs. Unlike previous approaches, SelfCheck does not require any additional data or external resources: it uses the LLM to identify errors in its own reasoning, leveraging a novel regenerate-and-compare approach. By using the results of this checking to perform weighted voting over different solutions, we find that SelfCheck is able to, in turn, increase final predictive accuracy.
2308.00436#26
2308.00436#28
2308.00436
[ "2206.02336" ]
2308.00436#28
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
# REFERENCES Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. Falcon-40B: an open large language model with state-of-the-art performance. 2023. Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi.
2308.00436#27
2308.00436#29
2308.00436
[ "2206.02336" ]
2308.00436#29
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2357–2367, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1245. URL https://aclanthology.org/N19-1245. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
2308.00436#28
2308.00436#30
2308.00436
[ "2206.02336" ]
2308.00436#30
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Language models are few-shot learners. Advances in neural information processing systems, 33:1877â 1901, 2020. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al.
2308.00436#29
2308.00436#31
2308.00436
[ "2206.02336" ]
2308.00436#31
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
2308.00436#30
2308.00436#32
2308.00436
[ "2206.02336" ]
2308.00436#32
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Google. Palm 2 technical report. arXiv preprint arXiv:2303.08774, 2023. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations, 2020. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
2308.00436#31
2308.00436#33
2308.00436
[ "2206.02336" ]
2308.00436#33
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the advance of making language models better reasoners. arXiv preprint arXiv:2206.02336, 2022. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe.
2308.00436#32
2308.00436#34
2308.00436
[ "2206.02336" ]
2308.00436#34
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023. Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. Deductive verification of chain-of-thought reasoning. arXiv preprint arXiv:2306.03872, 2023. Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. Faithful chain-of-thought reasoning. arXiv preprint arXiv:2301.13379, 2023. OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al.
2308.00436#33
2308.00436#35
2308.00436
[ "2206.02336" ]
2308.00436#35
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al.
2308.00436#34
2308.00436#36
2308.00436
[ "2206.02336" ]
2308.00436#36
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2022. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
2308.00436#35
2308.00436#37
2308.00436
[ "2206.02336" ]
2308.00436#37
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. Large language models are reasoners with self-verification. arXiv preprint arXiv:2212.09561, 2022. # APPENDIX A: A COMPLETE EXAMPLE OF SELFCHECK To help readers get a better idea of how SelfCheck works, we show a complete version of the example in Figure 1, including the exact inputs and outputs of the different stages. Please note that we have compiled all LaTeX code for readability, though the inputs and outputs to the LLMs are in their original form.
2308.00436#36
2308.00436#38
2308.00436
[ "2206.02336" ]
2308.00436#38
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The question is: Let S be the set of complex numbers z such that the real part of 1/z is equal to 1/6. This set forms a curve. Find the area of the region inside the curve. We feed the following instruction to the LLM that serves as the generator: Solve the following problem step by step. Please start each step with "Step :" and split sentences in each step with " ". Please finish your response with "So the answer is ...". Problem: Let S be the set of complex numbers z such that the real part of 1/z is equal to 1/6. This set forms a curve. Find the area of the region inside the curve.
2308.00436#37
2308.00436#39
2308.00436
[ "2206.02336" ]
2308.00436#39
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
After getting the following solution, we start checking Step 5 as an example: Solution: (Step 0) Let z = x + yi be a complex number, where x and y are real numbers. (Step 1) We are given that the real part of 1/z is equal to 1/6. (Step 2) Using the fact that the real part of a complex number a + bi is a, we have Re(1/z) = Re(1/(x + yi)) = x/(x^2 + y^2). (Step 3) Setting Re(1/z) = 1/6, we get x/(x^2 + y^2) = 1/6. (Step 4) Cross-multiplying and rearranging, we get the equation 6x = x^2 + y^2. (Step 5) Completing the square, we obtain (x − 3/2)^2 + y^2 = 9/4. (Step 6) This is the equation of a circle with center (3/2, 0) and radius 3/2. (Step 7) The area of the region inside the circle is 9π/4. So the answer is 9π/4.
2308.00436#38
2308.00436#40
2308.00436
[ "2206.02336" ]
2308.00436#40
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Variable definition extraction This optional stage extracts the definitions of variables that appear in the reasoning chain. SelfCheck feeds the definitions to the regeneration stage to avoid LLM not understanding the meaning of a variable. The input to the LLM is: The following is the solution to a math question: Question: Let S be the set of complex numbers z such that the real part of 1 6 . This set forms a curve. Find the area of the region inside the curve. Solution: (Step 0) Let z = x + yi be a complex number, where x and y are real numbers. ... (Step 7) The area of the region inside the circle is 9Ï
2308.00436#39
2308.00436#41
2308.00436
[ "2206.02336" ]
2308.00436#41
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
4 . So the answer is 9Ï 4 . Are there variables in the solution? the form of "1. variable is defined as...". If so, please list the definition of variable in The underlined parts are the type of question, the question itself and the steps in its solution, respectively. The output from the LLM is: Yes. There are variables in the solution. x + yi, where xxx and yyy are real numbers. x + yi 1. zzz is defined as a complex number of the form x + yi
2308.00436#40
2308.00436#42
2308.00436
[ "2206.02336" ]
2308.00436#42
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The bold part is then saved to form a part of the input in the regeneration stage. Target extraction To get a brief and clear target of the current step, the input to the LLM is: The following is a part of the solution to the problem: Let S be the set of complex numbers z such that the real part of 1 6 . This set forms a curve. Find the area of the 12 region inside the curve. (Step 0) Let z = x + yi be a complex number, where x and y are real numbers. (Step 5) Completing the square, we obtain (a â 3)? +y= Q. 3)? +y= Q. 2 . . . : 2 What specific action does the step "Completing the square, we obtain (a â 3) take? Please give a brief answer using a single sentence and do not copy the steps. 2 2 +y= 4 ." The underlined parts are the question and reasoning steps before the current one, including the current one. The output of the LLM is: The step completes the square to rewrite the equation in standard form of a circle. The whole sentence is saved and forms the most important part of the input in the regeneration stage. Information Collection To get sentences in the question and previous steps in the solution that are directly related to the current step, the input to the LLM is: This is a math question: Question: Let S be the set of complex numbers z such that the real part of 1 6 . This set forms a curve. Find the area of the region inside the curve.
2308.00436#41
2308.00436#43
2308.00436
[ "2206.02336" ]
2308.00436#43
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
The following is information extracted from the question: Information 0: Let S be the set of complex numbers z such that the real part of 1 1 6 . Information 1: This set forms a curve. Information 2: Find the area of the region inside the curve. The following is the first a few steps in a solution to the problem: Step 0: Let z = x + yi be a complex number, where x and y are real numbers. ... Step 4: Cross-multiplying and rearranging, we get the equation 6x = x2 + y2. Which previous steps or information does the next step "Completing the square, we obtain (x â 3) _ p= on" directly follow from? _ p= 4 ." directly follow from? 2 The underlined information (contents after Information X:) is simply a list of sentences in the question. The underlined steps are reasoning steps before (not including) the current one in the solution. The output from the LLM is: The next step "Completing the square, we obtain (x â 3)? +y? = 2." directly follows from Step 4. Then we use regular expression to extract the information and step id from LLMâ s output. Some steps may only rely on information from the question while others may only use results from previous steps. Step regeneration In this stage, we instruct the LLM to achieve the extracted target with only collected information with the following prompt: We are in a process of solving a math problem.
2308.00436#42
2308.00436#44
2308.00436
[ "2206.02336" ]
2308.00436#44
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Variables are defined as: 1. z is defined as a complex number of the form x + yi, where x and y are real numbers. The following are some previous steps: Step 0: Cross-multiplying and rearranging, we get the equation 6x = x2 + y2. 13 The target for next step is: The step completes the square to rewrite the equation in standard form of a circle. Please try to achieve the target with the information from previous steps. The underlined parts are the type of question, variable definition (if any), collected information from the question (if any), collected previous steps (if any) and the extracted target. Please notice that the current step does not directly rely on any information from the question, we removed the corresponding block from the input. The output from the LLM is:
2308.00436#43
2308.00436#45
2308.00436
[ "2206.02336" ]
2308.00436#45
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
To rewrite the equation in standard form of a circle, we need to complete the square for both xxx and yyy. 6x = x2 + y2 Starting from the given equation: 6x = x2 + y2 6x = x2 + y2. (6/2)2 = 9 We add and subtract (6/2)2 = 9 (6/2)2 = 9 on the right-hand side: 6x + 9 â 9 = x2 + y2 + 9 â 9 6x + 9 â 9 = x2 + y2 + 9 â 9. 6x + 9 â 9 = x2 + y2 + 9 â 9 (x â 3)2 + y2 = 9 Simplifying: (x â 3)2 + y2 = 9 (x â 3)2 + y2 = 9. This is now in the standard form of a circle with (3, 0) and radius 333. (3, 0) center (3, 0)
2308.00436#44
2308.00436#46
2308.00436
[ "2206.02336" ]
2308.00436#46
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Then we can directly compare the regenerated step with the original one in the next stage. Result comparison The input to the LLM is: The following are 2 solutions to a math problem: Solution 1: To rewrite the equation in standard form of a circle, we need to complete the square for both x and y. Starting from the given equation: 6x = x2 + y2. We add and subtract (6/2)2 = 9 on the right-hand side: 6x + 9 â 9 = x2 + y2 + 9 â 9. Simplifying: (x â 3)2 + y2 = 9. This is now in the standard form of a circle with center (3, 0) and radius 3. Solution 2: Completing the square, we obtain (2 â 2 a)? +y= 8. Compare the key points from both solutions step by step and then check whether Solution 1 "supports", "contradicts" or "is not directly related to" the conclusion in Solution 2. Pay special attention to difference in numbers. The underlined parts are simply the problem type, the regenerated step and the original step. The output is: In both solutions, the goal is to rewrite the given equation in standard form of a circle. However, in Solution 1, the completion of the square results in the center of the circle being (3, 0) and a radius of 3. In Solution 2, the completion of the square results in the center of the circle being (3/2, 0) and a radius of 3/2. Therefore, Solution 1 contradicts the conclusion in Solution 2 as the centers and radii of the circles are different.
2308.00436#45
2308.00436#47
2308.00436
[ "2206.02336" ]
2308.00436#47
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
If the word â supportâ or â contradictâ appears in the last line of the output, SelfCheck outputs a step score 1 or -1. Otherwise, SelfCheck outputs 0 meaning that two steps are not related. # APPENDIX B EXAMPLAR FOR ONE-SHOT ERROR CHECKING The following example from Ling et al. (2023) is used to perform one-shot error checking. Here is some information: â 1. Alice gets 2 apples. 2. Alice gets twice as many oranges as bananas. 3. Original price per apple: $2
2308.00436#46
2308.00436#48
2308.00436
[ "2206.02336" ]
2308.00436#48
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
14 Discount: 50% Price per apple after discount: $2 * (1 - 0.50) = $2 * 0.50 = $1" Based on the given information, here is a reasoning process: "Calculate Aliceâ s cost of the 5 apples after a 50% discount. Price per apple after discount: $1 Apples be eaten: 3 Oranges be eaten: 6 Total apples: 5 Total oranges: x Total bananas: 2x Aliceâ s total number of fruits: 5 + x + 2x = 5 + 4x Aliceâ s total cost of 5 apples: $1 * (5 - 3) = $2â Double-check the reasoning process, letâ s analyze its correctness, and end with "yes" or "no". Answer: Letâ s think step by step.
2308.00436#47
2308.00436#49
2308.00436
[ "2206.02336" ]
2308.00436#49
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Grounding check Purpose: Find references for numbers in the reasoning process sequentially Rule: Any ungrounded number makes the check fails "x" is a variable, we do not care about variables. "y" is a variable, we do not care about variables. "5" is a quantity from the given information that represents the number of apples. "50%" is a quantity from the given information that represents the discount. "$1" is a quantity from the given information that represents the price per apple after the discount. "3" is a quantity not from the given information, it is ungrounded. "6" is a quantity not from the given information, it is ungrounded. "$2" is a quantity from the given information that represents the original price per apple.
2308.00436#48
2308.00436#50
2308.00436
[ "2206.02336" ]
2308.00436#50
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Result: Based on the check, there are ungrounded numbers, so the grounding check fails. Reasoning check: Purpose: Check whether the reasoning logic correctly answers the question Rule: Any logic error makes the check fails To calculate the total cost of apples after a 50% discount, we should multiply the number of apples by the discounted price. But to answer the question, we do not need to calculate the total number of fruit Alice gets. Result: Based on the check, there are some logic mistakes, so the reasoning check fails. Calculation check: Purpose: Check whether the calculation process is consistent Rule: Any inconsistent calculation makes the check fails calculation1: equation: $1 * (5 - 3), answer: $2 (5 - 3) = 2 $1 * 2 = $2 is consistent with the answer, so the calculation is correct. calculation2: equation: 5 + x + 2x, answer: 5 + 4x x + 2x = 3x 5 + 3x is inconsistent with the answer, so the calculation is incorrect. Result: Based on the check, the calculation process is inconsistent, so the calculation check fails. Check results: Ground check fails, Reasoning check fails, Calculation check fails. Rule: Any failed check makes the reasoning incorrect. So the answer is "no".
2308.00436#49
2308.00436#51
2308.00436
[ "2206.02336" ]
2308.00436#51
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
15
2308.00436#50
2308.00436
[ "2206.02336" ]
2308.00352#0
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Preprint # METAGPT: META PROGRAMMING FOR A MULTI-AGENT COLLABORATIVE FRAMEWORK Sirui Hong1*, Mingchen Zhuge2*, Jonathan Chen1, Xiawu Zheng3, Yuheng Cheng4, Ceyao Zhang4, Jinlin Wang1, Zili Wang, Steven Ka Shing Yau5, Zijuan Lin4, Liyang Zhou6, Chenyu Ran1, Lingfeng Xiao1,7, Chenglin Wu1†, Jürgen Schmidhuber2,8 1DeepWisdom, 2AI Initiative, King Abdullah University of Science and Technology, 3Xiamen University, 5Nanjing University, 7University of California, Berkeley, # ABSTRACT Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallucinations caused by naively chaining LLMs. Here we introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together. On collaborative software engineering benchmarks, MetaGPT generates more coherent solutions than previous chat-based multi-agent systems. Our project can be found at https://github.com/geekan/MetaGPT
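To make the assembly-line idea in the abstract concrete, here is a minimal sketch of an SOP-style role pipeline; it is illustrative only — call_llm, the role instructions, and the artifact names are our assumptions, not MetaGPT's actual API.

```python
# Minimal sketch of an SOP-style assembly line: each role consumes the previous
# role's structured artifact and produces its own. `call_llm` is a hypothetical
# text-in/text-out LLM interface supplied by the caller.
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str       # e.g. "PRD", "System design", "Code"
    content: str

def run_role(call_llm, role: str, instruction: str, upstream: Artifact) -> Artifact:
    prompt = (f"You are the {role}. Follow the team's SOP.\n"
              f"Input ({upstream.name}):\n{upstream.content}\n\n{instruction}")
    return Artifact(name=f"{role} output", content=call_llm(prompt))

def software_pipeline(call_llm, requirement: str) -> Artifact:
    prd = run_role(call_llm, "Product Manager",
                   "Write a Product Requirements Document (PRD).",
                   Artifact("One-line requirement", requirement))
    design = run_role(call_llm, "Architect",
                      "Produce a system design with interface specifications.", prd)
    code = run_role(call_llm, "Engineer",
                    "Implement the design as runnable code.", design)
    return code
```

Each handover is a structured document rather than free-form chat, which is the property the paper argues reduces cascading errors.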
2308.00352#1
2308.00352
[ "2308.12950" ]
2308.00352#1
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
# 1 INTRODUCTION Autonomous agents utilizing Large Language Models (LLMs) offer promising opportunities to enhance and replicate human workflows. In real-world applications, however, existing systems (Park et al., 2023; Zhuge et al., 2023; Cai et al., 2023; Wang et al., 2023c; Li et al., 2023; Du et al., 2023; Liang et al., 2023; Hao et al., 2023) tend to oversimplify the complexities. They struggle to achieve effective, coherent, and accurate problem-solving processes, particularly when there is a need for meaningful collaborative interaction (Zhang et al., 2023; Dong et al., 2023; Zhou et al., 2023; Qian et al., 2023). Through extensive collaborative practice, humans have developed widely accepted Standardized Operating Procedures (SOPs) across various domains (Belbin, 2012; Manifesto, 2001; DeMarco & Lister, 2013). These SOPs play a critical role in supporting task decomposition and effective coordination. Furthermore, SOPs outline the responsibilities of each team member, while establishing standards for intermediate outputs. Well-defined SOPs improve the consistent and accurate execution of tasks that align with defined roles and quality standards (Belbin, 2012; Manifesto, 2001; DeMarco & Lister, 2013; Wooldridge & Jennings, 1998). For instance, in a software company, Product Managers analyze competition and user needs to create Product Requirements Documents (PRDs) using a standardized structure, to guide the developmental process. Inspired by such ideas, we design a promising GPT-based Meta-Programming framework called MetaGPT that significantly benefits from SOPs. Unlike other works (Li et al., 2023; Qian et al., 2023), MetaGPT requires agents to generate structured outputs, such as high-quality requirements
2308.00352#0
2308.00352#2
2308.00352
[ "2308.12950" ]
2308.00352#2
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
* These authors contributed equally to this work. † Chenglin Wu ([email protected]) is the corresponding author, affiliated with DeepWisdom. [Figure 1 diagram: MetaGPT agents collaborating under a software-development SOP — a one-line requirement (e.g., "Write a classic and simple Flappy Bird game.") flows through requirement analysis, architectural design, system design, planning and coding, and testing (stages 1/5 to 5/5), handled by roles such as Product Manager, Architect, Engineer, and QA Engineer, with the boss performing the acceptance check and payment; caption follows.] Figure 1:
2308.00352#1
2308.00352#3
2308.00352
[ "2308.12950" ]
2308.00352#3
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
The software development SOPs between MetaGPT and real-world human teams. In software engineering, SOPs promote collaboration among various roles. MetaGPT showcases its ability to decompose complex tasks into specific actionable procedures assigned to various roles (e.g., Product Manager, Architect, Engineer, etc.). documents, design artifacts, flowcharts, and interface specifications. The use of intermediate structured outputs significantly increases the success rate of target code generation. More graphically, in a company simulated by MetaGPT, all employees follow a strict and streamlined workflow, and all their handovers must comply with certain established standards. This reduces the risk of hallucinations caused by idle chatter between LLMs, particularly in role-playing frameworks, like: "
2308.00352#2
2308.00352#4
2308.00352
[ "2308.12950" ]
2308.00352#4
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
Hi, hello and how are you?" — Alice (Product Manager); "Great! Have you had lunch?" — Bob (Architect). Benefiting from SOPs, MetaGPT offers a promising approach to meta-programming. In this context, we adopt meta-programming1 as "programming to program", in contrast to the broader fields of meta learning and "learning to learn" (Schmidhuber, 1987; 1993a; Hochreiter et al., 2001; Schmidhuber, 2006; Finn et al., 2017). This notion of meta-programming also encompasses earlier efforts like CodeBERT (Feng et al., 2020) and recent projects such as CodeLlama (Rozière et al., 2023) and WizardCoder (Luo et al., 2023). However, MetaGPT stands out as a unique solution that allows for efficient meta-programming through a well-organized group of specialized agents. Each agent has a specific role and expertise, following some established standards. This allows for automatic requirement analysis, system design, code generation, modification, execution, and debugging during runtime, highlighting how agent-based techniques can enhance meta-programming. To validate the design of MetaGPT, we use publicly available HumanEval (Chen et al., 2021a) and MBPP (Austin et al., 2021) for evaluations. Notably, in code generation benchmarks, MetaGPT achieves a new state-of-the-art (SoTA) with 85.9% and 87.7% in Pass@1. When compared to other popular frameworks for creating complex software projects, such as AutoGPT (Torantulino et al., 2023), LangChain (Chase, 2022), AgentVerse (Chen et al., 2023), and ChatDev (Qian et al., 2023), MetaGPT also stands out in handling higher levels of software complexity and offering extensive functionality. Remarkably, in our experimental evaluations, MetaGPT achieves a 100% task completion rate, demonstrating the robustness and efficiency (time and token costs) of our design. We summarize our contributions as follows:
2308.00352#3
2308.00352#5
2308.00352
[ "2308.12950" ]