id
stringlengths 12
15
| title
stringlengths 8
162
| content
stringlengths 1
17.6k
| prechunk_id
stringlengths 0
15
| postchunk_id
stringlengths 0
15
| arxiv_id
stringlengths 10
10
| references
listlengths 1
1
|
---|---|---|---|---|---|---|
2307.15337#38 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making In Proceedings of the 61st Annual language models better reasoners with step-aware verifier. Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315â 5333, 2023c. Zhuohan Li, Siyuan Zhuang, Shiyuan Guo, Danyang Zhuo, Hao Zhang, Dawn Song, and Ion Stoica. | 2307.15337#37 | 2307.15337#39 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#39 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Terapipe: Token-level pipeline parallelism for training large-scale language models. In Interna- tional Conference on Machine Learning, pp. 6543â 6552. PMLR, 2021. 12 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. Awq: arXiv preprint Activation-aware weight quantization for llm compression and acceleration. arXiv:2306.00978, 2023. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre- train, prompt, and predict: A systematic survey of prompting methods in natural language pro- cessing. ACM Computing Surveys, 55(9):1â | 2307.15337#38 | 2307.15337#40 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#40 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 35, 2023. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach, 2019. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Confer- ence on Learning Representations, 2019. Wenyan Lu, Guihai Yan, Jiajun Li, Shijun Gong, Yinhe Han, and Xiaowei Li. | 2307.15337#39 | 2307.15337#41 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#41 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Flexflow: A flexible dataflow accelerator architecture for convolutional neural networks. In 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA), pp. 553â 564. IEEE, 2017. Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Rae Ying Yee Wong, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, and Zhihao Jia. Specinfer: Accelerating generative llm serving with speculative inference and token tree verification. arXiv preprint arXiv:2305.09781, 2023. Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, and Paulius Micikevicius. Accelerating sparse deep neural networks. arXiv preprint arXiv:2104.08378, 2021. Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R Devanur, Gre- gory R Ganger, Phillip B Gibbons, and Matei Zaharia. | 2307.15337#40 | 2307.15337#42 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#42 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Pipedream: Generalized pipeline par- In Proceedings of the 27th ACM Symposium on Operating Systems allelism for dnn training. Principles, pp. 1â 15, 2019. Deepak Narayanan, Amar Phanishayee, Kaiyu Shi, Xie Chen, and Matei Zaharia. Memory-efficient pipeline-parallel dnn training. In International Conference on Machine Learning, pp. 7937â 7947. PMLR, 2021. NVIDIA. Fastertransformer, 2019. URL https://github.com/NVIDIA/ FasterTransformer. NVIDIA. Triton inference server, 2021. URL https://developer.nvidia.com/ triton-inference-server. OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730â 27744, 2022. | 2307.15337#41 | 2307.15337#43 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#43 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Duy Phung. Stablevicuna-13b, May 2023. URL https://huggingface.co/CarperAI/ stable-vicuna-13b-delta. Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. | 2307.15337#42 | 2307.15337#44 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#44 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Perfor- mance Computing, Networking, Storage and Analysis, pp. 1â 16. IEEE, 2020. Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Min- jia Zhang, Dong Li, and Yuxiong He. {ZeRO-Offload}: Democratizing {Billion-Scale} model training. In 2021 USENIX Annual Technical Conference (USENIX ATC 21), pp. 551â 564, 2021. 13 | 2307.15337#43 | 2307.15337#45 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#45 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Ric- cardo Marin, and Emanuele Rodol`a. Accelerating transformer inference for translation via paral- lel decoding. In acl, 2023. Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. | 2307.15337#44 | 2307.15337#46 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#46 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. SenseTime. Lightllm. https://github.com/ModelTC/lightllm, 2023a. Accessed: 2023-09-26. SenseTime. Openppl. https://github.com/openppl-public/ppl.nn, 2023b. Ac- cessed: 2023-09-26. Noam Shazeer. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150, 2019. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. arXiv preprint Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv:2303.17580, 2023. Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y Fu, Zhiqiang Xie, Beidi Chen, Clark Barrett, Joseph E Gonzalez, et al. High-throughput generative inference of large language models with a single gpu. arXiv preprint arXiv:2303.06865, 2023. Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. | 2307.15337#45 | 2307.15337#47 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#47 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222â 4235, 2020. Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. Blockwise parallel decoding for deep autore- gressive models. Advances in Neural Information Processing Systems, 31, 2018. Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, Felix Yu, Michael Riley, and Sanjiv Kumar. Spectr: Fast speculative decoding via optimal transport. In Workshop on Efficient Systems for Foundation Models @ ICML2023, 2023. URL https: //openreview.net/forum?id=d0mGsaheuT. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethink- In Proceedings of the IEEE conference on ing the inception architecture for computer vision. computer vision and pattern recognition, pp. 2818â 2826, 2016. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori Hashimoto. | 2307.15337#46 | 2307.15337#48 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#48 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Alpaca: A strong, replicable instruction-following model. https://crfm.stanford.edu/2023/03/13/alpaca.html, 2023. Accessed: 2023- 06-23. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. | 2307.15337#47 | 2307.15337#49 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#49 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. | 2307.15337#48 | 2307.15337#50 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#50 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Llama 2: Open foundation and fine-tuned chat models, 2023b. 14 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Guan Wang, Sijie Cheng, Qiying Yu, and Changling Liu. Openllms: Less is more for open-source models, July 2023a. URL https://github.com/imoneoi/openchat. Hanrui Wang, Zhekai Zhang, and Song Han. Spatten: Efficient sparse attention architecture with cascade token and head pruning. In 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 97â 110. IEEE, 2021. Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdh- ery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. Zifu Wang, Teodora Popordanoska, Jeroen Bertels, Robin Lemmens, and Matthew B Blaschko. | 2307.15337#49 | 2307.15337#51 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#51 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Dice semimetric losses: Optimizing the dice score with soft labels. In Medical Image Computing and Computer Assisted Intervention, 2023b. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824â 24837, 2022. Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. | 2307.15337#50 | 2307.15337#52 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#52 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Learning structured sparsity in deep neural networks. Advances in neural information processing systems, 29, 2016. Smoothquant: Accurate and efficient post-training quantization for large language models. arXiv preprint arXiv:2211.10438, 2022. Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, Min Zhang, Tao Qin, and Tie-yan Liu. A survey on non-autoregressive generation for neural machine translation and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. | 2307.15337#51 | 2307.15337#53 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#53 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023. Yuanzhong Xu, HyoukJoong Lee, Dehao Chen, Blake Hechtman, Yanping Huang, Rahul Joshi, Maxim Krikun, Dmitry Lepikhin, Andy Ly, Marcello Maggioni, et al. Gspmd: general and scalable parallelization for ml computation graphs. arXiv preprint arXiv:2105.04663, 2021. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023. | 2307.15337#52 | 2307.15337#54 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#54 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Gyeong-In Yu, Joo Seong Jeong, Geon-Woo Kim, Soojeong Kim, and Byung-Gon Chun. Orca: A distributed serving system for {Transformer-Based} generative models. In 16th USENIX Sympo- sium on Operating Systems Design and Implementation (OSDI 22), pp. 521â 538, 2022. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. | 2307.15337#53 | 2307.15337#55 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#55 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Big bird: Transformers for longer sequences. Advances in neural information processing systems, 33:17283â 17297, 2020. Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476â 15488, 2022. 15 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, and Xia Hu. Data-centric artificial intelligence: A survey. arXiv preprint arXiv:2303.10158, 2023. Yujia Zhai, Chengquan Jiang, Leyuan Wang, Xiaoying Jia, Shang Zhang, Zizhong Chen, Xin Liu, and Yibo Zhu. | 2307.15337#54 | 2307.15337#56 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#56 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Bytetransformer: A high-performance transformer boosted for variable-length inputs. arXiv preprint arXiv:2210.03052, 2022. Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew Chi-Chih Yao. Cumulative reasoning with large language models. arXiv preprint arXiv:2308.04371, 2023. Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Eric P Xing, et al. | 2307.15337#55 | 2307.15337#57 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#57 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Alpa: Automating inter-and {Intra- Operator} parallelism for distributed deep learning. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), pp. 559â 578, 2022. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment, 2023. | 2307.15337#56 | 2307.15337#58 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#58 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Zhe Zhou, Xuechao Wei, Jiejing Zhang, and Guangyu Sun. {PetS}: A unified framework for In 2022 USENIX Annual Technical Conference {Parameter-Efficient} transformers serving. (USENIX ATC 22), pp. 489â 504, 2022. Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, et al. | 2307.15337#57 | 2307.15337#59 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#59 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Ghost in the minecraft: Generally capable agents for open-world enviroments via large language models with text-based knowledge and memory. arXiv preprint arXiv:2305.17144, 2023. Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In Interna- tional Conference on Learning Representations (ICLR), 2017. 16 Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding # Appendix # Table of Contents A Model Details B Implementation Details of Skeleton-of-Thought . B.1 Prompt . . . . B.2 Supporting Multi-Round Conversation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C Implementation Details of Skeleton-of-Thought with Router . . . . . . . . . C.1 Prompting Router . C.2 Trained Router . . . C.3 Router Consistency . . C.4 Concurrent execution for SoT-R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D Related Work (Expanded) . D.1 Efficient LLMs . D.2 Prompting Methods for LLMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E Efficiency Analysis F Efficiency Profiling G Efficiency Evaluation G.1 Skeleton-of-Thought . . G.2 Skeleton-of-Thought with Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H Overhead of SoT in Different Scenarios I Answer Quality Evaluation Skeleton-of-Thought . . Skeleton-of-Thought with Router . . | 2307.15337#58 | 2307.15337#60 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#60 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | I.1 I.2 I.3 ChatGPT-3.5 as the Judge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 18 18 20 20 20 20 21 21 22 22 23 24 25 27 27 29 31 32 32 44 44 17 | 2307.15337#59 | 2307.15337#61 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#61 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding # A MODEL DETAILS Table 1 summarizes the models on which we evaluate SoT. We use GPT-4 in the main paper and ChatGPT-3.5 in App. I.3 as the judge in FastChat and LLMZoo evaluation. Table 1: Model evaluated with SoT. All the open-source models are fine-tuned from LLaMA models. Access Model Name Institution Released Date Open-Source LLaMA2-Chat-7B (Touvron et al., 2023b) LLaMA2-Chat-13B (Touvron et al., 2023b) OpenChat-13B (Wang et al., 2023a) Vicuna-7B V1.3 (Chiang et al., 2023) Vicuna-13B V1.3 (Chiang et al., 2023) Vicuna-33B V1.3 (Chiang et al., 2023) StableVicuna-13B (Phung, 2023) UltraLM-13B (Ding et al., 2023) Vicuna-7B V1.1 (Chiang et al., 2023) Meta & Microsoft Meta & Microsoft Tsinghua LMSYS LMSYS LMSYS CarperAI OpenBMB & Tsinghua LMSYS 2023/07 2023/07 2023/07 2023/06 2023/06 2023/06 2023/05 2023/05 2023/03 API-Based Claude (Anthropic, 2023) ChatGPT-3.5 GPT-4 Anthropic OpenAI OpenAI 2023/05 2022/11 2023/03 Table 2 shows sources of the models we use in the paper. Table 2: The Hugging Face or API endpoints of the models. | 2307.15337#60 | 2307.15337#62 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#62 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Access Model Name Hugging Face or API Endpoints Open-Source API-Based LLaMA2-Chat-7B (Touvron et al., 2023b) LLaMA2-Chat-13B (Touvron et al., 2023b) OpenChat-13B (Wang et al., 2023a) Vicuna-7B V1.3 (Chiang et al., 2023) Vicuna-13B V1.3 (Chiang et al., 2023) Vicuna-33B V1.3 (Chiang et al., 2023) StableVicuna-13B (Phung, 2023) UltraLM-13B (Ding et al., 2023) Vicuna-7B V1.1 (Chiang et al., 2023) Claude (Anthropic, 2023) ChatGPT-3.5 GPT-4 B IMPLEMENTATION DETAILS OF SKELETON-OF-THOUGHT B.1 PROMPT The skeleton prompt is shown in Prompts 1 and 3 and the point-expanding prompt is shown in Prompt 2. Skeleton prompt template. In order to make the output skeleton short and in a consistent format for the good of efficiency and ease of point extraction, the skeleton prompt template (1) describes the task precisely, and (2) provides a partial answer â 1.â for the LLM to continue writing. The skeleton 2For convenience, we use the non-official endpoint TheBloke/stable-vicuna-13B-HF and TheBloke/UltraLM-13B-fp16 to get merged weights. # 3https://www.anthropic.com/claude-in-slack 4https://azure.microsoft.com/en-us/products/ai-services/openai-service | 2307.15337#61 | 2307.15337#63 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#63 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 18 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Prompt 3. Skeleton Prompt Template T s (with Two-Shot Demonstrations) [User:] Youâ re an organizer responsible for only giving the skeleton (not the full content) for answering the question. Provide the skeleton in a list of points (numbered 1., 2., 3., etc.) Instead of writing a full sentence, each skeleton point should be very short with only 3â ¼5 words. Generally, the skeleton should have 3â ¼10 points. to answer the question. Question: | 2307.15337#62 | 2307.15337#64 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#64 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | What are the typical types of Chinese dishes? Skeleton: 1. Dumplings. 2. Noodles. 3. Dim Sum. 4. Hot Pot. 5. Wonton. 6. Ma Po Tofu. 7. Char Siu. 8. Fried Rice. Question: What are some practical tips for individuals to reduce their carbon emissions? Skeleton: 1. Energy conservation. 2. Efficient transportation. 3. Home energy efficiency. 4. Reduce water consumption. 5. Sustainable diet. 6. Sustainable travel. | 2307.15337#63 | 2307.15337#65 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#65 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Now, please provide the skeleton for the following question. {question} Skeleton: [Assistant:] 1. responses are in the desired format in most cases. Therefore, we can use a simple regular expression (\d+)\.\s?([\s\S]+?)(?= | *$) to extract point indexes and point skeletons from the skeleton response. We find that GPT-4 can work well without the two demonstrations in the skeleton prompt. Therefore, we do not include the two demonstrations for GPT-4 (Prompt 1). For all other models, the two demonstrations are included, as shown in Prompt 3. Point-expanding prompt template. It describes the point-expanding task and provides a partial answer. We also provide instructions â Write it **very shortly** in 1â ¼2 sentenceâ so that the LLMs keep the answers concise. Unlike the skeleton prompt template, we find that demonstrations are not necessary to get reasonable results. We find that Claude and GPT-4 follows the instruction â Write it **very shortly** in 1â ¼2 sentence and do not continue with other points!â in Prompt 2 very well, so that the answers are very short. Therefore, we delete â **very shortly**â from the prompt template in Claude and GPT-4. Partial answer. desired response format better. In the Prompts 1 and 2, we provide partial answers so that LLMs can follow the We can put the partial answer at the end of the prompt for the open-source models to continue writing. An implementation detail is that different open-source models have different conversa- tion templates (i.e., different ways to combine user and assistant messages into one string). For example, Vicuna (Chiang et al., 2023) uses the string â USER:â and â ASSISTANT:â for the place- holder â [User:]â and â [Role]â in the Prompts 1 and 2, respectively, while UltraLM (Ding et al., 2023) uses â User:â and â â ©/sâ ªAssistant:â . | 2307.15337#64 | 2307.15337#66 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#66 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | We build our open-source model experiments with the help of the FastChat codebase (Zheng et al., 2023), in which the conversation templates of many models are already handled correctly. We implement the conversation templates of OpenChat-13B, StableVicuna-13B, and UltraLM-13B according to their official guides and codes. For ChatGPT-3.5, we provide partial answers as a last message in the chat history from the assistant. Note that it is not a documented approach. We find it works well in most cases, in that ChatGPT-3.5 | 2307.15337#65 | 2307.15337#67 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#67 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 19 Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Prompt 4. LLM Prompting as the Router [User:] Question: {question} How would you like to answer the question? A. Organize the answer as a list of points or perspectives (in the format of 1., 2., 3., etc.), and the points or perspectives can be answered independently without referring to the contents of the previous points. B. Organize the answer as a list of points or perspectives (in the format of 1., 2., 3., etc.), and the contents of later points or perspectives cannot be answered independently without referring to the contents of the previous ones. C. Do not organize the answer as a list of points or perspectives. Just say A, B, or C. Do not explain. Do not provide an answer to the question. [Assistant:] continues the texts from the provided partial answer. However, in some rare cases, ChatGPT-3.5 repeats the provided partial answers. For Claude over Slack, there is no obvious way to give the API a partial answer. We resort to modifying the prompt template slightly by adding Please start your answer from â {partial answer}â and do not output other things before that at the end. | 2307.15337#66 | 2307.15337#68 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#68 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | We find that Claude understands and obeys it well. For GPT-4, we also take this approach. System Message. We do not include the system message in the prompts for open-source models except LLaMA2. The partial answer, â **very shortly**â , and the 2-shot demonstrations discussed above are the only differences between the prompts we used across all models and all evaluations. B.2 SUPPORTING MULTI-ROUND CONVERSATION To use SoT in a multi-round conversation, we can just put the question and the final aggregated answer in the history, removing all the SoT prompts. In this way, using SoT in one conversation round will not introduce additional prefill cost in future rounds. C IMPLEMENTATION DETAILS OF SKELETON-OF-THOUGHT WITH ROUTER C.1 PROMPTING ROUTER We use Prompt 4 for querying GPT-4 as the router. If the answer is â Aâ (i.e., the question can be answered in a list of independent points), we will use SoT. Otherwise, if the answer is â Bâ (i.e., the answer is in a list of points but they depend on each other) or â Câ (i.e., the answer should not be in a list of points), SoT is not suitable and we will fall back to normal decoding. C.2 TRAINED ROUTER We tackle the routing problem as a sequence classification task. We first annotate the LIMA training set (Zhou et al., 2023), and then fine-tune a RoBERTa model (Liu et al., 2019) using the labeled data. Finally, we apply the tuned RoBERTa as the router on Vicuna-80 and WizardLM. We detail the steps in the following. # C.2.1 ANNOTATION PROCESS In the classification task, a label of 1 (positive) indicates that this question can be answered with SoT, while a label of 0 (negative) suggests that using the normal generation mode is more suitable. We annotate the LIMA training set, which consists of 1,030 Q&As sourced from three community webpages: Stack Exchange, wikiHow, and the Pushshift Reddit. We also annotate the Vicuna-80 and WizardLM datasets for evaluation. | 2307.15337#67 | 2307.15337#69 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#69 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 20 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Table 3: Router confusion matrices on the Vicuna-80 dataset. Left: Rows are human annotations (H) and columns are the GPT-4 router (G). Middle: Rows are human annotations (H) and columns are the RoBERTa router (R). Right: Rows are the GPT-4 router (G) and columns are the RoBERTa router (R). R0 R1 6 37 32 5 Table 4: Router confusion matrices on the WizardLM dataset. Left: Rows are human annotations (H) and columns are the GPT-4 router (G). Middle: Rows are human annotations (H) and columns are the RoBERTa router (R). Right: Rows are the GPT-4 router (G) and columns are the RoBERTa router (R). G0 G1 5 38 37 0 R0 R1 4 34 34 8 H0 H1 H0 H1 G0 G1 H0 H1 G0 G1 66 94 55 3 H0 H1 R0 135 31 R1 25 27 G0 G1 R0 R1 4 93 48 73 We use GPT-4 to assist the annotation process. Specifically, we present each question to GPT-4 and analyze its answer to determine whether SoT can be triggered for this question. We assign a positive label to a question if GPT-4â s response meets two criteria: (1) it contains a list of points that can be expanded in parallel, (2) each point provides sufficient details (i.e., the point-expanding response is not too short), which will enable SoT to achieve a speed-up. Two of the paperâ s authors conduct the annotation process independently, and discuss the inconsistent annotations to decide the final label. # C.2.2 TRAINING DETAILS We use roberta-base with 120M parameters as the router model. The finetuning is conducted using the AdamW optimizer (Loshchilov & Hutter, 2019) with a weight decay of 0.01. The learning rate undergoes a warm-up phase during the first 1% of iterations to 5e-5 and then decays linearly. | 2307.15337#68 | 2307.15337#70 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#70 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | We train the model for 2 epochs using a batch size of 32. Input sequences are either padded or truncated to achieve a consistent length of 512 tokens. In the application of SoT, false positives (SoT is incorrectly triggered when it should not be, resulting in degraded answer quality) are of more significant concern than false negatives (the router misses a potential SoT trigger, resulting in a reduced speed-up). Thus, to mitigate false positives, we employ the Tversky loss (Wang et al., 2023b) with parameters α = 0.7 and β = 0.3, which penalizes false positives more heavily than false negatives. We also incorporate label smoothing (Szegedy et al., 2016) with a factor of ϵ = 0.2. Overall, the entire fine-tuning process is efficient, completing in 2 minutes on an NVIDIA A100 GPU. | 2307.15337#69 | 2307.15337#71 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#71 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | C.3 ROUTER CONSISTENCY We present the confusion matrices for the three routers to illustrate their consistency. The results on Vicuna-80 and WizardLM are shown in Tables 3 and 4, respectively. On Vicuna-80, we can observe a notable level of agreement among the three routers. Compared with the GPT-4-prompting router, the trained router exhibits a slightly higher number of false negatives w.r.t. the human annotations. Conversely, on WizardLM, given the intricate answer structure and the presence of many ambiguous cases, the routers show significant discrepancies. Specifically, the GPT-4 router produces many false positives, which pose adverse affects on the answer quality (see App. I.2). The RoBERTa router aligns more closely with the human annotations. C.4 CONCURRENT EXECUTION FOR SOT-R In SoT-R, the router serves as an additional stage that extends the two-stage SoT pipeline. The SoT-R pipeline is illustrated in Fig. 9. To push the limit of latency optimization, we can run the router, normal generation, and SoT generation concurrently. Once the router makes a decision, one of the normal and SoT generation processes can be aborted. However, this approach will increase | 2307.15337#70 | 2307.15337#72 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#72 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 21 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding oe ons a positive Re Question Question â â > > EE â answer | negative negative y a Figure 9: Left: The SoT-R pipeline. Right: A possible approach to further reduce latency at the cost of token overhead. the token overhead. Therefore, we did not employ this approach in this work and leave it to future work. # D RELATED WORK (EXPANDED) D.1 EFFICIENT LLMS Extensive research has been dedicated to enhancing the throughput and latency of LLM infer- ence. We first discuss model-level architecture design or compression techniques. These techniques change the model and can benefit both the latency and throughput but require finetuning to retain the model quality. Then, we discuss system-level efforts that optimize the computational graph or the assignment and scheduling of the computational graph on computation and storage devices. Most system-level efforts accelerate the prefilling phase or focus on improving the throughput. Finally, we discuss some research efforts that share a similar motivation to ours, namely, addressing the efficiency issue of sequential decoding. Model-level optimization. Considerable architectural design efforts have emerged to (1) improve the scalability w.r.t. model size by introducing mixture-of-expert inference (Lepikhin et al., 2021; Fedus et al., 2022), (2) address the quadratic complexity w.r.t. input size of attention by designing new attention mechanisms (Kitaev et al., 2020; Wang et al., 2020), (3) reduce the memory access and footprint of attention by using multi-query attention (Shazeer, 2019), and so on. However, these methods usually require a substantial re-training cost. The model compression techniques require a smaller amount of fine-tuning by reducing the model complexity of a pre-trained LLM from certain aspects (Ganesh et al., 2021). Representative techniques include quantization (Xiao et al., 2022; Frantar et al., 2022; Lin et al., 2023), the static or dynamic pruning of weights, activation, and attention (Mishra et al., 2021; Zaheer et al., 2020; Wang et al., 2021; Chen et al., 2023b), and so on. | 2307.15337#71 | 2307.15337#73 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#73 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Zooming out from LLM compression to the whole field of model compression, we can see that model co-design or compression for efficiency has received tremendous attention in the past few years and has grown into large research fields, such as pruning (Han et al., 2015; Wen et al., 2016), quantization (Krishnamoorthi, 2018), factorization (Denton et al., 2014), and neural architecture search (Zoph & Le, 2017; Elsken et al., 2019; Cai et al., 2019). Different from the model co-design paradigm, SoT is in a â content co-organization for efficiencyâ paradigm for improving the LLM efficiency. Along with the growth in the LLM capabilities and amount of LLM-generated data, data-level techniques could become important tools in the efficient LLM toolbox. System-level optimization. In the realm of lossless acceleration, considerable efforts have been devoted to addressing the I/O-bound nature of LLMs on modern hardware platforms (Dao et al., 2022). Numerous studies (Dao et al., 2022; Zhai et al., 2022; Ivanov et al., 2021; NVIDIA, 2019) have focused on adjusting the computational graph by fusing and implementing operations in an I/O-friendly way. As a representative method, FlashAttention (Dao et al., 2022) fuses all operations of one attention into one GPU kernel with spatially tiled computation to reduce the off-chip I/O of the attention map. While FlashAttention can effectively accelerate training and the prefilling phase of inference, it cannot accelerate the decoding phase much (when the batch size is small), as it is the I/O of weights rather than activation or attention map that bottlenecks the decoding phase. For example, when the context length is 64, decoding one token using LLaMA-7B needs to load each | 2307.15337#72 | 2307.15337#74 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#74 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 22 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding of the 7B parameters from the off-chip HBM onto the GPU chip at least once, but only transferring about 20M (0.02B) activation values between the off-chip HBM and GPU chip. In order to satisfy Service Level Objectives, serving systems focus on improving the serving throughput under latency constraints. To this end, serving systems (Fang et al., 2021; NVIDIA, 2021; Google, 2021) pack multiple queries together into a batch to improve the hardware utiliza- tion. The batching technique has proven highly effective in enhancing throughput, leading to the development of various variants. For example, some work designs methods to decide which queries to batch together (Fang et al., 2021; Zhou et al., 2022), while others selectively batch parts of the model to enable fine-grained iteration-level batching (Yu et al., 2022) or multi-task batching (Zhou et al., 2022). Various model parallelism (Lu et al., 2017; Huang et al., 2019; Narayanan et al., 2019; Rajbhandari et al., 2020; Narayanan et al., 2021; Li et al., 2021; Zheng et al., 2022) and offloading (Ren et al., 2021; Sheng et al., 2023) techniques have been proposed to maximize the throughput of LLM training or inference. In a nutshell, given the computational graph and device configurations, these techniques optimize the split, assignment, and scheduling of computations, storage, and communications on devices. In addition to the model parallelism and batching tech- niques, an efficient memory management mechanism for LLM workloads is also an essential feature in the serving systems (Kwon et al., 2023; SenseTime, 2023a;b). To sum up, these system-level techniques mainly help with the throughput in training and batched inference. They can be used by SoT to improve the throughput of the batched decoding of multiple segments. This means that SoT can harness the power of these throughput-oriented techniques and make them help with the end-to-end latency, offering a new dimension for better trading off latency and throughput in future serving systems. | 2307.15337#73 | 2307.15337#75 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#75 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Another parallelism perspective to position SoT is that SoT guides the LLM to adjust the sequen- tial workload to become â inter-contentâ parallelizable, which differs from the parallelism levels in existing serving systems, including inter-instance (Krizhevsky, 2014; Rajbhandari et al., 2020), inter-operation (Huang et al., 2019; Narayanan et al., 2019; 2021), intra-operation (Xu et al., 2021), and inter-token (Li et al., 2021). It may be worthwhile to explore the integration of SoT into serving systems to maximize the hardware utilization. Decoding optimization. One bottleneck for the end-to-end latency lies in the autoregressive de- coding phase, where tokens must be generated one by one. Due to the dependency between tokens, the computation of different tokens cannot be parallelized, causing severe under-utilization of GPU. In order to improve the end-to-end decoding latency of a given LLM, speculative decoding meth- ods (Stern et al., 2018; Leviathan et al., 2022; Chen et al., 2023a; Gante, 2023; Sun et al., 2023; Miao et al., 2023) propose to use cheaper approaches to generate short candidate token sequences, for example, by sequentially decoding with an assisting model much smaller than the given LLM. Then, they use the LLM to parallelly verify the candidates and keep the prefix sequence that matches the LLMâ s verification results. | 2307.15337#74 | 2307.15337#76 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#76 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Another line of work that shares the motivation of addressing the autoregressive efficiency issue is non-autoregressive generation (NAG) methods (Gu et al., 2018; Xiao et al., 2023). NAG methods sample consecutive tokens parallelly, often with the aid of a modified and tuned model. To maintain the answer quality, instead of sampling for one iteration, many NAG methods refine the output parallelly for multiple iterations (Xiao et al., 2023; Santilli et al., 2023). To summarize, the speculative decoding methods use assisting models for letting the LLM conduct parallel verification of consecutive tokens, and the NAG methods rely on specially designed models, training schemes, or sampling schemes for the parallel sampling and refinement of consecutive to- kens. In contrast, SoT prompts the LLM itself to plan the contents in a way that permits the parallel generation of multiple tokens in different segments. SoT exploits the emerging instruction-following and planning ability of SoTA LLMs rather than relying on specially designed modeling, sampling, and training schemes. This is different from all existing work that targets the autoregressive effi- ciency issue. D.2 PROMPTING METHODS FOR LLMS In recent years, the â pre-train, prompt, and predictâ paradigm has emerged (Liu et al., 2023), which designs prompts comprising task descriptions and (optionally) a few demonstrations to guide pre- | 2307.15337#75 | 2307.15337#77 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#77 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 23 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Table 5: The latency and average GPU performance of the prefilling and decoding phases when inferencing LLMs. The prefilling token length is 128, the decoding token length is 64, and the batch size is 1. The test is run on one NVIDIA A100 GPU. Model Prefill/Decode Latency (ms) LLaMA-7B LLaMA-13B LLaMA-33B 40 / 2735 54 / 3725 100 / 5506 43 / 0.31 62 / 0.44 85 / 0.75 trained LLMs in generating answers for a wide range of downstream tasks. Researchers found that instruction-tuned LLMs (Brown et al., 2020; Wei et al., 2021; Ouyang et al., 2022; Chung et al., 2022; Taori et al., 2023) possess a strong ability to (1) generalize to new tasks thanks to the diverse natural language descriptions encountered during instruction tuning, and (2) learn in-context using a few demonstrations without weight tuning. In virtue of these abilities, the field has been manually engineering (Brown et al., 2020; Kojima et al., 2022; Shen et al., 2023; Li et al., 2023a), automatic searching (Shin et al., 2020), or continu- ously tuning (Li & Liang, 2021; Lester et al., 2021) the prompts for uncovering the capabilities of LLMs on downstream tasks. | 2307.15337#76 | 2307.15337#78 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#78 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | There are a bunch of prompting methods that improves the reasoning performance of LLMs by designing thinking flows mimicking human reasoning: (1) mimicking the step-by-step or compositional thinking structure (Wei et al., 2022; Kojima et al., 2022; Press et al., 2022; Yao et al., 2023; Besta et al., 2023; Zhang et al., 2023), (2) designing multiple reasoning paths and their aggregation (Wang et al., 2022; Yao et al., 2023; Li et al., 2023c), and (3) using tools for calculation and information retrieval (Chen et al., 2022; Yao et al., 2022; Schick et al., 2023). As a representative example, the Chain-of-Thought prompts largely improve the performance on tasks that require logical reasoning by simply providing a â | 2307.15337#77 | 2307.15337#79 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#79 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Letâ s think step by stepâ (Kojima et al., 2022) instruction or a few demonstrations (Wei et al., 2022). Another topic that arises quite a surge of in- terests is to prompt LLMs to help finish complex multi-modality task (Shen et al., 2023; Zhu et al., 2023). For example, HuggingGPT (Shen et al., 2023) design prompts to guide the LLM to generate structural JSON for the orchestration of multi-model execution to finish complex tasks. To summarize, the large literature on prompting methods has been aiming at uncovering different capabilities of LLM and improving the answer quality on different downstream tasks. In contrast, SoT is a first attempt at exploiting the power of prompting to improve efficiency. # E EFFICIENCY ANALYSIS This section gives a detailed explanation on why SoT can reduce the overall decoding latency with the same computational resource for local models. The vanilla approach processes only one question and decodes the answers sequentially, whereas SoT processes multiple point-expanding requests and the answers in a batch. We focus on the following question: â Compared to processing only one sequence, how much peak memory overhead and latency increase will be brought by processing a batch of sequences?â A typical LLM generative process consists of two phases: (1) the prefilling phase in which the prompt is parsed to generate the key-value cache for further use, and (2) the decoding phase in which tokens are generated one by one in a sequential manner. The decoding phase accounts for the majority of the end-to-end latency, especially when generating a long response. As shown in Table 5, when running Vicuna-7B on NVIDIA A100-80G, the actual computing performance is only 0.31 TFLOPS (0.1% utilization) in the decoding phase, compared to 43 TFLOPS (13.8% uti- lization) during prefilling. The utilization is calculated with respect to the FP165 tensor core peak performance â 312 TFLOPS for NVIDIA-A100. As a result, the latency of decoding only one token is comparable to that of prefilling 128 tokens (40ms). | 2307.15337#78 | 2307.15337#80 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#80 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This huge gap in actual computing perfor- mance and thereby the latency arises from the fact that all LLM weights need to be loaded onto the GPU chip at least once only for decoding one token, so the decoding is heavily bottlenecked by the I/O of weights and the GPU computation units cannot be well utilized. 5All of our experiments are run with FP16 inference. 24 # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding (a) Latency (ms) (b) Actual GPU Perf. (TFLOPS) (c) Peak Memory (GB) Figure 10: The trends of latency, average GPU performance of decoding one token, and peak mem- ory with respect to the batch size B of sequences. The prefilling token length is 128, and the decoding token length is 64. The test is run on one NVIDIA A100 GPU. When conducting batched decoding, as the sequence batch size B increases, the latency of decoding one token for each sequence stays roughly the same (Fig. 10a), as the amount of LLM weights that needs to be loaded onto the chip does not change. As a result, the GPU computation utilization ( Actual GPU Performance Peak GPU Performance ) increases almost linearly as B increases (Fig. 10b). In other words, for gener- ating a final answer of length N , if we cut the answer into B segments of length N/B and decode them as a batch, we can get a BÃ decoding speed-up compared to sequential decoding. Never- theless, in practice, as prefilling longer requests brings some overhead, and the lengths of the B segments could be imbalanced, the actual speed-up of the batched point-expanding stage compared with the original prefilling and sequential decoding process is smaller than B. As for the peak memory overhead, the amount of LLM weights can be one to two orders of mag- nitude larger than that of all the intermediate activations as long as the prefilling token length is not too large, not to mention that most activations do not need to be saved for back-propagation during inference. Therefore, the LLM weights account for the majority of the memory footprint in our test cases. Consequently, as shown in Fig. 10c, the peak memory overhead due to the increasing size of the KV cache and activation grows at a slow pace as the batch size B increases. | 2307.15337#79 | 2307.15337#81 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#81 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Thanks to the small peak memory overhead, in all of our experiments, we managed to use one GPU to run SoT without seeking help from other peak memory optimization techniques (e.g., quantization (Frantar et al., 2022; Lin et al., 2023), offloading (Sheng et al., 2023)). # F EFFICIENCY PROFILING We run the profiling on the target GPU (NVIDIA A100-80G and NVIDIA RTX 3090) with CUDA 11.7, using the Hugging Face transformer library 4.28.1 and PyTorch 2.0.1. The host of A100-80G has an Intel Xeon Platinum 8358P CPU and 1T memory. The host of RTX 3090 has an Intel Xeon Gold 6246R CPU and 512G memory. Latency profiling and estimation. For the decoding phase, we denote tD B (k) as the latency of batched decoding the k + 1-th token with batch size B, where the superscript D stands for â decodeâ . | 2307.15337#80 | 2307.15337#82 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#82 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | For each batch size B = 1, · · · , 16 and each context length k = 1, · · · , 1024, we use torch.cuda.Event to record the latency of decoding one token. We run each decod- ing three times continuously and take their geometric mean as {tD B (k)}k=1,··· ,1024;B=1,··· ,16. For the prefilling phase, we profile the latency of batched prefilling the inputs with token length k in range(1, 700, 10) and batch size B = 1, · · · , 16, and denote it as tP B(k), where the superscript P stands for â prefillâ . We run each test seven times continuously, regard the first two times as the warmup tests, and take the geometric mean of the last five times as {tP B(k)}k=1,11,··· ,691;B=1,··· ,16. Once we get the latency profiling table, given a request with li tokens and the decoding batch size B, the latency of generating lo tokens can be estimated as: litlo-1 Tlislo, B) =tB(i) + SD tB(k), (1) k=l; where the subscripts i and o stand for â inputâ and â outputâ . Note that we only test the prefill- ing latency every ten token lengths (i.e., 1, 11, 21, · · · ) for fast profiling and estimate Ë tP B(li) by B(â li tP 25 | 2307.15337#81 | 2307.15337#83 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#83 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding The SoT decoding process consists of two stages: the skeleton stage and the point-expanding stage. Denoting the token length of the skeleton request and skeleton response as ls o, the token length of the longest point-expanding request and the longest point-expanding response as lpe i and lpe o , the number of the points as B, we can compute the latency of the skeleton and point-expanding stages as: # Ls(ls , lpe i , ls o) = T (ls o , B) = T (lpe # i , ls (2) o, 1), , lpe o , B). # Lpe(lpe i i (3) Using the latency profiling table, we can further estimate the average GPU computing performance in FLOPS (i.e., FLOPs per second) of decoding lo tokens with prefilling length li as L+lo-1 ¢D a k PP (I Jy, B) = te FB) Ti+loâ 1 , kal; tB(k) (4) where f D B (k) denotes the FLOPs of decoding one token with context length k, which is calculated by DeepSpeedâ s FLOPs profiler 6. Fig. 10b reports the average GPU computing performance during the process of decoding 64 tokens (prefilling length=128), i.e., P D(128, 64, B). Memory use torch.cuda.max_memory_allocated to record the memory consumption of prefill- ing sequences of different lengths and decoding with different context lengths and a batch size ranging from 1 to 16. Then, we calculate the peak memory of each stage as the maximum value of the prefilling and decoding phases, and calculate the overall peak memory of SoT as the maximum value of the skeleton and point-expanding stages. # 6https://deepspeed.readthedocs.io/en/latest/flops-profiler.html 26 | 2307.15337#82 | 2307.15337#84 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#84 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding # G EFFICIENCY EVALUATION G.1 SKELETON-OF-THOUGHT G.1.1 DETAILED STATISTICS OF TOKEN LENGTHS AND POINT NUMBERS 10.0 coding math 90 fermi 80 roleplay writing 70 knowledge 60 generic counterfactual 50 " 4.0 Average a TR APR IIB VB NIG EAA IIB VI adr Phot nst that} 7B 38 a8 Vine aN clateer REECE Seger coding math 500.0 fermi roleplay 400.0 writing knowledge 300.0 generic counterfactual 200.0 Average 100.0 RRR TE MG ie AAG Vag or ONL SE Sei engages TESS (a) The number of points B. (b) The normal answer length. coding 00.0 math 1750 fermi roleplay 150.0 writing oso knowledge 100.0 generic counterfactual 75.0 common-sense S00 Average 5.1% 138.538 3393 33, 7 r3B vi hgudBe35oeâ ¢ Sea ae Ne neo oodys ie coding | °* fermi oa |o3 os o« os 02 03 a4 05 02 02 a2 a2 roleplay os | os e4 04 03 04 03 02 03 02 a3 03 2.0 writing | 03 | oz nome a ow 02 01 02 on is knowledge] 02 | 02 02 02 03 03 noe 02 03 02 oF generic 02 | 02 02 03 04 03 02 0s 03 02 01 a2 a2 counterfactual 402] ox 02 04 04 cx 02 0: 0s 02 a3 03 â common-sense 02 | 02 02 a4 04 03 02 os 03 04 03 02 os Average | os | 02 02 a4 cs 04 03 os 0302 03 02 TR 138938 WAS VAS V3, 138, 138 I Qh oudGn-3 Feet | 2307.15337#83 | 2307.15337#85 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#85 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | (c) The maximum point-expanding response length. (d) The ratio of the maximum point-expanding re- sponse length to the normal answer length. odin i 60.0 math fermi 50.0 roleplay 40.0 writing knowledge soo generic wom on counterfactual eons 200 10.0 Average 65-1838 498 13 V3.3 WF 138) 138 vi hour 3 o9T* Re a aan coding 7.0 math eae fermi os 07 6.0 roleplay wou 5.0 writing ae 07 40 knowledge a generic aos 3.0 counterfactual aos ao 1.0 Average 1%, 438.438 3.3 3.3 3.3, 138,138 v3 hausen3 Se that that) 78 Ve ae. 12" 358 NS Career? â Gk SORE os (e) The imbalance degree of point-expanding response lengths (standard deviation of point token lengths). (f) The ratio of the final SoT answer length to the nor- mal answer length. Figure 11: The statistics of the token lengths and point numbers on the Vicuna-80 dataset. Each row corresponds to one question category, and each column corresponds to one model. # G.1.2 LATENCY BREAKDOWN: SOT STAGES AND PHASES Fig. 12 presents the absolute latencies of normal and SoT generations on Vicuna-80. Again, the speed-ups of SoT compared with normal generation is evident. We can see that the decoding phases predominantly account for the end-to-end latency. Consequently, although SoT has higher prefilling latency in the skeleton stage than the normal generation and introduces additional point-expanding | 2307.15337#84 | 2307.15337#86 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#86 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 27 Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding prefilling latency â which is expected â this has negligible impact on the overall latency and thereby the overall speed-up. Vicuna-338 V2.3 | ST Oe crra knowled9¢ |i LLaMA2-Chat-138 OpenChat-13B generic re UltraLm-138 coding ss Claude LLaMa2-Chat-7B Normal (prefil) common-sense | iii . lm Normal (decode) vicune-78V13 SoT skeleton (pref countertoct 2! i Vicuna-78 Vi.2 â mms SoT skeleton (decode) â oleploy | es StableVicuna-13B SoT point-expanding (prefill) CchatcPras Imm SOT pointexpanding (decode) oath © 5000 10000 1500 20000 25000 30000 35000 40000 ° 5000 10000 â «18000~â «20000 Latency (ms) Latency (ms) (a) Average latency across all question categories except math and code on different models. (b) Average latency across all models on different question categories. Figure 12: The latency breakdown of SoT and normal generations on the Vicuna-80 dataset. For open-source models, the latency breakdown of the prefilling and decoding phases is shown in dif- ferent colors. For API-based models, we do not record such latency breakdown information; the bar labeled as â (decode)â indicates the overall latency of prefilling and decoding phases. G.1.3 EFFICIENCY EVALUATION ON NVIDIA RTX 3090 We present the SoT speed-ups and latency breakdown on RTX 3090 in Fig. 13. We test the three 7B models, as their FP16-precision version can be run on an RTX 3090 GPU without further peak memory optimization techniques such as weight quantization (Frantar et al., 2022; Lin et al., 2023) or offloading (Sheng et al., 2023). On these three models, SoT can obtain 1.94à to 2.40à speed-up on average on Vicuna-80. | 2307.15337#85 | 2307.15337#87 | 2307.15337 | [
"2302.13971"
]
|
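As a complement to the latency breakdown above, here is a minimal sketch of the two-stage SoT pipeline with parallel point expansion and coarse per-stage timing. The `generate(prompt)` callable, the prompt wording, and the skeleton-parsing logic are illustrative assumptions, not the paper's exact prompts or serving code.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def sot_generate(question, generate, max_workers=8):
    """Stage 1: skeleton; Stage 2: expand all points in parallel (sketch)."""
    t0 = time.time()
    skeleton = generate(
        "Give a concise skeleton (3-10 short numbered points) for answering: "
        + question
    )
    t_skeleton = time.time() - t0

    points = [ln.strip() for ln in skeleton.splitlines() if ln.strip()]

    # The point-expanding requests are mutually independent, so they can be
    # issued concurrently; this stage's wall-clock time is roughly the latency
    # of the slowest point rather than the sum over points.
    t1 = time.time()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        expansions = list(pool.map(
            lambda p: generate(
                f"Question: {question}\nSkeleton:\n{skeleton}\n"
                f"Expand the point \"{p}\" in 1-2 sentences."
            ),
            points,
        ))
    t_expand = time.time() - t1

    answer = "\n".join(expansions)
    return answer, {"skeleton_s": t_skeleton, "point_expanding_s": t_expand}
```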
2307.15337#87 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | For the five question categories on which SoT can provide high-quality answers (i.e., knowledge, common-sense, generic, roleplay, counterfactual), SoT speeds up the overall answer generation process by 1.96× to 2.52×. Note that for the math category, despite the average speed-up being 1.20× when averaging the speed-up across the three math questions, SoT does not reduce the absolute latency of processing the three questions. [Figure 13: latency breakdown bars for the three 7B models and for each question category, with the same prefill/decode legend as Figure 12; the per-bar values are omitted here.] Figure 13: The latency breakdown of SoT and normal decoding on the Vicuna-80 dataset. The average speed-up across questions is also marked on the figure. # G.1.4 ACTUAL LATENCY TESTING This section reports the actual SoT speed-up on Vicuna-80 with batch testing (instead of analyzing with pre-made profiling tables), using a single NVIDIA A100 GPU. We test the actual end-to-end latency of SoT and normal decoding with the 9 open-source models. For each model, we run the speed-up test five times and plot the box in Fig. 14. (A rough sketch of this measurement protocol is given below.) | 2307.15337#86 | 2307.15337#88 | 2307.15337 | [
"2302.13971"
]
|
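A rough sketch of the kind of end-to-end measurement described above (five runs per model, speed-up taken as normal latency divided by SoT latency). The timing harness and the `normal_generate`/`sot_generate` callables are assumptions; the paper's batch-testing code may differ.

```python
import time
from statistics import mean

def measure_speedup(questions, normal_generate, sot_generate, runs=5):
    """Per-run end-to-end latency ratio, summarized over runs (assumed protocol)."""
    ratios = []
    for _ in range(runs):
        t0 = time.time()
        for q in questions:
            normal_generate(q)          # normal sequential decoding
        t_normal = time.time() - t0

        t1 = time.time()
        for q in questions:
            sot_generate(q)             # SoT decoding of the same questions
        t_sot = time.time() - t1

        ratios.append(t_normal / t_sot)
    return mean(ratios), min(ratios), max(ratios)
```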
2307.15337#88 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | As shown in Fig. 14a, the current SoT solution obtains a > 2× speed-up on 6 out of the 9 open-source models (i.e., Vicuna-7B V1.1, Vicuna-7B V1.3, UltraLM-13B, LLaMA2-Chat-7B, Vicuna-13B V1.3, and LLaMA2-Chat-13B), and a > 1.7× speed-up on OpenChat-13B and Vicuna-33B V1.3. SoT achieves no speed-up on StableVicuna-13B. As shown in Fig. 14b, for the five question categories on which SoT can provide high-quality answers (i.e., knowledge, common-sense, generic, roleplay, counterfactual), SoT speeds up the overall answer generation process by 2.15× to 2.50×. [Figure 14: box plots of the measured speed-ups per model and per question category; the individual values are omitted here.] (a) Average speed-up on different models. (b) Average speed-up on different question categories. Figure 14: Speed-ups on 9 open-source models on the Vicuna-80 dataset with actual batch testing. G.2 SKELETON-OF-THOUGHT WITH ROUTER The overhead brought by the router inference is relatively small: | 2307.15337#87 | 2307.15337#89 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#89 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | On the Vicuna-80 dataset, the prompting and trained routers have an average latency of 0.65s (0.39s∼1.37s) and 0.04s (0.008s∼1.55s), respectively. On the WizardLM dataset, the average latency of the prompting and trained routers is 0.80s (0.36s∼2.22s) and 0.03s (0.009s∼2.52s), respectively. (A schematic sketch of the two router variants is given below.) # G.2.1 SPEED-UP BREAKDOWN: MODELS Fig. 15 shows the speed-ups of SoT-R on different models on the Vicuna-80 dataset. Fig. 16 and Fig. 17 show the speed-ups of SoT-R on different models on the WizardLM dataset. We can observe that on Vicuna-80, the two methods yield similar speed-ups, whereas on WizardLM, the GPT-4 prompting router usually obtains higher speed-ups than the trained router, especially on GPT-4 itself. [Figure 15: per-model speed-up box plots for SoT-R on Vicuna-80; the individual values are omitted here.] (a) Average speed-up across all question categories with prompting router. (b) Average speed-up across all question categories with trained router. Figure 15: Speed-ups of SoT-R on different models on the Vicuna-80 dataset. | 2307.15337#88 | 2307.15337#90 | 2307.15337 | [
"2302.13971"
]
|
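The prompting router and the trained router mentioned above differ mainly in what produces the routing decision. The sketch below is schematic only: the prompt wording, the 0.5 threshold, and the `generate`/`classifier` callables (e.g., a fine-tuned RoBERTa-style classifier for the trained router) are illustrative assumptions, not the paper's exact implementation.

```python
def route(question, generate=None, classifier=None):
    """Decide whether to answer with SoT or with normal decoding.

    Prompting router: one extra call to a strong LLM (the slower option,
    around 0.65-0.80 s on average in the measurements above).
    Trained router:   a small trained classifier (around 0.03-0.04 s).
    """
    if classifier is not None:                       # trained router
        return "sot" if classifier(question) > 0.5 else "normal"
    verdict = generate(                              # prompting router
        "Would the following question be well answered as a list of "
        "independent, short points? Reply with only 'yes' or 'no'.\n"
        "Question: " + question
    )
    return "sot" if verdict.strip().lower().startswith("yes") else "normal"
```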
2307.15337#90 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | [Figure 16: per-model speed-up box plots for SoT-R on WizardLM; the individual values are omitted here.] (a) Average speed-up across all question categories with prompting router. (b) Average speed-up across all question categories with trained router. Figure 16: Speed-ups of SoT-R on different models on the WizardLM dataset. [Figure 17: per-model comparison of SoT (w/o router), SoT-R with the prompting router, and SoT-R with the trained router; the individual values are omitted here.] Figure 17: Speed-ups of SoT and SoT-R on different models on the WizardLM dataset. # G.2.2 SPEED-UP BREAKDOWN: CATEGORIES Fig. 18 and Fig. 19 show the speed-ups of SoT-R on different question categories of the Vicuna-80 dataset. The trained router achieves slightly higher speed-ups on most of the categories (except for knowledge, writing, and fermi). | 2307.15337#89 | 2307.15337#91 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#91 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Fig. 20 and Fig. 21 show the speed-ups of SoT-R on different question categories of the WizardLM dataset. We can observe that on 19 out of 29 categories, using the prompting router achieves higher speed-ups than using the trained router. [Figure 18: per-category speed-up box plots for SoT-R on Vicuna-80; the individual values are omitted here.] (a) Speed-ups of SoT-R with prompting router on different question categories. (b) Speed-ups of SoT-R with trained router on different question categories. Figure 18: Speed-ups of SoT-R on different question categories of the Vicuna-80 dataset. [Figure 19: per-category comparison of SoT (w/o router), SoT-R with the prompting router, and SoT-R with the trained router; the individual values are omitted here.] | 2307.15337#90 | 2307.15337#92 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#92 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Figure 19: Speed-ups of SoT and SoT-R on different question categories of the Vicuna-80 dataset. [Figure 20: per-category speed-up box plots for SoT-R on the 29 WizardLM question categories (e.g., Counterfactual, Academic Writing, Ethics, Chemistry, Roleplay, Computer Science, Code Generation, Reasoning, Physics, and others); the individual values are omitted here.] | 2307.15337#91 | 2307.15337#93 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#93 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | (a) Speed-ups of SoT-R with prompting router on different question categories. (b) Speed-ups of SoT-R with trained router on different question categories. Figure 20: Speed-ups of SoT-R on different question categories of the WizardLM dataset. [Figure 21: per-category comparison of SoT (w/o router), SoT-R with the prompting router, and SoT-R with the trained router across the 29 WizardLM question categories; the individual values are omitted here.] Figure 21: Speed-ups of SoT and SoT-R on different question categories of the WizardLM dataset. # H OVERHEAD OF SOT IN DIFFERENT SCENARIOS Despite the optimizations made to the decoding phase, SoT brings overhead to the prefilling phase because the model needs to handle additional SoT prompts. Table 6 reports SoT's prefilling overhead for the API-based models. These statistics are averaged across the Vicuna-80 questions that are suitable for SoT (according to our manual annotation). We can see that SoT significantly increases the number of prefilling tokens. This is because SoT issues an independent point-expanding request for each point, with the average number of points being 6.8 on the Vicuna-80 dataset across all evaluated models. Consequently, the APIs need to prefill the point-expanding request multiple times. (An illustrative calculation of this overhead ratio is given below.) | 2307.15337#92 | 2307.15337#94 | 2307.15337 | [
"2302.13971"
]
|
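Before the tables, a small illustrative calculation of where the prefilling-token ratio comes from: the reported ratios are consistent with (SoT Stage 1 + SoT Stage 2) divided by the normal prefilling tokens, and Stage 2 grows with the number of points because each point-expanding request re-prefills its prompt. The helper below and its token counts are illustrative assumptions, not the paper's code.

```python
def prefill_token_ratio(normal_tokens, stage1_tokens, point_prompt_tokens,
                        num_points, shared_prefix_tokens=0.0):
    """Approximate (SoT Stage 1 + SoT Stage 2) / Normal prefilling tokens.

    For API serving, every point-expanding request re-prefills its full prompt,
    so Stage 2 is roughly num_points * point_prompt_tokens. When serving
    open-source models locally, the common prefix of the point-expanding
    requests can be prefilled once with batch size 1 and reused, removing
    (num_points - 1) * shared_prefix_tokens from Stage 2.
    """
    stage2 = num_points * point_prompt_tokens \
        - (num_points - 1) * shared_prefix_tokens
    return (stage1_tokens + stage2) / normal_tokens

# Illustration with made-up token counts and roughly 7 points per question:
print(prefill_token_ratio(normal_tokens=12.5, stage1_tokens=171.4,
                          point_prompt_tokens=120.0, num_points=7))
```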
2307.15337#94 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Table 6: SoT's prefilling token overhead for API-based models (number of prefilling tokens; Ratio = SoT / Normal). Claude: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 808.91, Ratio 78.30. ChatGPT-3.5: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 591.31, Ratio 60.92. GPT-4: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 983.09, Ratio 92.21. When using SoT to serve the open-source models, a simple and small trick is to prefill the common prefix of point-expanding requests with a batch size of 1 during Stage 2 (i.e., the point-expanding stage). Table 7 shows the prefilling overhead after applying this trick. Although the ratio is considerably smaller than that of the API-based models, this computational overhead remains a concern, especially during periods of high system workload. There are some possibilities to further reduce the token and computational overhead that are worth exploring in future work. To name a few: (1) When using SoT in serving systems, we can simply reuse the key-value cache containing the question and skeleton from Stage 1 during Stage 2, rather than re-prefilling them as in a multi-round conversation. (2) Generally, as LLM capabilities continue to evolve and prompt tuning techniques advance (Shin et al., 2020; Li & Liang, 2021; Lester et al., 2021), the possibility of using much shorter prompts to activate the SoT mode in the future holds promise, which would significantly mitigate the token or computational overhead. Table 7: SoT's computational overhead (in terms of the number of prefilling tokens) for open-source models. | 2307.15337#93 | 2307.15337#95 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#95 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | LLaMA2-Chat-7B: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 216.49, Ratio (SoT / Normal) 30.98. LLaMA2-Chat-13B: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 216.41, Ratio 30.98. OpenChat-13B: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 234.38, Ratio 32.41. Vicuna-7B V1.3: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 211.61, Ratio 30.59. Vicuna-13B V1.3: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 273.39, Ratio 35.53. Vicuna-33B V1.3: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 258.88, Ratio 34.37. StableVicuna-13B: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 312.03, Ratio 38.61. UltraLM-13B: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 290.88, Ratio 36.92. Vicuna-7B V1.1: Normal 12.52, SoT Stage 1 171.41, SoT Stage 2 196.64, Ratio 29.40. I ANSWER QUALITY EVALUATION I.1 SKELETON-OF-THOUGHT I.1.1 ANSWER ANALYSIS: DIFFERENT MODELS Models on which SoT has low net win rates. We observe two types of reasons for the low net win rates. (1) OpenChat-13B, Vicuna-7B V1.1, and LLaMA2-Chat-13B. Weak models such as OpenChat-13B and Vicuna-7B V1.1 are not able to follow the SoT prompts precisely. For OpenChat-13B and LLaMA2-Chat-13B, sometimes the skeleton contains undesired contents (e.g., completing the skeleton points, as shown below). | 2307.15337#94 | 2307.15337#96 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#96 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Question: How many lightning strikes occur on Earth each day? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step. SoT's answer (OpenChat-13B) (Orange texts are the skeleton.) (Only showing the skeleton) 1.Introduction. 2. The Earth's atmosphere. 3. Lightning formation. 4. Electrical discharge. 5. Lightning strikes per second. 6. Calculating the total. Skeleton: 1. Introduction. | 2307.15337#95 | 2307.15337#97 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#97 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Lightning is a beautiful and awe-inspiring natural phenomenon, but it can also be dangerous. It occurs when positively charged particles in a thundercloud discharge to the negatively charged Earth. The process is still not fully understood, but scientists have been able to estimate how many lightning strikes occur on Earth each day. Skeleton: 1. The Earth's atmosphere. The Earth's atmosphere plays a crucial role in the formation of lightning. It consists of different layers, with the troposphere being the lowest and most active. The troposphere is where most of the weather occurs, and it is where lightning primarily forms. The atmosphere's temperature, humidity, and wind patterns all contribute to the likelihood of a lightning strike. | 2307.15337#96 | 2307.15337#98 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#98 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Skeleton: 2. Lightning formation. When warm, moist air rises, it cools and forms water ⟨more texts are omitted...⟩ Both OpenChat-13B and Vicuna-7B V1.1 sometimes do not write anything in the point-expanding stage when details are needed. Question: As a superhero, how would you explain your origin story to a curious child? SoT's answer (Vicuna-7B V1.1) (Orange texts are the skeleton.) 1. Born with extraordinary abilities. 2. Trained to use powers for the greater good. 3. Protecting the innocent and fighting villains.I use my powers to defend the defenseless and bring justice to those who have been wronged. 4. Inspired by courage, justice, and selflessness. | 2307.15337#97 | 2307.15337#99 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#99 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | These two issues result in bad net win rates. (2) Claude. For strong models such as Claude, we do not see major issues with SoT's answers except the common issues across all models discussed in § 3.2.3. We hypothesize that the low net win rates are mainly due to the fact that their normal sequential generation is already very good. Models on which SoT has high net win rates. The models with high net win rates (Vicuna-13B V1.3, StableVicuna-13B, and UltraLM-13B) are between the above two extremes. On one hand, they are good enough to understand SoT prompts. While we observe that they sometimes have the same issues as OpenChat-13B and Vicuna-7B V1.1 discussed before, these issues happen much less frequently. On the other hand, their normal sequential generation performance has more room for improvement than Claude and ChatGPT-3.5, so the benefits from SoT are more visible on these models. I.1.2 ANSWER ANALYSIS: DIFFERENT QUESTION CATEGORIES # Categories on which SoT has low net win rates. (1) Math. Math questions require step-by-step thinking. Without knowing the previous steps, it is hard to derive the following steps. Chain-of-Thought methods (Kojima et al., 2022; Wei et al., 2022) emphasize the importance of expanding the sequential thinking steps to solve these problems successfully. In contrast, SoT requires the models to (a) come up with the skeleton of the solution first and (b) deduce each individual step independently without referring to previous results. Both of these are challenging tasks. Strong models are able to get (a) correct but fail at (b). In the following example, ChatGPT-3.5 gets the skeleton steps correct. However, without knowing the results from the first step, the model starts making mistakes in the second step. (Footnote 7: One additional (minor) issue of the API-based ChatGPT-3.5 is that the skeleton point "1." is repeated in the answer.) | 2307.15337#98 | 2307.15337#100 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#100 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | (Math) Solve for x in the equation 3x + 10 = 5(x - 2). SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Move all x terms to one side. 1. Move all x terms to one side by subtracting 5x from both sides, giving -2x + 10 = -10. 2. Move all constant terms to the other side. This can be done by subtracting 5x - 10 from both sides, leaving 3x - 5x = -10 - 10. 3. Simplify the equation. Combine like terms on both sides of the equation to get 3x + 10 = 5x - 10. 4. Divide both sides by the coefficient of x. Divide both sides by 2 to get x = -5. 5. Check the solution by plugging it back into the original equation.Substitute the value of x obtained in step 4 back into the original equation and verify that both sides are equal. Normal answer (ChatGPT-3.5) 3x + 10 = 5(x - 2) can be simplified as follows: 3x + 10 = 5x - 10 (distribute the 5 on the right side) 10 + 10 = 5x - 3x (subtract 3x from both sides) 20 = 2x (combine like terms on the right side) x = 10 (divide both sides by 2) Therefore, the solution for x in the equation 3x + 10 = 5(x - 2) is x = 10. For weaker models, it is hard to even get step (a) correct. For example, as shown below, the third step "Apply FOIL" in Vicuna-13B V1.3's SoT solution comes out of nowhere. This makes SoT's solution incorrect (though the solution from the normal generation is also incorrect). (Math) Solve for x in the equation 3x + 10 = 5(x - 2). SoT's answer (Vicuna-13B V1.3) (Orange texts are the skeleton.) 1. | 2307.15337#99 | 2307.15337#101 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#101 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Simplify equation. We can simplify the equation by combining like terms: 3x + 10 = 5(x - 2) => 3x + 10 = 5x - 10 => x + 5 = 2 => x = 2 - 5 => x = -3 2. Factor 5(x - 2). 5(x - 2) = 5x - 10. 3. Apply FOIL. * Multiply the binomials in the equation, using the first term of each binomial for the first term of the product, the outside term for the second term of the product, the second term of each binomial for the third term of the product, and the remaining terms for the fourth term of the product. * 3(x - 2) + 10 = 5x + 5(-2) * 3x - 6 = 5x - 10 * 3x = 5x - 10 * x = 5x - 10 / 3 * x = 5/3x - 10/3 * x = 2/3x - 5/3 4. Solve for x. After applying the FOIL method and simplifying the equation, we get 3x = 17. Therefore, x = 17/3. Normal answer (Vicuna-13B V1.3) To solve for x in the equation 3x + 10 = 5(x - 2), we need to first distribute the 5 to multiply both sides by 5. 3x + 10 = 5(x - 2) 5(3x + 10) = 5(x - 2) 15x + 50 = 5x - 10 Add 5x to both sides: 20x + 50 = 5x Subtract 5x from both sides: 15x = 0 Divide both sides by 15: x = 0 Therefore, the solution to the equation is x = 0. | 2307.15337#100 | 2307.15337#102 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#102 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | (2) Fermi. Similar to math questions, questions in the fermi category require solvers to make assumptions and calculate the answer step-by-step. For example, to answer the question in the figure below, the solver is required to make an assumption about when life began on Earth. In step 4, ChatGPT-3.5 does not know that it has already assumed 3.5 billion years in step 3 and uses the wrong number 4.54 billion in the calculation. In step 5, ChatGPT-3.5 does not know that it has already made the above assumptions and refuses to answer "due to the uncertainties in estimating the age of the Earth". In contrast, normal generation is able to create a consistent solution. | 2307.15337#101 | 2307.15337#103 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#103 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Question: (Fermi) How many times has the Earth orbited the Sun since the beginning of life? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step. SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Estimate the age of the Earth. The age of the Earth is estimated to be around 4.54 billion years, based on radiometric dating of rocks and meteorites. 2. Determine when life began on Earth. Life on Earth is estimated to have begun around 3.5 billion years ago, based on the discovery of ancient microfossils. 3. Calculate the number of years since life began. The oldest known fossils suggest that life on Earth began around 3.5 billion years ago. Therefore, the number of years since life began on Earth is approximately 3.5 billion years. 4. Divide the number of years by the length of one orbit. Assuming an orbital period of 365.25 days, the Earth has orbited the Sun approximately 4.54 billion / 365.25 = 12,425,000 times since the beginning of life. | 2307.15337#102 | 2307.15337#104 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#104 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 5. The Earth has orbited the Sun approximately X times since the beginning of life. Unfortunately, it is impossible to give a specific number for the amount of times the Earth has orbited the Sun since the beginning of life due to the uncertainties in estimating the age of the Earth and when life first emerged. # Normal answer (ChatGPT-3.5) The beginning of life on Earth is estimated to have occurred around 3.5 to 4 billion years ago. Since that time, the Earth has been orbiting the Sun continuously. The length of one orbit, or one year, is approximately 365.25 days. This means that the Earth completes one full orbit around the Sun every 365.25 days. To calculate the number of times the Earth has orbited the Sun since the beginning of life, we can divide the total number of days by the length of one orbit: Total number of days since the beginning of life = (3.5 billion years) x (365 days per year) = 1.2775 trillion days Number of orbits = (1.2775 trillion days) / (365.25 days per orbit) = 3.5 billion orbits Therefore, the Earth has orbited the Sun approximately 3.5 billion times since the beginning of life. | 2307.15337#103 | 2307.15337#105 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#105 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | As the following example shows, there are also cases when SoT solves the question (with minor computation errors) by embedding the computation steps directly in the skeleton during the skeleton stage. The point-expanding stage will then add little content (except some undesired repetition of the following skeletons occasionally). This happens both on small/weak models and large/powerful models. Although this behavior is not what we originally expected, it sheds light on a promising direction for making SoT a general framework: in problems where step-by-step thinking is needed, the SoT model automatically falls back to the sequential generation mode by outputting the complete answer in the skeleton stage and skipping the point-expanding stage. Future work needs to be done to control and encourage this behavior better. Question: (Fermi) How many times does the average human blink in a lifetime? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step. SoT's answer (Vicuna-7B V1.1) (Orange texts are the skeleton.) 1. | 2307.15337#104 | 2307.15337#106 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#106 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | The average human blinks 12-15 times per minute. 2. There are 60 minutes in an hour. 3. Therefore, the average human blinks 60 x 12-15 = 720-900 times per hour. 4. There are 24 hours in a day.Therefore, the average human blinks 720-900 x 24 = 17,520-22,400 times per day. 5. Therefore, the average human blinks 720-900 x 24 = 17,520-22,400 times per day.This translates to blinking approxi- mately 50-65 times per hour, which is a significant portion of the day. 6. There are 365 days in a year.Therefore, the average human blinks 17,520-22,400 x 365 = 5,857,600-8,124,000 times per year. 7. Therefore, the average human blinks 17,520-22,400 x 365 = 5,857,600-8,124,000 times per year. 8. The human lifespan is typically 70-80 years.Therefore, the average human blinks 80,000-112,000 times in their lifetime. 9. Therefore, the average human blinks 5,857,600-8,124,000 / 70-80 = 80,000-112,000 times in their lifetime. | 2307.15337#105 | 2307.15337#107 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#107 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | # Normal answer (Vicuna-7B V1.1) The average human blinks about 12-15 times per minute, which translates to about 18,200 to 20,400 blinks per day. Over the course of a lifetime, the number of blinks is estimated to be around 7,500,000 to 8,000,000. Blinking is a reflexive action that helps to keep the eyes moist and comfortable, as well as to protect the eyes from dust and other foreign particles. The frequency of blinking can vary depending on a variety of factors, including age, health, and environmental conditions. For example, people who work in dusty or dry environments may blink more frequently than those who work in a clean, moist environment. Over the course of a lifetime, the average person will spend approximately 27,375 days or 700,000 hours awake. This means that the majority of the time spent blinking will occur in the first few decades of life, with the frequency of blinking gradually decreasing as people age. It is important to note that there is a significant amount of individual variation in the frequency of blinking. Some people may blink more or less than the average, depending on a variety of factors such as age, health, and environmental condi- tions. Additionally, blinking patterns can be affected by factors such as stress, fatigue, and certain medical conditions. | 2307.15337#106 | 2307.15337#108 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#108 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | (3) Coding. While in most cases the models are able to decompose the coding problem into smaller tasks in the skeleton stage, the generation quality during the point-expanding stage is bad, potentially because we did not carefully craft a special point-expanding prompt for coding. The failure cases can be grouped into two categories. (a) In some cases, the model only generates a description of how the implementation can be done without giving the code. Question: (Coding) Develop a C++ program that reads a text file line by line and counts the number of occurrences of a specific word in the file. | 2307.15337#107 | 2307.15337#109 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#109 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | # SoT's answer (Claude) (Orange texts are the skeleton.) 1. Open file stream by declaring an ifstream object and using the open() method to open the file in read mode. 2. Read line by line from the file stream into a string using getline(). 3. Find word by searching each line for the target word using string::find and keeping track of matches. 4. Increment counter by one each time the specified word is encountered in the current line. 5. | 2307.15337#108 | 2307.15337#110 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#110 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Close file The file stream should be closed after finishing reading the file to free up resources. This can be done by calling the close() method on the file stream object. This might be due to the fact that the powerful models (e.g., Claude) follow the point-expanding prompt "Write it **very shortly** in 1∼2 sentence" precisely. Better prompts tailored to coding could help. (b) In some other cases, the model tends to generate the complete code repeatedly in several points. Question: (Coding) Develop a C++ program that reads a text file line by line and counts the number of occurrences of a specific word in the file. | 2307.15337#109 | 2307.15337#111 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#111 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | SoT's answer (OpenChat-13B) (Orange texts are the skeleton.) 1. Read file line by line. 2. Store lines in a container. 3. Count occurrences of specific word. ```cpp ⟨more texts are omitted...⟩ ``` This code uses the C++ standard library to read a text file line by line, store each line in a container, and count the occurrences of a specific word. The results are printed. 4. Print results. | 2307.15337#110 | 2307.15337#112 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#112 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Here's the C++ program that reads a text file line by line and counts the number of occurrences of a specific word in the file: ```cpp ⟨more texts are omitted...⟩ ``` Please note that this is a very basic implementation. You may need to adjust it depending on your specific requirements. Please note that this is a very basic implementation. You may need to adjust it depending on your specific requirements. This problem might also be alleviated by prompts that explicitly tell the model to only generate a "code snippet" in each skeleton point. Although better prompt engineering could lead to better results, we expect that SoT is not suitable for the coding questions in Vicuna-80. The reason is that the solutions to the problems in Vicuna-80 are usually a few lines with strong dependencies between each other. Without knowing the previously defined variable names or imported libraries, it is hard (or even impossible) to implement the subsequent code correctly. As a consequence, generating different parts of the answers in parallel is not suitable here. Similar to the math questions, automatically falling back to outputting all the code in the skeleton stage and not triggering the point-expanding stage might be more suitable for answering this question type. However, we expect that SoT could be helpful for larger coding tasks that involve multiple modules (e.g., functions, classes). The skeleton stage could be in charge of defining the interfaces between different modules (e.g., functionalities, names, parameters, return types). The point-expanding stage could be in charge of implementing these modules or using these modules to complete the final task, which can be done in parallel. This mimics the common practice in software engineering. (4) Writing. The writing questions usually ask for an email, a blog post, or a review in a given scenario. In the detailed evaluation output from FastChat and LLMZoo metrics, the major and common complaint is that SoT's answers are (slightly) less detailed. This could potentially be improved by point-expanding prompts that request more details. | 2307.15337#111 | 2307.15337#113 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#113 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | While inspecting SoT's answers manually, we observe one additional issue. Writing problems usually require a coherent passage without embedded skeleton points, whereas our current SoT pipeline (§ 2) concatenates skeleton points as part of the answer. One example is shown below. Question: (Writing) Can you help me write a formal email to a potential business partner proposing a joint venture? # SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Introduce yourself and your company. Dear [Recipient], I hope this email finds you well. My name is [Your Name] and I am the [Your Position] at [Your Company Name]. 2. State the purpose of the email. The purpose of this email is to propose a joint venture between our companies. 3. Explain the potential benefits of the joint venture. By combining our resources and expertise, we can increase market share and profitability for both companies. 4. Provide details on the proposed collaboration. Our proposed collaboration would involve combining our expertise in marketing with your strong manufacturing capabilities to create a new line of products. 5. Suggest a follow-up meeting or call. Please let me know your availability for a follow-up call next week to discuss this opportunity further. 6. Thank the recipient for their time and consideration.Thank you for considering this proposal and we look forward to hearing back from you soon. # Normal answer (ChatGPT-3.5) Of course, I can help you with that. Here's a sample email you can use as a guide: Subject: Proposal for Joint Venture Dear [Business Partner's Name], I hope this email finds you well. I am writing to propose a joint venture between our companies, [Your Company Name] and [Potential Partner Company Name]. We believe that our respective expertise can create a successful partnership that will benefit both of our businesses. We have conducted thorough research on your company and admire your innovative approach to [insert industry or niche]. We think our [insert expertise or specialty] could complement your [insert potential partner's specialty or expertise]. By collaborating on this joint venture, we believe we can create a new and innovative product that will appeal to our respective markets. We are confident that our companies would work well together and that our joint venture would be successful. | 2307.15337#112 | 2307.15337#114 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#114 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | We would like to set up a meeting to discuss the specifics of our proposal and explore how we can move forward with the joint venture. Thank you for considering this proposal. We look forward to hearing from you soon. Best regards, # [Your Name] In such cases, simply removing the skeleton points would greatly improve the answer quality. To make SoT more general, one future direction is to let the LLM itself decide whether the point index and point skeleton should be included to make the final answer more natural and fluent. # Categories on which SoT has high net win rates. (1) Counterfactual, knowledge, common sense, generic. All these four categories have the same characteristic: the ideal answer should cover several relatively independent points. Having the LLM generate a skeleton before expanding the details can result in more comprehensive discussions for the question. Besides, organizing the answers as a list of points makes the answers easier to read, whereas the answers from normal generation are sometimes less structured and less readable. Below, we give one example for each question category to illustrate these points. | 2307.15337#113 | 2307.15337#115 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#115 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | (Counterfactual) What if the Internet had been invented during the Renaissance period? # SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Advancements in communication.The invention of the Internet during the Renaissance period would have revolutionized communication, allowing for faster and more efficient exchange of ideas and information across vast distances. 2. Increased access to knowledge.The invention of the printing press during the Renaissance period already allowed for greater access to knowledge, but the internet would have revolutionized this even further by providing instantaneous access to information from all over the world. 3. Improved dissemination of ideas.If the internet had been invented during the Renaissance period, the spread of ideas and knowledge would have been much faster and widespread than it was at the time. This could have led to a faster progression of scientific and artistic innovations. 4. Impact on art and literature.The invention of the internet during the Renaissance period would have revolutionized the way art and literature are created, shared, and consumed, leading to a more diverse and globally-influenced artistic and literary scene. 5. Potential for global trade.The invention of the internet during the Renaissance period would have greatly expanded global trade opportunities, allowing for easier communication and collaboration between merchants and traders across continents. 6. Influence on political and social structures.The invention of the internet during the Renaissance period could have led to significant changes in political and social structures, allowing for greater democratization and communication among people from different regions and classes. | 2307.15337#114 | 2307.15337#116 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#116 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | # Normal answer (ChatGPT-3.5) If the Internet had been invented during the Renaissance period, it would have had a significant impact on the way people communicated, shared ideas, and gained knowledge. The Renaissance was a time of great intellectual and artistic growth, and the Internet would have facilitated the spread of these ideas and innovations. Scholars and artists would have been able to share their work more easily, and people from different parts of the world would have been able to connect and collaborate more easily. The Internet would have also made it easier to access information, which could have accelerated the pace of scientific and technological advancements. However, it's important to note that the infrastructure required for the Internet to exist did not exist during the Renaissance period. There were no telecommunications networks or electricity grids, so the idea of the Internet would have been impossible to realize. Nonetheless, it's interesting to think about how the Renaissance might have been different if the Internet had existed at that time. | 2307.15337#115 | 2307.15337#117 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#117 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Question: (Knowledge) What are some potential implications of using a single-use plastic bottle versus a reusable bottle on both the environment and human health? | 2307.15337#116 | 2307.15337#118 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#118 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Single-use plastic waste.Single-use plastic bottles contribute to the massive amount of plastic waste that ends up in landfills and oceans, harming wildlife and ecosystems. 2. Pollution of waterways.Single-use plastic bottles contribute to the pollution of waterways, as they often end up in rivers, lakes, and oceans, causing harm to aquatic life and the overall ecosystem. 3. Harmful chemicals leaching.Single-use plastic bottles may release harmful chemicals such as bisphenol A (BPA) and phthalates into the water or liquid they contain, posing a risk to human health. 4. Greenhouse gas emissions.The production and disposal of single-use plastic bottles contribute to greenhouse gas emissions, which contribute to climate change. Using a reusable bottle reduces the need for constant production and disposal, therefore reducing greenhouse gas emissions. 5. Microplastic ingestion.Single-use plastic bottles contribute to the accumulation of microplastics in the environment, which can be ingested by humans through food and water, leading to potential health risks. 6. Increased risk of disease.Single-use plastic bottles can harbor harmful bacteria, leading to an increased risk of disease if not disposed of properly. Reusable bottles, when cleaned regularly, pose a lower risk. Normal answer (ChatGPT-3.5) Using a single-use plastic bottle has significant implications on the environment and human health. Environmental implications: 1. Plastic pollution: Single-use plastic bottles are one of the major contributors to plastic pollution in oceans and landfills. They take hundreds of years to decompose, and during that time, they release toxic chemicals and microplastics into the environment. 2. Carbon emissions: Manufacturing and transporting single-use plastic bottles require a significant amount of energy and resources, which contribute to greenhouse gas emissions and climate change. 3. Natural resource depletion: The production of single-use plastic bottles requires large amounts of water and oil, which are finite resources. Health implications: 1. Chemical exposure: Single-use plastic bottles contain chemicals such as Bisphenol A (BPA) and phthalates, which can leach into the water and pose a risk to human health. 2. Microplastics ingestion: When plastic bottles break down, they release microplastics, which can enter the food chain and be ingested by humans. | 2307.15337#117 | 2307.15337#119 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#119 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Using a reusable bottle, on the other hand, has a positive impact on the environment and human health. Environmental benefits: 1. Reduced plastic waste: Reusable bottles significantly reduce the amount of plastic waste generated. 2. Lower carbon emissions: Reusable bottles have a lower carbon footprint as they can be used multiple times. 3. Reduced resource depletion: The production of reusable bottles requires fewer resources than single-use plastic bottles. Health benefits: 1. Chemical-free: Reusable bottles are typically made from materials that are free from harmful chemicals, reducing the risk of chemical exposure. 2. Reduced microplastics: Reusable bottles do not break down easily, reducing the release of microplastics into the environment. Overall, using a reusable bottle is a more sustainable and healthier option than using a single-use plastic bottle. | 2307.15337#118 | 2307.15337#120 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#120 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Question: (Common sense) How can you determine if a person is genuinely interested in a conversation or simply being polite? # SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Pay attention to body language.Notice if their posture is open or closed, if they are facing towards you or away, and if they are fidgeting or relaxed. These can all indicate their level of interest in the conversation. 2. Listen for engaged responses.If the person is genuinely interested in the conversation, they will respond with thoughtful and relevant comments or questions rather than short, generic responses. 3. Observe eye contact.If the person maintains steady eye contact and their gaze is focused on you, they are likely genuinely interested in the conversation. On the other hand, if they frequently look away or seem distracted, they may just be being polite. 4. Look for follow-up questions.If the person asks follow-up questions and seems genuinely interested in what you're saying, then they are likely interested in the conversation and not just being polite. 5. Notice tone of voice.A person who is genuinely interested in a conversation will often have a more enthusiastic and animated tone of voice, compared to someone who is simply being polite and giving short, monotone responses. # Normal answer (ChatGPT-3.5) As an AI assistant, I cannot determine a person's true intentions or feelings. However, there are some social cues that can indicate whether a person is genuinely interested in a conversation or not. For example, if they maintain eye contact, lean in, nod their head and ask follow-up questions, it can be a sign that they are engaged in the conversation. On the other hand, if they avoid eye contact, give short or vague responses, or seem distracted, it may indicate that they are not as interested. | 2307.15337#119 | 2307.15337#121 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#121 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Ultimately, it's important to remember that everyone has different communication styles, and it can be difficult to accurately interpret someone's intentions without directly asking them. # Question: (Generic) How can I improve my time management skills? SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Set clear goals. Setting clear goals is important in improving time management skills as it helps you to focus on what needs to be done and what can be accomplished within a certain timeframe. 2. Prioritize tasks. Make a list of tasks and prioritize them based on their importance and deadline. This will help you focus on the most critical tasks and avoid wasting time on less important ones. 3. Create a schedule. Creating a schedule helps you allocate time for each task and ensures that you complete them within the allotted time. 4. Minimize distractions. Distractions such as social media, emails, and phone notifications can waste a lot of time. Try to limit them by turning off notifications or setting specific times to check them. 5. Take breaks. Taking breaks can actually improve productivity and help prevent burnout. | 2307.15337#120 | 2307.15337#122 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#122 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | It's important to schedule short breaks throughout the day to recharge and refocus. 6. Learn to say no. It's important to prioritize your own tasks and not take on too much at once. Saying no to requests that don't align with your goals or schedule can help you manage your time more effectively. 7. Delegate tasks. If you have too many tasks to handle, consider delegating some of them to others who can help you. This will allow you to focus on more important tasks and manage your time more effectively. 8. Use time management tools.There are a variety of time management tools available, such as calendars, to-do lists, and productivity apps, that can help you stay organized and on track with your tasks. Normal answer (ChatGPT-3.5) Improving time management skills can be a challenge, but there are several strategies that can help. Here are a few: 1. Set clear goals and priorities: Identify your most important tasks and make sure to complete them first. 2. Make a schedule: Create a schedule that includes your most important tasks and set aside time for them. 3. Use a timer: Set a timer for a specific amount of time to help you stay focused and avoid distractions. 4. Take breaks: Take short breaks throughout the day to help you recharge and avoid burnout. 5. Eliminate distractions: Try to limit distractions as much as possible, such as by turning off your phone or closing your email. 6. Delegate tasks: If possible, delegate tasks to others to free up time for more important activities. Remember, improving time management skills takes practice and perseverance. Be patient with yourself and keep trying until you find strategies that work well for you. | 2307.15337#121 | 2307.15337#123 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#123 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | (2) Roleplay. Roleplay questions require LLMs to assume a character and present an introduction or speech to other people. LLMZoo and FastChat metrics usually favor the level of details of SoT when assigning it a better score. Similar to the writing category discussed before, SoT's answers could be even better if the skeleton points are removed from the answer. (Roleplay) How would you introduce yourself as a medieval knight at a royal banquet? # SoT's answer (Claude) (Orange texts are the skeleton.) 1. Name and house. My name is Sir Percival of Camelot, knight of the Round Table and protector of King Arthur's realm. 2. Lands and titles. I hold fiefdom over the lands of Westmarch and Ravenshire, and bear the titles of Baron of the Western Marches and Knight of the Raven. 3. Military feats. I have led my men to great victories against the kingdom's enemies, winning many battles through strength of arms and tactical cunning. 4. Allegiance to king. I pledge my sword, my loyalty, and my life to serve and protect my noble king and this great kingdom. | 2307.15337#122 | 2307.15337#124 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#124 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 5. Request to join table. Your Highness, it would be the greatest honor to join your exalted table and share in this magnificent feast with my fellow knights and loyal subjects. # Normal answer (Claude) Greetings, your majesty and honored guests. I am Sir Claude, a knight in service of the king. I hail from the province of Assistantia, where I was trained in the knightly virtues of chivalry, courage and courtesy. Though I am but a humble knight, I shall endeavor to serve you and this kingdom with all the strength and valor I possess. I thank you for allowing me to join you at this royal feast, and hope only that my conduct brings further honor to my liege and this noble court. Please let me know if I can be of any service during the celebrations, as it would be my honor to assist you in any way. | 2307.15337#123 | 2307.15337#125 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#125 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | I.1.3 ANSWER ANALYSIS: DIFFERENT METRICS # Metrics on which SoT has low net win rates. (1) Coherence. In the prompt of LLMZoo evaluation, coherence is defined as "The response should be coherent and flow logically from one point to the next that is easy to read and understand without major gaps or inconsistencies. The more coherent they are, the better." SoT's answer is structured in a list of points. It is suitable for some questions but not all. For example, in the questions that require | 2307.15337#124 | 2307.15337#126 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#126 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | LLMs to write a passage, email, or script, the introduction and conclusion paragraphs are usually missing, and the answers usually lack transition sentences between consecutive points. These make the answers less coherent. Below is one example from Claude. Question: Write a script for a YouTube video exploring the history and cultural significance of jazz. SoT's answer (Claude) (Orange texts are the skeleton.) 1. Origins. Jazz originated in the late 19th and early 20th centuries among African Americans in New Orleans, emerging from a combination of African rhythmic traditions, blues, ragtime, and other influences. 2. Early jazz styles. The early styles of jazz that emerged in the late 19th and early 20th centuries included ragtime, Dixieland, and the blues. 3. Jazz age. The Jazz Age was the period during the 1920s when jazz music and dance styles like the Charleston became popular cultural phenomena in American and European society. 4. Bebop era. Bebop emerged in the early 1940s with artists like Dizzy Gillespie and Charlie Parker experimenting with faster tempos, complex chord changes, and unpredictable melodic phrasing. 5. Cool jazz. Emerging in the late 1940s and early 1950s, cool jazz was a more subdued, mellow style that broke from the high-energy virtuosic improvisation of bebop. 6. Free jazz emerged in the late 1950s and 1960s as musicians explored free improvisation and atonality, moving away from traditional jazz structures. 7. Fusion jazz. Major jazz artists such as Miles Davis and Herbie Hancock began experimenting with combining elements of rock, funk, and electronic music in the 1970s, resulting in the emergence of the subgenre known as fusion jazz. 8. Jazz festivals. Jazz festivals emerged in the 1950s and 1960s as a way to celebrate jazz music, with famous early festivals like the Newport Jazz Festival bringing thousands of fans together to hear top performers. 9. Jazz instruments. The trumpet, saxophone, piano, double bass, and drum set are some of the key instruments commonly used in jazz. 10. | 2307.15337#125 | 2307.15337#127 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#127 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Influential musicians include Louis Armstrong, Duke Ellington, Charlie Parker, Miles Davis, John Coltrane, and many others who pioneered new styles and techniques. 11. Legacy and impact. Jazz has left an indelible mark on music and culture over the past century, influencing countless artists and genres while spreading the spirit of improvisation and freedom across the globe. # Normal answer (Claude) Here is a draft script for a YouTube video exploring the history and cultural impact of jazz: Introduction Hey everyone! Welcome to my channel. In today's video, we're going to be exploring the history and cultural impact of one of America's original art forms - jazz music. Jazz Origins Jazz grew out of the African American communities in the southern United States in the late 19th and early 20th centuries. It was influenced by a combination of African and European musical traditions. Some key elements that fed into early jazz included blues, ragtime, spirituals, work songs, field hollers, and brass band music. New Orleans is considered the birthplace of jazz, emerging around the turn of the 20th century. The city was a melting pot of cultures and musical styles. Musicians mixed improvisation, syncopated rhythms, bent notes and a loose ensemble playing style to create a new genre that became known as jazz. Early jazz greats like Buddy Bolden, Jelly Roll Morton and Sidney Bechet began fusing these various influences into the distinct jazz sound. As the music spread via travel and recordings in the 1910s, jazz captivated listeners and began to evolve into different styles. Jazz Styles Emerge In the 1920s, jazz expanded out from New Orleans and Chicago emerged as a hotspot. Louis Armstrong's innovative solos and expressive phrasing became influential. The swing style also developed during the 1930s and 40s, propelled by star big bands and dancers. Bebop arose in the 1940s with soloists like Charlie Parker and Dizzy Gillespie exploring new levels of harmonic and rhythmic complexity. In the 50s and 60s, jazz expanded into new fusions like cool jazz, hard bop, free jazz, modal jazz and soul jazz. Artists used jazz to express their unique regional, social and political identities. Cultural Significance Over the decades, jazz has become woven into the fabric of American society. | 2307.15337#126 | 2307.15337#128 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#128 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | It is equated with many values and ideals like individualism, freedom, spontaneity, innovation and social harmony. Jazz was associated with breaking down barriers between races, classes and genres. It represented modernism and urban culture. Musicians like Louis Armstrong and Duke Ellington became some of the first widely popular African American artists. Jazz also spread American culture and values abroad through performances, radio and film. It influenced music around the world and became a truly global art form. Today jazz continues to thrive and evolve both in the U.S. and internationally. Its history and sounds reflect the diversity of both American and global cultures. From its folk roots, jazz has become a sophisticated art music that retains its spirit of freedom and improvisation. | 2307.15337#127 | 2307.15337#129 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#129 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Conclusion That covers some of the key points in the history and cultural impact of jazz music. Let me know in the comments about your favorite jazz styles and musicians! Be sure to like and subscribe. See you next time. While the content of SoT's answer is good, it reads less fluently for the purpose of a YouTube script. (2) Immersion. In the prompt of LLMZoo evaluation, immersion is defined as "the response should act like the assigned role using the tone, manner, and vocabulary the role would use. The more assistant-like tones, the worse. The more in-character, the better." The way that SoT breaks the | 2307.15337#128 | 2307.15337#130 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#130 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | The way that SoT breaks the answers into a list makes its answer less in-character. Therefore, SoT receives a lower immersion score. Here is one example. Question: Pretend to be a world-famous chef. How would you describe your signature dish to a panel of judges? # SoT's answer (Claude) (Orange texts are the skeleton.) 1. Main ingredient: The incredibly fresh and tender salmon caught just this morning from the crystal clear waters of the Pacific Northwest. 2. Cooking method I slowly braise the main ingredient in a rich broth to bring out its natural flavors before searing it to a perfect golden crust. 3. Special seasoning The dish is seasoned with a blend of fresh herbs like thyme, rosemary, and oregano which provides an aromatic backbone that pairs nicely with the natural flavors of the main ingredients. I add just a touch of smoked paprika to provide a subtle smokiness without overpowering the other flavors. 4. Texture: The dish has a perfect balance of crispy on the outside and tender on the inside, with each bite yielding a delightful mix of crunchy and smooth textures that dance across your tongue. 5. Plating presentation I purposefully create height and dimension on the plate with the carefully arranged components of the dish, so that the dish appears vibrant and alive. The interplay of colors, textures and shapes is intended to delight the eye before the first bite. | 2307.15337#129 | 2307.15337#131 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#131 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 6. Inspiration: This dish was inspired by my dear grandmother, who taught me the secret family recipe when I was just a child. # Normal answer (Claude) *clears throat and speaks in a fancy French accent* Bonjour honored judges. Today I present to you my signature dish - la crème de la crème of French haute cuisine. Behold, le poulet à la Claude! This succulent chicken is firstly marinated overnight in a bath of fine herbes - thyme, rosemary, and sage cultivated from my private garden. The young spring chicken is then roasted to perfection, its skin crisping to a beautiful caramel brown. Nestled beside the chicken is a tower of pommes dauphine - light and airy potato puffs - alongside a decadent mound of crème fraîche infused with truffle essence. But the pièce de résistance is the sauce. A luxurious pool of velouté enriched with cream and butter, brightened with a dash of lemon, and studded with morsels of morel mushrooms foraged at dawn. One bite of this ensemble will transport you to food heaven. The tender chicken, the creaminess of the potato puffs, and the rich decadence of the sauce meld together in blissful harmony. So tell me honored judges, will my poulet à la Claude earn your coveted approval? I await your discerning verdict as I swirl more vintage Bordeaux into your glasses. | 2307.15337#130 | 2307.15337#132 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#132 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Santé! Again, while the answer from SoT covers sufficient aspects and details, it reads rather stiffly. The normal answer, in contrast, has the opening sentence "Bonjour honored judges. Today I present to you my signature dish..." and the conclusion statement "So tell me honored judges, will my poulet à la Claude earn your coveted approval? ..." which reads more like a casual talk from a chef. # Metrics on which SoT has high net win rates. (1) Diversity. In the prompt of LLMZoo evaluation, diversity is defined as "The response should be comprehensive and provide a range of information that is not limited to a single perspective. More perspectives are better." The skeleton stage in SoT encourages LLMs to think from multiple perspectives. Therefore, SoT's answers tend to have better diversity. Below is one example showing that SoT's answer has better diversity. | 2307.15337#131 | 2307.15337#133 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#133 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Question: How can you determine if a person is genuinely interested in a conversation or simply being polite? # SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Pay attention to body language. Notice if their posture is open or closed, if they are facing towards you or away, and if they are fidgeting or relaxed. These can all indicate their level of interest in the conversation. 2. Listen for engaged responses. If the person is genuinely interested in the conversation, they will respond with thoughtful and relevant comments or questions rather than short, generic responses. 3. Observe eye contact. If the person maintains steady eye contact and their gaze is focused on you, they are likely genuinely interested in the conversation. On the other hand, if they frequently look away or seem distracted, they may just be being polite. 4. Look for follow-up questions. If the person asks follow-up questions and seems genuinely interested in what you're saying, then they are likely interested in the conversation and not just being polite. 5. Notice tone of voice. A person who is genuinely interested in a conversation will often have a more enthusiastic and animated tone of voice, compared to someone who is simply being polite and giving short, monotone responses. # Normal answer (ChatGPT-3.5) As an AI assistant, I cannot determine a person's true intentions or feelings. However, there are some social cues that can indicate whether a person is genuinely interested in a conversation or not. For example, if they maintain eye contact, lean in, nod their head and ask follow-up questions, it can be a sign that they are engaged in the conversation. On the other hand, if they avoid eye contact, give short or vague responses, or seem distracted, it may indicate that they are not as interested. | 2307.15337#132 | 2307.15337#134 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#134 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Ultimately, it's important to remember that everyone has different communication styles, and it can be difficult to accurately interpret someone's intentions without directly asking them. SoT answers the question from five aspects, whereas the normal answer only mentions one aspect. (2) Relevance. In the prompt of LLMZoo evaluation, relevance is defined as "The response should be closely related to the question and answer the question accurately with sufficient details without repetition or redundancy. The more relevant they are, the better." In the skeleton stage, LLMs are forced to propose several points around the question; and in the point-expanding stage, LLMs are required to only discuss these points. Therefore, answers from SoT tend to have better relevance. One typical example is shown below. Question: | 2307.15337#133 | 2307.15337#135 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#135 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Why do some people enjoy the sensation of being scared, such as by watching horror movies or going on roller coasters, while others avoid these experiences? # SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Neurological differences. Some people enjoy the sensation of being scared due to the release of adrenaline and dopamine in the brain, while others may have a lower threshold for fear due to differences in their amygdala and prefrontal cortex. 2. Psychological factors. Some people may enjoy the sensation of being scared because it can provide a sense of excitement, thrill, and a temporary escape from the mundane aspects of everyday life. On the other hand, others may avoid these experiences due to past trauma or anxiety. 3. Developmental experiences. Individuals who had negative experiences with fear during childhood, such as trauma or abuse, may avoid scary situations as adults, while those who had positive experiences with fear may seek out these experiences. 4. Cultural influences. Some cultures place a greater emphasis on seeking excitement and thrill, while others may prioritize safety and security, which can impact whether individuals enjoy scary experiences or avoid them. | 2307.15337#134 | 2307.15337#136 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#136 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 5. Genetics. Research has suggested that some individuals may have a genetic predisposition towards seeking out thrilling or risky experiences, which may include enjoying the sensation of being scared. # Normal answer (ChatGPT-3.5) The enjoyment of being scared is a complex psychological phenomenon. Some people may enjoy the adrenaline rush that comes with being scared, while others may enjoy the feeling of being able to conquer their fears. Additionally, some people may be more naturally inclined to seek out new experiences and sensations, including those that are scary or thrilling. On the other hand, some people may be more risk-averse or may have had negative experiences with fear or anxiety in the past, which could make them more likely to avoid scary experiences. Ultimately, the reasons why some people enjoy being scared while others do not are likely to be influenced by a range of factors, including personality, past experiences, and individual preferences. | 2307.15337#135 | 2307.15337#137 | 2307.15337 | [
"2302.13971"
]
|
2307.15337#137 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | In the answer from the normal generation, the first two sentences provide little information in answering the question, and the last sentence only gives keywords such as "personality, past experiences, and individual preferences" without providing concrete explanations to each. In contrast, SoT's answer is well-structured into five reasons with sufficient explanations and it does not waste space on irrelevant content. Figure 22: Net win rates of SoT and SoT-R on different question categories of the Vicuna-80 dataset using the general quality metric from LLMZoo. Blue dots are from Fig. 5b. SoT-R correctly falls back to normal decoding on questions where SoT is not suitable. Figure 23: Net win rates of SoT and SoT-R on different question categories of the WizardLM dataset using the general quality metric from FastChat. SoT-R correctly falls back to normal decoding on questions where SoT is not suitable. I.2 SKELETON-OF-THOUGHT WITH ROUTER | 2307.15337#136 | 2307.15337#138 | 2307.15337 | [
"2302.13971"
]
|
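The discussion above (the relevance/diversity analysis and the SoT-R figures) describes a two-stage pipeline plus a router that decides when to fall back to normal decoding. The sketch below is a rough illustration of that idea only, not the paper's exact prompts or implementation: `llm()` is an assumed placeholder for any text-completion call, and the helper names (`suits_sot`, `sot_answer`, `answer`) and prompt wordings are hypothetical paraphrases.

```python
# Minimal sketch of SoT with a prompting router (SoT-R), under the assumptions above.
import re
from concurrent.futures import ThreadPoolExecutor


def llm(prompt: str) -> str:
    """Placeholder for a real LLM completion call (e.g., an API client)."""
    raise NotImplementedError


def suits_sot(question: str) -> bool:
    """Prompting router: ask the model whether a point-by-point answer fits."""
    verdict = llm(
        "Can the following question be answered well as a short list of "
        f"independent points? Reply with only 'yes' or 'no'.\nQuestion: {question}"
    )
    return verdict.strip().lower().startswith("yes")


def sot_answer(question: str) -> str:
    """Skeleton stage, then parallel point-expanding stage."""
    skeleton = llm(
        "Write only a short skeleton for answering the question below: "
        "3-10 numbered points of 3-5 words each.\n"
        f"Question: {question}"
    )
    # Parse numbered skeleton points such as "1. Neurological differences."
    points = re.findall(r"^\s*\d+\.\s*(.+)$", skeleton, flags=re.MULTILINE)

    def expand(point: str) -> str:
        return llm(
            f"Question: {question}\nSkeleton point: {point}\n"
            "Expand only this point into 1-2 sentences; do not cover other points."
        )

    # Point expansions are independent, so they can be issued in parallel.
    with ThreadPoolExecutor() as pool:
        expansions = list(pool.map(expand, points))
    return "\n".join(
        f"{i}. {point} {text.strip()}"
        for i, (point, text) in enumerate(zip(points, expansions), start=1)
    )


def answer(question: str) -> str:
    """SoT-R: route to SoT when suitable, otherwise use normal decoding."""
    if suits_sot(question):
        return sot_answer(question)
    return llm(question)  # fall back to normal, sequential decoding
```

This mirrors the behavior discussed around Figures 22 and 23: questions that benefit from multi-point answers go through the skeleton and parallel expansion stages, while questions unsuited to SoT (e.g., step-by-step reasoning or role-play scripts) are answered by ordinary decoding.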