Dataset fields: id (string, 12–15 chars), title (string, 8–162 chars), content (string, 1–17.6k chars), prechunk_id (string, 0–15 chars), postchunk_id (string, 0–15 chars), arxiv_id (string, 10 chars), references (list, length 1).
2307.15818#44
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Soricut. Pali: A jointly-scaled multilingual language-image model, 2023b. K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
2307.15818#43
2307.15818#45
2307.15818
[ "2304.02643" ]
2307.15818#45
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Z. J. Cui, Y. Wang, N. Muhammad, L. Pinto, et al. From play to policy: Conditional behavior generation from uncurated robot data. arXiv preprint arXiv:2210.10047, 2022. S. Dasari and A. Gupta. Transformers for one-shot visual imitation. In Conference on Robot Learning, pages 2071–2084. PMLR, 2021.
2307.15818#44
2307.15818#46
2307.15818
[ "2304.02643" ]
2307.15818#46
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, and C. Finn. Robonet: Large-scale multi-robot learning. In Conference on Robot Learning, 2019. M. Dehghani, J. Djolonga, B. Mustafa, P. Padlewski, J. Heek, J. Gilmer, A. Steiner, M. Caron, R. Geirhos, I. Alabdulmohsin, R. Jenatton, L. Beyer, M. Tschannen, A. Arnab, X. Wang, C. Riquelme, M. Minderer, J. Puigcerver, U. Evci, M. Kumar, S. van Steenkiste, G. F. Elsayed, A. Mahendran, F. Yu, A. Oliver, F. Huot, J. Bastings, M. P. Collier, A.
2307.15818#45
2307.15818#47
2307.15818
[ "2304.02643" ]
2307.15818#47
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Gritsenko, V. Birodkar, C. Vasconcelos, Y. Tay, T. Mensink, A. Kolesnikov, F. Pavetić, D. Tran, T. Kipf, M. Lučić, X. Zhai, D. Keysers, J. Harmsen, and N. Houlsby. Scaling vision transformers to 22 billion parameters, 2023. D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al.
2307.15818#46
2307.15818#48
2307.15818
[ "2304.02643" ]
2307.15818#48
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. M. Du, S. Nair, D. Sadigh, and C. Finn. Behavior retrieval: Few-shot imitation learning by querying unlabeled datasets. arXiv preprint arXiv:2304.08742, 2023a. Y. Du, K. Konyushkova, M. Denil, A. Raju, J. Landon, F. Hill, N. de Freitas, and S.
2307.15818#47
2307.15818#49
2307.15818
[ "2304.02643" ]
2307.15818#49
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Cabi. Vision-language models as success detectors. arXiv preprint arXiv:2303.07280, 2023b. C. Finn and S. Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2786–2793. IEEE, 2017. C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine.
2307.15818#48
2307.15818#50
2307.15818
[ "2304.02643" ]
2307.15818#50
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
One-shot visual imitation learning via meta-learning. In Conference on robot learning, pages 357–368. PMLR, 2017. R. A. Fisher. Design of experiments. British Medical Journal, 1(3923):554, 1936. S. Y. Gadre, M. Wortsman, G. Ilharco, L. Schmidt, and S. Song. Clip on wheels: Zero-shot object navigation as object localization and exploration. arXiv preprint arXiv:2203.10421, 2022.
2307.15818#49
2307.15818#51
2307.15818
[ "2304.02643" ]
2307.15818#51
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Z. Gan, L. Li, C. Li, L. Wang, Z. Liu, J. Gao, et al. Vision-language pre-training: Basics, recent advances, and future trends. Foundations and Trends® in Computer Graphics and Vision, 14(3–4):163–352, 2022. G. Ghiasi, X. Gu, Y. Cui, and T.-Y. Lin. Open-vocabulary image segmentation. arXiv preprint arXiv:2112.12143, 2021. K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang, M. Liu, X. Liu, M. Martin, T. Nagarajan, I. Radosavovic, S. K. Ramakrishnan, F. Ryan, J. Sharma, M. Wray, M. Xu, E. Z. Xu, C. Zhao, S. Bansal, D. Batra, V. Cartillier, S. Crane, T. Do, M. Doulaty, A. Erapalli, C. Feichtenhofer, A. Fragomeni, Q. Fu, A.
2307.15818#50
2307.15818#52
2307.15818
[ "2304.02643" ]
2307.15818#52
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Gebreselasie, C. Gonzalez, J. Hillis, X. Huang, Y. Huang, W. Jia, W. Khoo, J. Kolar, S. Kottur, A. Kumar, F. Landini, C. Li, Y. Li, Z. Li, K. Mangalam, R. Modhugu, J. Munro, T. Murrell, T. Nishiyasu, W. Price, P. R. Puentes, M. Ramazanova, L. Sari, K. Somasundaram, A. Southerland, Y. Sugano, R. Tao, M. Vo, Y. Wang, X. Wu, T. Yagi, Z. Zhao, Y. Zhu, P. Arbelaez, D. Crandall, D. Damen, G. M. Farinella, C. Fuegen, B. Ghanem, V. K. Ithapu, C. V. Jawahar, H. Joo, K. Kitani, H. Li, R. Newcombe, A. Oliva, H. S. Park, J. M. Rehg, Y. Sato, J. Shi, M. Z. Shou, A. Torralba, L. Torresani, M. Yan, and J.
2307.15818#51
2307.15818#53
2307.15818
[ "2304.02643" ]
2307.15818#53
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Malik. Ego4d: Around the world in 3,000 hours of egocentric video, 2022. X. Gu, T.-Y. Lin, W. Kuo, and Y. Cui. Open-vocabulary object detection via vision and language knowledge distillation. arXiv preprint arXiv:2104.13921, 2021. N. Hansen, R. Jangir, Y. Sun, G. Alenyà, P. Abbeel, A. A. Efros, L. Pinto, and X. Wang.
2307.15818#52
2307.15818#54
2307.15818
[ "2304.02643" ]
2307.15818#54
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Self-supervised policy adaptation during deployment. arXiv preprint arXiv:2007.04309, 2020. Y. Hao, H. Song, L. Dong, S. Huang, Z. Chi, W. Wang, S. Ma, and F. Wei. Language models are general-purpose interfaces. arXiv preprint arXiv:2206.06336, 2022.
2307.15818#53
2307.15818#55
2307.15818
[ "2304.02643" ]
2307.15818#55
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
F. Hill, S. Mokra, N. Wong, and T. Harley. Human instruction-following with deep reinforcement learning via transfer-learning from text. arXiv preprint arXiv:2005.09382, 2020. S. Huang, L. Dong, W. Wang, Y. Hao, S. Singhal, S. Ma, T. Lv, L. Cui, O. K. Mohammed, Q. Liu, et al.
2307.15818#54
2307.15818#56
2307.15818
[ "2304.02643" ]
2307.15818#56
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023. W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147. PMLR, 2022. S. James, M. Bloesch, and A. J. Davison.
2307.15818#55
2307.15818#57
2307.15818
[ "2304.02643" ]
2307.15818#57
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Task-embedded control networks for few-shot imitation learning. In Conference on robot learning, pages 783–795. PMLR, 2018. E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z: Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning, pages 991–1002. PMLR, 2021. Y. Jiang, A. Gupta, Z. Zhang, G. Wang, Y. Dou, Y. Chen, L. Fei-Fei, A. Anandkumar, Y. Zhu, and L. Fan.
2307.15818#56
2307.15818#58
2307.15818
[ "2304.02643" ]
2307.15818#58
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Vima: General robot manipulation with multimodal prompts. arXiv preprint arXiv:2210.03094, 2022. L. P. Kaelbling. The foundation of efficient robot learning. Science, 369(6506):915–916, 2020. S. Karamcheti, S. Nair, A. S. Chen, T. Kollar, C. Finn, D. Sadigh, and P. Liang.
2307.15818#57
2307.15818#59
2307.15818
[ "2304.02643" ]
2307.15818#59
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Language-driven representation learning for robotics. arXiv preprint arXiv:2302.12766, 2023. A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. I. Kostrikov, D. Yarats, and R.
2307.15818#58
2307.15818#60
2307.15818
[ "2304.02643" ]
2307.15818#60
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. arXiv preprint arXiv:2004.13649, 2020. M. Laskin, K. Lee, A. Stooke, L. Pinto, P. Abbeel, and A. Srinivas. Reinforcement learning with augmented data. Advances in neural information processing systems, 33:19884–19895, 2020a. M. Laskin, A. Srinivas, and P. Abbeel.
2307.15818#59
2307.15818#61
2307.15818
[ "2304.02643" ]
2307.15818#61
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Curl: Contrastive unsupervised representations for reinforcement learning. In International Conference on Machine Learning, pages 5639–5650. PMLR, 2020b. S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International journal of robotics research, 37(4-5):421–436, 2018. A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, et al.
2307.15818#60
2307.15818#62
2307.15818
[ "2304.02643" ]
2307.15818#62
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Solving quantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858, 2022. J. Li, D. Li, S. Savarese, and S. Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023. L. H. Li, M. Yatskar, D. Yin, C.-J. Hsieh, and K.-W. Chang.
2307.15818#61
2307.15818#63
2307.15818
[ "2304.02643" ]
2307.15818#63
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019. H. Liu, L. Lee, K. Lee, and P. Abbeel. Instruction-following agents with jointly pre-trained vision-language models. arXiv preprint arXiv:2210.13431, 2022.
2307.15818#62
2307.15818#64
2307.15818
[ "2304.02643" ]
2307.15818#64
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
J. Lu, D. Batra, D. Parikh, and S. Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32, 2019. C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data. arXiv preprint arXiv:2005.07648, 2020. C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P.
2307.15818#63
2307.15818#65
2307.15818
[ "2304.02643" ]
2307.15818#65
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Florence. Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407, 2022. Y. J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V. Kumar, and A. Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint arXiv:2210.00030, 2022. Y. J. Ma, W. Liang, V. Som, V. Kumar, A. Zhang, O. Bastani, and D.
2307.15818#64
2307.15818#66
2307.15818
[ "2304.02643" ]
2307.15818#66
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Jayaraman. Liv: Language-image representations and rewards for robotic control. arXiv preprint arXiv:2306.00958, 2023. J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312, 2017. A. Majumdar, K. Yadav, S. Arnaud, Y. J. Ma, C. Chen, S. Silwal, A. Jain, V.-P. Berges, P. Abbeel, J. Malik, et al. Where are we in the search for an artificial visual cortex for embodied intelligence? arXiv preprint arXiv:2303.18240, 2023a. A. Majumdar, K. Yadav, S. Arnaud, Y. J. Ma, C. Chen, S. Silwal, A. Jain, V.-P. Berges, P. Abbeel, J. Malik, et al. Where are we in the search for an artificial visual cortex for embodied intelligence? arXiv preprint arXiv:2303.18240, 2023b. O. Mees, L. Hermann, and W. Burgard.
2307.15818#65
2307.15818#67
2307.15818
[ "2304.02643" ]
2307.15818#67
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
What matters in language conditioned robotic imitation learning over unstructured data. IEEE Robotics and Automation Letters, 7(4):11205–11212, 2022. M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Mahendran, A. Arnab, M. Dehghani, Z. Shen, et al. Simple open-vocabulary object detection with vision transformers. arXiv preprint arXiv:2205.06230, 2022. Y. Mu, Q. Zhang, M. Hu, W. Wang, M. Ding, J. Jin, B. Wang, J. Dai, Y. Qiao, and P. Luo. Embodiedgpt: Vision-language pre-training via embodied chain of thought. arXiv preprint arXiv:2305.15021, 2023.
2307.15818#66
2307.15818#68
2307.15818
[ "2304.02643" ]
2307.15818#68
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
S. Nair, E. Mitchell, K. Chen, S. Savarese, C. Finn, et al. Learning language-conditioned robot behavior from offline data and crowd-sourced annotation. In Conference on Robot Learning, pages 1303–1315. PMLR, 2022a. S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3m:
2307.15818#67
2307.15818#69
2307.15818
[ "2304.02643" ]
2307.15818#69
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
A universal visual representation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022b. OpenAI. Gpt-4 technical report, 2023. J. Pari, N. M. Shafiullah, S. P. Arunachalam, and L. Pinto. The surprising effectiveness of representation learning for visual imitation. arXiv preprint arXiv:2112.01511, 2021. L. Pinto and A. Gupta.
2307.15818#68
2307.15818#70
2307.15818
[ "2304.02643" ]
2307.15818#70
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In 2016 IEEE international conference on robotics and automation (ICRA), pages 3406–3413. IEEE, 2016. S. Polu, J. M. Han, K. Zheng, M. Baksys, I. Babuschkin, and I. Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344, 2022.
2307.15818#69
2307.15818#71
2307.15818
[ "2304.02643" ]
2307.15818#71
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
V. H. Pong, M. Dalal, S. Lin, A. Nair, S. Bahl, and S. Levine. Skew-fit: State-covering self-supervised reinforcement learning. arXiv preprint arXiv:1903.03698, 2019. A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al.
2307.15818#70
2307.15818#72
2307.15818
[ "2304.02643" ]
2307.15818#72
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021. S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022. M. Ryoo, A. Piergiovanni, A. Arnab, M. Dehghani, and A. Angelova.
2307.15818#71
2307.15818#73
2307.15818
[ "2304.02643" ]
2307.15818#73
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Tokenlearner: Adaptive space-time tokenization for videos. Advances in Neural Information Processing Systems, 34:12786–12797, 2021. D. Shah, B. Osiński, b. ichter, and S. Levine. Lm-nav: Robotic navigation with large pre-trained models of language, vision, and action. In K. Liu, D. Kulic, and J. Ichnowski, editors, Proceedings of The 6th Conference on Robot Learning, volume 205 of Proceedings of Machine Learning Research, pages 492–
2307.15818#72
2307.15818#74
2307.15818
[ "2304.02643" ]
2307.15818#74
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
504. PMLR, 14–18 Dec 2023. URL https://proceedings.mlr.press/v205/shah23b.html. R. Shah and V. Kumar. Rrl: Resnet as representation for reinforcement learning. arXiv preprint arXiv:2107.03380, 2021. M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. In Proceedings of the 5th Conference on Robot Learning (CoRL), 2021.
2307.15818#73
2307.15818#75
2307.15818
[ "2304.02643" ]
2307.15818#75
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894–906. PMLR, 2022a. M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. arXiv preprint arXiv:2209.05451, 2022b. I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A.
2307.15818#74
2307.15818#76
2307.15818
[ "2304.02643" ]
2307.15818#76
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Garg. Progprompt: Generating situated robot task plans using large language models. In ICRA, 2023. M. H. Smith and L. S. Coles. Design of a low cost, general purpose robot. In IJCAI, pages 324–336, 1973. A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia, C. Finn, et al.
2307.15818#75
2307.15818#77
2307.15818
[ "2304.02643" ]
2307.15818#77
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Open-world object manipulation using pre-trained vision-language models. arXiv preprint arXiv:2303.00905, 2023. T. Sumers, K. Marino, A. Ahuja, R. Fergus, and I. Dasgupta. Distilling internet-scale vision-language models into embodied agents. arXiv preprint arXiv:2301.12507, 2023. Y. Tay, M. Dehghani, V. Q. Tran, X. Garcia, J. Wei, X. Wang, H. W. Chung, S. Shakeri, D. Bahri, T. Schuster, H. S. Zheng, D. Zhou, N. Houlsby, and D. Metzler. Ul2: Unifying language learning paradigms, 2023.
2307.15818#76
2307.15818#78
2307.15818
[ "2304.02643" ]
2307.15818#78
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor. Chatgpt for robotics: Design principles and model abilities. Microsoft Auton. Syst. Robot. Res, 2:20, 2023. J. Wang, Z. Yang, X. Hu, L. Li, K. Lin, Z. Gan, Z. Liu, C. Liu, and L. Wang. Git:
2307.15818#77
2307.15818#79
2307.15818
[ "2304.02643" ]
2307.15818#79
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022. J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
2307.15818#78
2307.15818#80
2307.15818
[ "2304.02643" ]
2307.15818#80
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
J. Wei, L. Hou, A. Lampinen, X. Chen, D. Huang, Y. Tay, X. Chen, Y. Lu, D. Zhou, T. Ma, and Q. V. Le. Symbol tuning improves in-context learning in language models, 2023. J. Wu, R. Antonova, A. Kan, M. Lepert, A. Zeng, S. Song, J. Bohg, S. Rusinkiewicz, and T. Funkhouser. Tidybot: Personalized robot assistance with large language models. arXiv preprint arXiv:2305.05658, 2023. T. Xiao, H. Chan, P. Sermanet, A. Wahid, A. Brohan, K. Hausman, S. Levine, and J. Tompson.
2307.15818#79
2307.15818#81
2307.15818
[ "2304.02643" ]
2307.15818#81
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Robotic skill acquisition via instruction augmentation with vision-language models. arXiv preprint arXiv:2211.11736, 2022a. T. Xiao, I. Radosavovic, T. Darrell, and J. Malik. Masked visual pre-training for motor control. arXiv preprint arXiv:2203.06173, 2022b. S. Young, D. Gandhi, S. Tulsiani, A. Gupta, P. Abbeel, and L. Pinto.
2307.15818#80
2307.15818#82
2307.15818
[ "2304.02643" ]
2307.15818#82
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Visual imitation made easy. In Conference on Robot Learning, pages 1992–2005. PMLR, 2021. K.-T. Yu, M. Bauza, N. Fazeli, and A. Rodriguez. More than a million ways to be pushed: a high-fidelity experimental dataset of planar pushing. In 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 30–37. IEEE, 2016. T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine.
2307.15818#81
2307.15818#83
2307.15818
[ "2304.02643" ]
2307.15818#83
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
One-shot imitation from observing humans via domain-adaptive meta-learning. arXiv preprint arXiv:1802.01557, 2018. X. Zhai, A. Kolesnikov, N. Houlsby, and L. Beyer. Scaling vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12104–12113, 2022. X. Zhang, Y. Ding, S. Amiri, H. Yang, A. Kaminski, C. Esselink, and S. Zhang.
2307.15818#82
2307.15818#84
2307.15818
[ "2304.02643" ]
2307.15818#84
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Grounding classical task planners via vision-language models. arXiv preprint arXiv:2304.08587, 2023. # A. Contributions • Training and Evaluations (designing and executing procedures for training models, evaluating models in simulation and the real world, running ablations for algorithm design choices): Yevgen Chebotar, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Alexander Herzog, Brian Ichter, Alex Irpan, Isabel Leal, Lisa Lee, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Michael Ryoo, Anikait Singh, Quan Vuong, Ayzaan Wahid, Paul Wohlhart, Fei Xia, Ted Xiao, and Tianhe Yu.
2307.15818#83
2307.15818#85
2307.15818
[ "2304.02643" ]
2307.15818#85
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
• Network Architecture (designing and implementing model network modules, working on tokenization of actions, enabling inference of the model networks during experiments): Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Danny Driess, Pete Florence, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Brian Ichter, Alex Irpan, Isabel Leal, Lisa Lee, Henryk Michalewski, Igor Mordatch, Kanishka Rao, Michael Ryoo, Anikait Singh, Quan Vuong, Ayzaan Wahid, Jialin Wu, Fei Xia, Ted Xiao, and Tianhe Yu.
2307.15818#84
2307.15818#86
2307.15818
[ "2304.02643" ]
2307.15818#86
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
• Data Collection (collecting data on real robots, running real robot evaluations, executing operations required for running real robots): Noah Brown, Justice Carbajal, Tianli Ding, Krista Reymann, Grecia Salazar, Pierre Sermanet, Jaspiar Singh, Huong Tran, Stefan Welker, and Sichun Xu. • Leadership (leading the project efforts, managing the project staff, advising on project directions): Yevgen Chebotar, Chelsea Finn, Karol Hausman, Brian Ichter, Sergey Levine, Yao Lu, Igor Mordatch, Kanishka Rao, Pannag Sanketi, Radu Soricut, Vincent Vanhoucke, and Tianhe Yu. • Paper (working on the paper manuscript, designing paper visualizations and figures): Yevgen Chebotar, Danny Driess, Chelsea Finn, Pete Florence, Karol Hausman, Brian Ichter, Lisa Lee, Sergey Levine, Igor Mordatch, Karl Pertsch, Quan Vuong, Fei Xia, Ted Xiao, and Tianhe Yu. • Infrastructure (working on infrastructure and code base backbone needed for training models, running experiments, storing and accessing data): Anthony Brohan, Yevgen Chebotar, Danny Driess, Kehang Han, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Yao Lu, Igor Mordatch, Quan Vuong, Ayzaan Wahid, Fei Xia, Ted Xiao, Peng Xu, and Tianhe Yu. # B. Datasets The vision-language datasets are based on the dataset mixtures from Chen et al. (2023b) and Driess et al. (2023). The bulk of this data consists of the WebLI dataset, which is around 10B image-text pairs across 109 languages, filtered to the top 10% scoring cross-modal similarity examples to give 1B training examples. Many other captioning and vision question answering datasets are included as well, and more info on the dataset mixtures can be found in Chen et al. (2023b) for RT-2-PaLI-X, and Driess et al. (2023) for RT-2-PaLM-E.
2307.15818#85
2307.15818#87
2307.15818
[ "2304.02643" ]
2307.15818#87
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
When co-fine-tuning RT-2-PaLI-X, we do not use the Episodic WebLI dataset described by Chen et al. (2023a). The robotics dataset is based on the dataset from Brohan et al. (2022). This consists of demonstration episodes collected with a mobile manipulation robot. Each demonstration is annotated with a natural language instruction from one of seven skills: "Pick Object", "Move Object Near Object", "Place Object Upright", "Knock Object Over", "Open Drawer", "Close Drawer", "Place Object into Receptacle", and "Pick Object from Receptacle and place on the counter".
2307.15818#86
2307.15818#88
2307.15818
[ "2304.02643" ]
2307.15818#88
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Further details can be found in Brohan et al. (2022). RT-2-PaLI-X weights the robotics dataset such that it makes up about 50% of the training mixture for co-fine-tuning. RT-2-PaLM-E weights the robotics dataset to be about 66% of the training mixture. For the results on Language-Table in Table 1, our model is trained on the Language-Table datasets from Lynch et al. (2022). Our model is co-fine-tuned on several prediction tasks: (1) predict the action, given two consecutive image frames and a text instruction; (2) predict the instruction, given image frames; (3) predict the robot arm position, given image frames; (4) predict the number of timesteps between given image frames; and (5) predict whether the task was successful, given image frames and the instruction.
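A minimal sketch of how such a fixed robot-data fraction could be applied when sampling co-fine-tuning batches; the data sources and the `sample_cofinetuning_batch` helper are illustrative stand-ins, not the paper's actual data pipeline:

```python
import random

# Hypothetical stand-ins for the two data sources: the web-scale
# vision-language mixture and the robot demonstration dataset.
web_examples = ["web_example"] * 1000
robot_examples = ["robot_example"] * 1000

def sample_cofinetuning_batch(batch_size, robot_fraction=0.5):
    """Draw a batch in which roughly `robot_fraction` of the examples come from
    the robotics dataset (~0.5 for RT-2-PaLI-X, ~0.66 for RT-2-PaLM-E)."""
    batch = []
    for _ in range(batch_size):
        source = robot_examples if random.random() < robot_fraction else web_examples
        batch.append(random.choice(source))
    return batch

batch = sample_cofinetuning_batch(batch_size=8, robot_fraction=0.5)
```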
2307.15818#87
2307.15818#89
2307.15818
[ "2304.02643" ]
2307.15818#89
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
# C. Baselines We compare our method to multiple state-of-the-art baselines that challenge different aspects of our method. All of the baselines use the exact same robotic data. • RT-1: Robotics Transformer 1 Brohan et al. (2022) is a transformer-based model that achieved state-of-the-art performance on a similar suite of tasks when it was published. The model does not use VLM-based pre-training so it provides an important data point demonstrating whether VLM-based pre-training matters.
2307.15818#88
2307.15818#90
2307.15818
[ "2304.02643" ]
2307.15818#90
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
• VC-1: VC-1 Majumdar et al. (2023a) is a visual foundation model that uses pre-trained visual representations specifically designed for robotics tasks. We use pre-trained representations from the VC-1 ViT-L model. Since VC-1 does not include language conditioning, we add this by separately embedding the language command via Universal Sentence Encoder Cer et al. (2018) to enable comparison to our method. In particular, we concatenate the resulting language embedding tokens to the image tokens produced by VC-1, and pass the concatenated token sequences through token learner Ryoo et al. (2021). The token sequences produced by token learner are then consumed by an RT-1 decoder-only transformer model to predict robot action tokens. We train the VC-1 baseline end-to-end and unfreeze the VC-1 weights during training, since this led to far better results than using frozen VC-1 weights.
2307.15818#89
2307.15818#91
2307.15818
[ "2304.02643" ]
2307.15818#91
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
• R3M: R3M Nair et al. (2022b) is a similar method to VC-1 in that R3M uses pre-trained visual-language representations to improve policy training. In this case the authors use the Ego4D dataset Grauman et al. (2022) of human activities to learn the representation that is used by the policy. Both VC-1 and R3M test different state-of-the-art representation learning methods as an alternative to using a VLM. To obtain a language-conditioned policy from the R3M pretrained representation, we follow the same procedure as described above for VC-1, except we use the R3M ResNet50 model to obtain the image tokens, and unfreeze it during training.
2307.15818#90
2307.15818#92
2307.15818
[ "2304.02643" ]
2307.15818#92
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
• MOO: MOO Stone et al. (2023) is an object-centric approach, where a VLM is first used to specify the object of interest in the form of a single colored pixel in the original image. This pixel-modified image is then used to train an end-to-end policy to accomplish a set of manipulation tasks. This baseline corresponds to a situation where a VLM is used as a separate module that enhances perception but its representations are not used for policy learning. # D. VLMs for RT-2 The PaLI-X model architecture consists of a ViT-22B Dehghani et al. (2023) to process images, which can accept sequences of n images, leading to n × k tokens per image, where k is the number of patches per image. The image tokens, after passing through a projection layer, are then consumed by an encoder-decoder backbone of 32B parameters and 50 layers, similar to UL2 Tay et al. (2023), which jointly processes text and images as embeddings to generate output tokens in an auto-regressive manner.
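A shape-level sketch of the input path just described (image patches giving n × k tokens, a projection into the backbone width, then joint consumption with text embeddings); all dimensions below are made-up placeholders rather than the real PaLI-X sizes:

```python
import torch

n_images, k_patches, vit_dim, text_len, d_model = 2, 256, 1536, 32, 4096  # placeholder sizes

image_tokens = torch.randn(1, n_images * k_patches, vit_dim)   # ViT output: n x k tokens in total
projection = torch.nn.Linear(vit_dim, d_model)                 # projection layer into the backbone width
text_embeddings = torch.randn(1, text_len, d_model)            # e.g. embedded "Answer in <lang>: question"

# The encoder-decoder backbone jointly processes image and text embeddings,
# then generates output tokens auto-regressively.
multimodal_input = torch.cat([projection(image_tokens), text_embeddings], dim=1)
print(multimodal_input.shape)  # (1, n_images * k_patches + text_len, d_model)
```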
2307.15818#91
2307.15818#93
2307.15818
[ "2304.02643" ]
2307.15818#93
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
The text input usually consists of the type of task and any additional context (e.g., "Generate caption in ⟨lang⟩" for captioning tasks or "Answer in ⟨lang⟩: question" for VQA tasks). The PaLI-3B model trained on Language-Table (Table 1) uses a smaller ViT-G/14 (Zhai et al., 2022) (2B parameters) to process images, and UL2-3B (Tay et al., 2023) for the encoder-decoder network. The PaLM-E model is based on a decoder-only LLM that projects robot data such as images and text into the language token space and outputs text such as high-level plans. In the case of the used PaLM-E-12B, the visual model used to project images to the language embedding space is a ViT-4B Chen et al. (2023b). The concatenation of continuous variables to textual input allows PaLM-E to be fully multimodal, accepting a wide variety of inputs such as multiple sensor modalities, object-centric representations, scene representations and object entity referrals. # E. Training Details
2307.15818#92
2307.15818#94
2307.15818
[ "2304.02643" ]
2307.15818#94
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
We perform co-fine-tuning on pre-trained models from the PaLI-X (Chen et al., 2023a) 5B & 55B model, PaLI (Chen et al., 2023b) 3B model and the PaLM-E (Driess et al., 2023) 12B model. For RT-2-PaLI-X-55B, we use learning rate 1e-3 and batch size 2048 and co-fine-tune the model for 80K gradient steps, whereas for RT-2-PaLI-X-5B, we use the same learning rate and batch size and co-fine-tune the model for 270K gradient steps. For RT-2-PaLM-E-12B, we use learning rate 4e-4 and batch size 512 to co-fine-tune the model for 1M gradient steps. Both models are trained with the next token prediction objective, which corresponds to the behavior cloning loss in robot learning. For the RT-2-PaLI-3B model used for Language-Table results in Table 1, we use learning rate 1e-3 and batch size 128 to co-fine-tune the model for 300K gradient steps.
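For reference, the co-fine-tuning hyperparameters reported above, collected into one structure (a plain summary of the stated numbers, not configuration code from the actual training setup):

```python
RT2_COFINETUNING_HPARAMS = {
    "RT-2-PaLI-X-55B": {"learning_rate": 1e-3, "batch_size": 2048, "gradient_steps": 80_000},
    "RT-2-PaLI-X-5B":  {"learning_rate": 1e-3, "batch_size": 2048, "gradient_steps": 270_000},
    "RT-2-PaLM-E-12B": {"learning_rate": 4e-4, "batch_size": 512,  "gradient_steps": 1_000_000},
    # PaLI-3B model used for the Language-Table results in Table 1:
    "RT-2-PaLI-3B":    {"learning_rate": 1e-3, "batch_size": 128,  "gradient_steps": 300_000},
}
```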
2307.15818#93
2307.15818#95
2307.15818
[ "2304.02643" ]
2307.15818#95
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
# F. Evaluation Details # F.1. Evaluation Scenarios For studying the emergent capabilities of RT-2 in a quantitative manner, we study various challenging semantic evaluation scenarios that aim to measure capabilities such as reasoning, symbol understanding, and human recognition. A visual overview of a subset of these scenes is provided in Figure 8, and the full list of instructions used for quantitative evaluation is shown in Table 3. # F.2. Evaluation Instructions Table 2 lists natural language instructions used in model evaluations for unseen objects, backgrounds, and environments. Each instruction was run between 1-5 times, depending on the number of total instructions in that evaluation set. Table 3 lists natural language instructions used to evaluate quantitative emergent evals. Each instruction was run 5 times.
2307.15818#94
2307.15818#96
2307.15818
[ "2304.02643" ]
2307.15818#96
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
[Figure 8 graphic, panel (a) Reasoning, with example instructions; see Table 3 in Appendix F.2 for the full list of instructions.]
2307.15818#95
2307.15818#97
2307.15818
[ "2304.02643" ]
2307.15818#97
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Figure 8 | An overview of some of the evaluation scenarios used to study the emergent capabilities of RT-2. They focus on three broad categories, which are (a) reasoning, (b) symbol understanding, and (c) human recognition. The visualized instructions are a subset of the full instructions, which are listed in Appendix F.2.

| Task Group | Tasks |
| --- | --- |
| Symbol Understanding: Symbol 1 | move coke can near X, move coke can near 3, move coke can near Y |
| Symbol Understanding: Symbol 2 | move apple to tree, move apple to duck, move apple to apple, move apple to matching card |
| Symbol Understanding: Symbol 3 | put coke can close to dog, push coke can on top of heart, place coke can above star |
| Reasoning: Math | move banana to 2, move banna near the sum of two plus one, move banana near the answer of three times two, move banana near the smallest number |
| Reasoning: Logos | move cup to google, move cup to android, move cup to youtube, move cup to a search engine, move cup to a phone |
| Reasoning: Nutrition | get me a healthy snack, pick a healthy drink, pick up a sweet drink, move the healthy snack to the healthy drink, pick up a salty snack |
| Reasoning: Color and Multilingual | move apple to cup with same color, move apple to cup with different color, move green chips to matching color cup, move apple to vaso verde, Bewegen Sie den Apfel in die rote Tasse, move green chips to vaso rojo, mueve la manzana al vaso verde, déplacer les frites verts dans la tasse rouge |
| Person Recognition: Celebrities | move coke can to taylor swift, move coke can to tom cruise, move coke can to snoop dog |
| Person Recognition: CelebA | move coke can to person with glasses, move coke can to the man with white hair, move coke can to the brunette lady |
2307.15818#96
2307.15818#98
2307.15818
[ "2304.02643" ]
2307.15818#98
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Table 3 | Natural language instructions used for quantitative emergent evaluations.

# G. Example Failure Cases

In Fig. 9 we provide examples of a notable type of failure case in the Language Table setting, with the RT-2 model not generalizing to unseen object dynamics. In these cases, although the model is able to correctly attend to the language instruction and move to the first correct object, it is not able to control the challenging dynamics of these objects, which are significantly different than the small set of block objects that have been seen in this environment Lynch et al. (2022). The pen simply rolls off the table (Fig. 9, left), while the banana's center-of-mass is far from where the robot makes contact (Fig. 9, right). We note that pushing dynamics are notoriously difficult to predict and control Yu et al. (2016). We hypothesize that greater generalization in robot-environment interaction dynamics may be possible by further scaling the datasets across diverse environments and objects; for example, in this case, datasets that include similar types of more diverse pushing dynamics Dasari et al. (2019). In addition, despite RT-2's promising performance on real-world manipulation tasks in qualitative and quantitative emergent evaluations, we still find numerous notable failure cases. For example, with the current training dataset composition and training method, RT-2 seemed to perform poorly at:
• Grasping objects by specific parts, such as the handle
• Novel motions beyond what was seen in the robot data, such as wiping with a towel or tool use
• Dexterous or precise motions, such as folding a towel
• Extended reasoning requiring multiple layers of indirection
2307.15818#97
2307.15818#99
2307.15818
[ "2304.02643" ]
2307.15818#99
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
[Figure 9 graphic: two rollouts with the instructions "Push the red marker to the video game controller" (left) and "Push the banana to the apple" (right).]
Figure 9 | Qualitative example failure cases in the real world, failing to generalize to unseen object dynamics.

# H. Quantitative Experimental Results

# H.1. Overall Performance, for Section 4.1

Table 4 lists our quantitative overall evaluation results. We find that RT-2 performs as well or better than baselines on seen tasks and significantly outperforms baselines on generalization to unseen objects, backgrounds, and environments.

| Model | Seen Tasks | Unseen Objects (Easy) | Unseen Objects (Hard) | Unseen Backgrounds (Easy) | Unseen Backgrounds (Hard) | Unseen Environments (Easy) | Unseen Environments (Hard) | Unseen Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R3M (Nair et al., 2022b) | 45 | 32 | 14 | 13 | 9 | 0 | 2 | 12 |
| VC-1 (Majumdar et al., 2023a) | 63 | 34 | 10 | 13 | 3 | 0 | 0 | 10 |
| RT-1 (Brohan et al., 2022) | 92 | 31 | 43 | 71 | 9 | 26 | 14 | 32 |
| MOO (Stone et al., 2023) | 75 | 58 | 48 | 38 | 41 | 19 | 3 | 35 |
| RT-2-PaLI-X-55B (ours) | 91 | 70 | 62 | 96 | 48 | 63 | 35 | 62 |
| RT-2-PaLM-E-12B¹ (ours) | 93 | 84 | 76 | 75 | 71 | 36 | 33 | 62 |

Table 4 | Overall performance of two instantiations of RT-2 and baselines across seen training tasks as well as unseen evaluations measuring generalization to novel objects, novel backgrounds, and novel environments.
2307.15818#98
2307.15818#100
2307.15818
[ "2304.02643" ]
2307.15818#100
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
# H.2. Emergent Evaluation, for Section 4.2

Table 5 lists all of our quantitative emergent evaluation results. We find that RT-2 performs 2x to 3x better than RT-1 on these new instructions, without any additional robotic demonstrations. This showcases how our method allows us to leverage capabilities from pretraining on web-scale vision-language datasets.

| Model | Symbol 1 | Symbol 2 | Symbol 3 | Symbol Avg. | Math | Logos | Nutrition | Color/Multilingual | Reasoning Avg. | Celebrities | CelebA | Person Avg. | Overall Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VC-1 (Majumdar et al., 2023a) | 7 | 25 | 0 | 11 | 0 | 8 | 20 | 13 | 10 | 20 | 7 | 13 | 11 |
| RT-1 (Brohan et al., 2022) | 27 | 20 | 0 | 16 | 5 | 0 | 32 | 28 | 16 | 20 | 20 | 20 | 17 |
| RT-2-PaLI-X-55B (ours) | 93 | 60 | 93 | 82 | 25 | 52 | 48 | 58 | 46 | 53 | 53 | 53 | 60 |
| RT-2-PaLM-E-12B (ours) | 67 | 20 | 20 | 36 | 35 | 56 | 44 | 35 | 43 | 33 | 53 | 43 | 40 |

Table 5 | Performance of RT-2 and baselines on quantitative emergent evaluations. Columns are grouped into Symbol Understanding (Symbol 1–3), Reasoning (Math, Logos, Nutrition, Color/Multilingual), and Person Recognition (Celebrities, CelebA), each with its per-group average, followed by the overall average.

# H.3. Size and Training Ablations, for Section 4.3

Table 6 details quantitative results for ablations across model size and training approach. Across each, we see that model size plays an important role in performance and that co-fine-tuning outperforms fine-tuning, which outperforms training from scratch.
2307.15818#99
2307.15818#101
2307.15818
[ "2304.02643" ]
2307.15818#101
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
| Model | Size | Training | Unseen Objects (Easy) | Unseen Objects (Hard) | Unseen Backgrounds (Easy) | Unseen Backgrounds (Hard) | Unseen Environments (Easy) | Unseen Environments (Hard) | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RT-2-PaLI-X | 5B | from scratch | 0 | 10 | 46 | 0 | 0 | 0 | 9 |
| RT-2-PaLI-X | 5B | fine-tuning | 24 | 38 | 79 | 50 | 36 | 23 | 42 |
| RT-2-PaLI-X | 5B | co-fine-tuning | 60 | 38 | 67 | 29 | 44 | 24 | 44 |
| RT-2-PaLI-X | 55B | fine-tuning | 60 | 62 | 75 | 38 | 57 | 19 | 52 |
| RT-2-PaLI-X | 55B | co-fine-tuning | 70 | 62 | 96 | 48 | 63 | 35 | 63 |

Table 6 | Ablations of RT-2 showcasing the impact of parameter count and training strategy on generalization.

# I. Additional Chain-Of-Thought Reasoning Results

We present additional examples of chain-of-thought reasoning rollouts accomplished with RT-2-PaLM-E, as described in Sec. 4.4, in Figure 10.

¹ The original pre-training data mixture used in PaLM-E-12B (as described in Driess et al. (2023)) includes robot images for high-level VQA planning tasks that can be similar to images encountered in generalization scenarios. However, none of those training examples include low-level actions that are evaluated in this experiment.
2307.15818#100
2307.15818#102
2307.15818
[ "2304.02643" ]
2307.15818#102
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
[Figure 10 graphic showing four rollouts with the model's generated plans:
User: "Pick up the object that is different from all other objects" → Plan: Pick rxbar chocolate
User: "Move the green objects together" → Plan: Move green can near green rice chip bag
User: "Move the blue objects together" → Plan: Move blue chip bag near pepsi
User: "I am sleepy, bring me a drink to help." → Plan: Pick redbull can]
Figure 10 | Additional examples of RT-2 with chain-of-thought reasoning.
2307.15818#101
2307.15818#103
2307.15818
[ "2304.02643" ]
2307.15818#103
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Unseen Objects (Easy): pick banana, move banana near coke can, move orange can near banana, pick oreo, move oreo near apple, move redbull can near oreo, pick pear, pick coconut water, move pear near coconut water, move pepsi can near pear
Unseen Objects (Hard): pick cold brew can, pick large orange plate, pick chew toy, pick large tennis ball, pick bird ornament, pick fish toy, pick ginger lemon kombucha, pick egg separator, pick wrist watch, pick green sprite can, pick blue microfiber cloth, pick yellow pear, pick pretzel chip bag, pick disinfectant wipes, pick pineapple hint water, pick green cup, pick pickle snack, pick small blue plate, pick small orange rolling pin, pick octopus toy, pick catnip toy
Unseen Backgrounds (Easy): pick green jalapeno chip bag, pick orange can, pick pepsi can, pick 7up can, pick apple, pick blue chip bag, pick orange, pick 7up can, move orange near sink, pick coke can, pick sponge, pick rxbar blueberry
Unseen Backgrounds (Hard): pick wrist watch, pick egg separator, pick green sprite can, pick blue microfiber cloth, pick yellow pear, pick pretzel chip bag, pick disinfectant wipes, pick pineapple hint water, pick green cup, pick pickle snack, pick small blue plate, pick small orange rolling pin, pick octopus toy, pick catnip toy, pick swedish fish bag, pick large green rolling pin, pick black sunglasses
Unseen Environments (Easy): pick coke can, pick apple, pick rxbar blueberry, move apple near coke can, move rxbar blueberry near apple, move coke can near rxbar blueberry, pick blue plastic bottle, pick sponge, pick blue chip bag, move sponge near blue plastic bottle, move blue chip bag near sponge, move blue plastic bottle near blue chip bag, move coke can near white mug, move sponge near white mug, move coke can near yellow bowl, move sponge near yellow bowl, move coke can near green cloth, move sponge near green cloth, move coke can near plate, move sponge near plate, move coke can near spoon, move sponge near spoon, move coke can near orange cup, move sponge near orange cup, pick white mug, pick yellow bowl, pick green cloth, move white mug near sponge, move yellow bowl near sponge, move green cloth near sponge, pick plate, pick spoon, pick orange cup, move plate near sponge, move spoon near sponge, move orange cup near sponge, put coke can into sink, drop coke can into sink, push coke can into sink, put sponge into sink, drop sponge into sink, push sponge into sink, put green cloth into sink, drop green cloth into sink, push green cloth into sink
Unseen Environments (Hard):
2307.15818#102
2307.15818#104
2307.15818
[ "2304.02643" ]
2307.15818#104
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
pick coke can, pick apple, pick rxbar blueberry, move apple near coke can, move rxbar blueberry near apple, move coke can near rxbar blueberry, move coke can near stapler, move apple near stapler, move coke can near keyboard, move apple near keyboard, move coke can near tissue box, move apple near tissue box, move coke can near papers, move apple near papers, move coke can near mouse, move apple near mouse, move coke can near book, move apple near book, pick marker, pick stapler, pick mouse, move marker near apple, move stapler near apple, move mouse near apple, push coke can to the left, push coke can to the right, push sponge to the left, push sponge to the right, push tissue box to the left, push tissue box to the right, point at coke can, point at sponge, point at tissue box

Table 2 | Natural language instructions used for evaluations testing controlled distribution shifts along the dimension of novel objects, novel environments, and novel backgrounds. For each category, we introduce evaluation settings with smaller distribution shifts as well as larger distribution shifts. A visualization of these scenarios is shown in Figure 3.
2307.15818#103
2307.15818#105
2307.15818
[ "2304.02643" ]
2307.15818#105
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
2307.15818#104
2307.15818
[ "2304.02643" ]
2307.15337#0
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
arXiv:2307.15337v2 [cs.CL] 8 Oct 2023 SKELETON-OF-THOUGHT: LARGE LANGUAGE MODELS CAN DO PARALLEL DECODING Xuefei Ning1∗ [email protected] Zinan Lin2∗ [email protected] Zixuan Zhou1∗ [email protected] Zifu Wang3 [email protected] Huazhong Yang1 [email protected] Yu Wang1 [email protected] 1 Department of Electronic Engineering, Tsinghua University, Beijing, China 2 Microsoft Research, Redmond, Washington, USA 3 ESAT-PSI, KU Leuven, Leuven, Belgium
2307.15337#1
2307.15337
[ "2302.13971" ]
2307.15337#1
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Website: https://sites.google.com/view/sot-llm # ABSTRACT This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-ups across 12 LLMs, but it can also potentially improve the answer quality on several question categories. SoT is an initial attempt at data-centric optimization for inference efficiency, and further underscores the potential of pushing LLMs to think more like a human for answer quality. # 1 INTRODUCTION Large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023a; Du et al., 2022; OpenAI, 2023; Zheng et al., 2023) have shown exceptional performance in natural language processing and chatbot systems. However, the inference process of the state-of-the-art LLMs is slow, hindering their interactive use. For example, it takes 22 seconds for Claude (Anthropic, 2023) (accessed through Slack API) and 43 seconds for Vicuna-33B V1.3 (a 33B LLaMA-based model, running locally on one NVIDIA A100 GPU) to answer the question in Fig. 1. We conclude three major causes of LLMs' slow inference: (1) A large model size requires a large amount of memory, memory access, and computation. For example, the FP16 weights of 175B GPT-3 take 350GB memory, which means at least 5×80GB A100 GPUs are needed to keep the model in GPU memory. Even with enough GPUs, the heavy memory access and computation slow down the inference. (2) The attention operation in the prevailing transformer architecture is I/O bounded and has a quadratic memory and computation complexity in sequence length. (3) The sequential decoding approach in inference generates tokens one by one. This approach introduces a significant
2307.15337#0
2307.15337#2
2307.15337
[ "2302.13971" ]
2307.15337#2
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
∗ Equal contribution. † The main updates in arXiv V2 are as follows: (1) Add the quality and efficiency evaluation of SoT on GPT-4. (2) Use GPT-4 as the judge for answer quality evaluation. The old results with ChatGPT-3.5 as the judge are moved to App. I.3. (3) Add the SoT with Router (SoT-R) method (§ 4) which adaptively triggers SoT on suitable questions. (4) Move detailed answer analysis to the appendices.
2307.15337#1
2307.15337#3
2307.15337
[ "2302.13971" ]
2307.15337#3
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
[Figure 1 graphic: the left panel contrasts normal sequential decoding with SoT's skeleton and point-expanding stages; the right panel is a scatter plot of net win rate versus speed-up for the evaluated models, with (1.0, 0.0) marking normal generation.]
Figure 1: Left: An illustration of Skeleton-of-Thought (SoT). Instead of producing answers sequentially, SoT produces different parts of answers in parallel. In more detail, given the question, SoT first prompts the LLM to give out the skeleton, then conducts batched decoding or parallel API calls to expand multiple points in parallel, and finally aggregates the outputs to get the final answer.
2307.15337#2
2307.15337#4
2307.15337
[ "2302.13971" ]
2307.15337#4
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Right: The net win rates and speed-ups of SoT with router (SoT-R) compared to normal generation on Vicuna-80. The net win rate is the difference between the fraction of questions that SoT-R has better and worse answers than normal generation. The speed-up is the ratio between the latency of normal and SoT-R generation. (1.0, 0.0) represents normal generation. Higher is better on both axes. For most models, SoT-R not only accelerates the generation but also improves the quality of the answers (evaluated with FastChat metric (Zheng et al., 2023)). See § 3.2 and 4 for more details.
inference latency since the generation of tokens cannot be parallelized. There is a bunch of literature addressing the first two axes: large model size (Xiao et al., 2022; Frantar et al., 2022; Lin et al., 2023; Sheng et al., 2023; Wang et al., 2021) and attention operation (Kitaev et al., 2020; Wang et al., 2020; Dao et al., 2022; Zaheer et al., 2020; Chen et al., 2023b). These works either compress/redesign the model (Xiao et al., 2022; Frantar et al., 2022; Lin et al., 2023; Kitaev et al., 2020; Wang et al., 2020; Dao et al., 2022; Zaheer et al., 2020) or redesign the serving system (Sheng et al., 2023; Chen et al., 2023b) and hardware (Wang et al., 2021). In contrast to prior work, we tackle the third axis and question the common assumption that LLMs have to do fully sequential decoding. We show the feasibility of parallel decoding of off-the-shelf LLMs without any changes to their model, system, or hardware. For instance, for the question in Fig. 1, we can reduce the latency from 22 seconds to 12 seconds (1.83× speed-up) with Claude, and from 43 seconds to 16 seconds (2.69× speed-up) with Vicuna-33B V1.3 on an NVIDIA A100.
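As a small worked check of the quantities quoted above (the metric definitions follow the Figure 1 caption; the helper names are ours):

```python
def speed_up(normal_latency_s, sot_latency_s):
    # Speed-up is the ratio between the latency of normal and SoT generation.
    return normal_latency_s / sot_latency_s

def net_win_rate(num_better, num_worse, num_total):
    # Fraction of questions where SoT(-R) answers are better, minus the fraction where they are worse.
    return (num_better - num_worse) / num_total

print(speed_up(22, 12))  # ~1.83 for Claude on the Fig. 1 question
print(speed_up(43, 16))  # ~2.69 for Vicuna-33B V1.3 (exactly 2.6875)
```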
2307.15337#3
2307.15337#5
2307.15337
[ "2302.13971" ]
2307.15337#5
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
The idea stems from reflecting on how humans ourselves answer questions. Humans do not always think about questions and write answers in a sequential fashion. In contrast, for many question types, we first derive the skeleton according to some protocols and strategies, and then add evidence and details to refine and explicate each point. This is especially the case on formal occasions like offering consultancy, taking tests, writing papers, and so on. Can we make LLMs think in the same way? To this end, we propose Skeleton-of-Thought (SoT). Specifically, as shown in Fig. 1, we guide the LLM to derive a skeleton first by itself. Based on the skeleton, the LLMs can complete each point in parallel so that we get a speed-up. SoT can be utilized to accelerate both open-source models with batched decoding and API-based models with parallel API calls. To make the overall solution more practical, we also design an extension, SoT with router (SoT-R), which employs a router to only trigger SoT for suitable questions. We test SoT on 12 recently released LLMs. Not only does SoT provide considerable speed-ups (up to 2.39×), but it can also improve the answer quality in many cases (Fig. 1). Note that in contrast to existing model- and system-level efforts for inference efficiency, SoT takes a novel "data-level" pathway by letting the LLM organize its output content. This novel perspective is becoming feasible and is expected to grow in relevance, owing to the evolving capabilities of state-of-the-art LLMs. We hope this work can stimulate more research in the realm of data-centric optimization (Zha et al., 2023; HazyResearch, 2023) for efficiency.
2307.15337#4
2307.15337#6
2307.15337
[ "2302.13971" ]
2307.15337#6
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Prompt 1. Skeleton Prompt Template T^s [User:] You're an organizer responsible for only giving the skeleton (not the full content) for answering the question. Provide the skeleton in a list of points (numbered 1., 2., 3., etc.) to answer the question. Instead of writing a full sentence, each skeleton point should be very short with only 3~5 words. Generally, the skeleton should have 3~10 points. Now, please provide the skeleton for the following question. {question} Skeleton: [Assistant:] 1. Prompt 2. Point-Expanding Prompt Template T^pe [User:] You're responsible for continuing the writing of one and only one point in the overall answer to the following question. {question} The skeleton of the answer is {skeleton} Continue and only continue the writing of point {point index}. Write it **very shortly** in 1~2 sentence and do not continue with other points! [Assistant:] {point index}. {point skeleton} The rest of the paper is organized as follows. We first introduce SoT in § 2 and show its results in § 3. Then, we expand on the SoT-R extension in § 4. § 5 positions SoT in the research ecosystem (expanded in App. D). Finally, we analyze the limitations and share outlooks of SoT in § 6. 2 SKELETON-OF-THOUGHT (SOT) 2.1 METHOD Overview. Based on the intuition that humans usually think about and answer a question in an organized way, the core idea of this work is to guide the LLM itself to give a skeleton first and then write the overall answer in parallel instead of sequentially. Fig. 1 illustrates how SoT produces the final answer to a user question q. (1) Skeleton stage. SoT first assembles a skeleton request, T^s(question = q), using the skeleton prompt template T^s (Prompt 1, and Prompt 3 in App. B.1) with the question q as the parameter. The skeleton prompt template is written to guide the LLM to output a concise skeleton of the answer. Then, we extract the B points from the skeleton response R^s of the LLM.
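A minimal end-to-end sketch of the two stages built on the templates above (prompt strings abbreviated from Prompt 1 and Prompt 2; `call_llm` is a hypothetical wrapper around whichever API or local model is used, and the regular expression is one possible point extractor, not the paper's exact parser):

```python
import re
from concurrent.futures import ThreadPoolExecutor

SKELETON_PROMPT = (
    "You're an organizer responsible for only giving the skeleton (not the full content) "
    "for answering the question. Provide the skeleton in a list of points (numbered 1., 2., 3., etc.) "
    "... Now, please provide the skeleton for the following question.\n{question}\nSkeleton:\n1."
)
POINT_PROMPT = (
    "You're responsible for continuing the writing of one and only one point in the overall "
    "answer to the following question.\n{question}\nThe skeleton of the answer is\n{skeleton}\n"
    "Continue and only continue the writing of point {point_index}. "
    "Write it **very shortly** in 1~2 sentence and do not continue with other points!"
)

def skeleton_of_thought(question, call_llm):
    # (1) Skeleton stage: one request that asks only for the outline (the prompt pre-fills "1.").
    skeleton = "1." + call_llm(SKELETON_PROMPT.format(question=question))
    points = re.findall(r"(\d+)\.\s*([^\n]+)", skeleton)   # e.g. [("1", "Active listening"), ...]

    # (2) Point-expanding stage: one request per point, issued in parallel
    # (parallel API calls; a batched forward pass plays the same role for local models).
    def expand(point):
        index, text = point
        prompt = POINT_PROMPT.format(question=question, skeleton=skeleton, point_index=index)
        return f"{index}. {text} " + call_llm(prompt)

    with ThreadPoolExecutor(max_workers=max(len(points), 1)) as pool:
        expansions = list(pool.map(expand, points))
    return "\n".join(expansions)   # concatenate the expanded points into the final answer
```

For open-source models run locally, the per-point requests would instead be stacked into one batch and decoded together, as described in the point-expanding discussion that follows.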
2307.15337#5
2307.15337#7
2307.15337
[ "2302.13971" ]
2307.15337#7
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
(2) Point-expanding stage. Based on the skeleton, we let the LLM expand on each point in parallel. Specifically, for the point with index b and skeleton R^s_b, SoT uses T^pe(question = q, skeleton = R^s, point index = b, point skeleton = R^s_b) as the point-expanding request for the LLM, where T^pe is the point-expanding prompt template (Prompt 2). Finally, after completing all points, we concatenate the point-expanding responses {R^pe_b} (b = 1, ..., B) to get the final answer.

Parallel point expanding. We conduct parallel point expanding so that SoT is able to achieve a speed-up over normal decoding. (1) For proprietary models with only API access, we can issue multiple parallel API calls to get an end-to-end latency gain at the cost of an increased number of API requests and tokens. (2) For open-source models that we can run locally, we let them process the point-expanding requests as a batch (paddings are added to the left of the point-expanding requests). We explain below why this achieves speed-ups. A typical LLM generative process consists of two phases: (a) the prefilling phase, in which the prompt is parsed to generate the key-value cache for further use, and (b) the decoding phase, in which tokens are generated one by one in a sequential manner. The decoding phase accounts for the majority of the end-to-end latency, especially when generating a long response. Note that the decoding phase is bottlenecked by weight loading instead of activation
2307.15337#6
2307.15337#8
2307.15337
[ "2302.13971" ]
2307.15337#8
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
loading or computation.¹ Consequently, running LLM inference with increased batch sizes does not increase the per-token latency much. Therefore, SoT allows us to decode roughly B× more tokens within the same amount of time if we decode the B points in parallel. See App. E for the expanded discussion and the supporting experiments. Please refer to App. B for more implementation details of SoT.

# 3 SOT EVALUATION

Datasets. We evaluate SoT on two recent assistant-style datasets: (1) Vicuna-80 (Chiang et al., 2023), which contains 80 questions spanning nine categories, such as coding, math, writing, roleplay, and so on, and (2) WizardLM (Xu et al., 2023), which contains 218 questions spanning more categories and diverse difficulties. Due to space constraints, we only report Vicuna-80 results in the main paper and defer WizardLM results to the appendices.
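To make the batched point expanding described above concrete, below is a minimal sketch for a locally hosted model. The checkpoint name and the point-expanding prompts are placeholders rather than the paper's exact setup; the key detail is left padding, so that all B requests can be decoded as one batch and the per-token decoding cost is shared.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "huggyllama/llama-7b"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.padding_side = "left"          # pad point-expanding requests on the left
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")

def expand_points_batched(pe_prompts, max_new_tokens=128):
    """Decode all B point-expanding requests as a single left-padded batch."""
    inputs = tokenizer(pe_prompts, return_tensors="pt", padding=True).to(model.device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # With left padding, all prompts occupy the same number of leading positions,
    # so slicing them off leaves only the newly generated tokens.
    generated = outputs[:, inputs["input_ids"].shape[1]:]
    return tokenizer.batch_decode(generated, skip_special_tokens=True)
```

For API-based models, the analogous step is simply issuing the B point-expanding requests as parallel API calls (e.g., via a thread pool) and waiting for the slowest one.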
2307.15337#7
2307.15337#9
2307.15337
[ "2302.13971" ]
2307.15337#9
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
(See Apps. G and I.)

Models. We test SoT on 12 recently released models, including 9 open-source models and 3 API-based models (Table 1). We obtain the weights of all the open-source models from Hugging Face. See App. A for more details.

3.1 EVALUATION OF EFFICIENCY

API-based models. We record the latency of every API call with start = time.time(); ...; elapsed_time = time.time() - start, and add the latency of the skeleton API call and the slowest point-expanding API call as the SoT latency.

Open-source models. All open-source models we currently evaluate are based on the LLaMA 7B, 13B, or 33B architectures. Thus, to enable fast analysis, we first make a latency profiling table for each LLaMA architecture on an NVIDIA A100. The table contains the architecture's (1) latency for prefilling sequences of length 1 to 700 with different batch sizes (from 1 to 16), and (2) latency for decoding one token with a context of length 1 to 1024 with different batch sizes (from 1 to 16). With these three latency profiling tables, given the number of points B and the token lengths of the requests and responses in the skeleton and point-expanding stages, we can quickly estimate the SoT latency by simply looking up entries in the tables and adding them up.
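As a rough illustration of this table-lookup estimate, the sketch below adds up per-stage prefill and per-token decoding latencies. The table layout (dictionaries keyed by batch size and length) and the single-longest-response simplification are assumptions for illustration; the actual profiling and estimation procedure is described in App. F.

```python
def estimate_sot_latency(prefill_ms, decode_ms,
                         skel_prompt_len, skel_resp_len,
                         pe_prompt_len, pe_resp_len, num_points):
    """prefill_ms[(batch, prompt_len)] and decode_ms[(batch, context_len)] hold
    profiled latencies (in ms) for one LLaMA architecture on an A100."""
    # Skeleton stage: a single request (batch size 1).
    latency = prefill_ms[(1, skel_prompt_len)]
    for t in range(skel_resp_len):
        latency += decode_ms[(1, skel_prompt_len + t)]
    # Point-expanding stage: all B points decoded together as one batch;
    # the stage finishes with the longest point-expanding response.
    latency += prefill_ms[(num_points, pe_prompt_len)]
    for t in range(pe_resp_len):
        latency += decode_ms[(num_points, pe_prompt_len + t)]
    return latency
```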
2307.15337#8
2307.15337#10
2307.15337
[ "2302.13971" ]
2307.15337#10
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
See App. F for a more detailed description of how we conduct the profiling and estimate the latency. In addition to the above approach, we also compare the actual latency of SoT and normal sequential generation (abbreviated as "normal" in the following discussion) in App. G.1.4. The rest of this section shows the speed-ups of SoT on different models (§ 3.1.1) and question categories (§ 3.1.2). In addition, we also report the latency breakdown of SoT stages in App. G.1.2 and the SoT speed-ups on an RTX 3090 GPU in App. G.1.3.

3.1.1 SPEED-UP BREAKDOWN: MODELS

We investigate how SoT reduces the end-to-end latency on different models. Fig. 2a shows the average speed-up for each model across all question categories. We can see that SoT obtains a >2× speed-up (up to 2.39×) on 8 out of 12 models. We report the detailed statistics about token lengths and numbers of points in Fig. 11. (1) In terms of the point number B (Fig. 11a), LLaMA2, Vicuna-7B V1.1, Vicuna-7B V1.3, and ChatGPT-3.5 yield relatively few points (<6), while GPT-4 and StableVicuna-13B generate the largest numbers of points on average (≈9). (2) Regarding the point-expanding response length, Figs. 11b to 11d show that the API-based models, ChatGPT-3.5, Claude, and GPT-4, follow the point-expanding request better and generate shorter point-expanding responses than the open-source models. One can also notice that StableVicuna-13B's longest point-expanding responses for many question categories can be as lengthy as the overall normal answer, since it fails to adhere to the "Write it **very shortly**" instruction in the point-expanding request. Consequently, SoT cannot accelerate StableVicuna-13B well. (3) Regarding the length balance degree between point responses, Fig. 11e shows that LLaMA2 and the API-based models generate more balanced point-expanding responses.
2307.15337#9
2307.15337#11
2307.15337
[ "2302.13971" ]
2307.15337#11
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
¹ This is true when the number of concurrent queries is small; see § 6 for discussion on other scenarios.

(4) As for the overall length of the final aggregated answer (Fig. 11f), employing SoT on most models results in answers that are, on average, 1~2× longer than the normal answer.

Figure 2: Average speed-ups of SoT on different models and question categories. (a) Different models. (b) Different categories.

3.1.2 SPEED-UP BREAKDOWN: QUESTION CATEGORIES

Here we investigate how SoT reduces the end-to-end latency for different question categories. Fig. 2b shows the average speed-up for each question category across all models. The question categories for which SoT can provide high-quality answers are marked in green, and the other categories are marked in red (see § 3.2.3 for the answer quality evaluation). We can see that SoT obtains speed-ups for all question categories. For the five question categories for which SoT provides high-quality answers (i.e., knowledge, generic, common-sense, roleplay, counterfactual), SoT speeds up the overall answer generation process by 1.89× to 2.33×.

3.2 EVALUATION OF ANSWER QUALITY

In order to compare the answer quality of the normal sequential generation (abbreviated as "normal" in the following discussion) and SoT generation, we adopt two LLM-based evaluation frameworks: FastChat (Zheng et al., 2023) and LLMZoo (Chen et al., 2023c). The evaluation process is to present a question and a pair of answers (from normal or SoT generation) to an LLM judge (GPT-4 in the main paper; see App. I.3 for the results evaluated using ChatGPT-3.5) and ask for its preference. The response can be that SoT's answer wins/ties/loses compared to the normal answer. Here are more details about the evaluation of the answer quality: (1) Detailed metrics. FastChat evaluation provides one metric for the general quality of the answers. In addition to a general metric, LLMZoo provides five detailed metrics on the answers'
2307.15337#10
2307.15337#12
2307.15337
[ "2302.13971" ]
2307.15337#12
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
coherence, diversity, immersion, integrity, and relevance. (2) Question categories. FastChat provides two special evaluation prompts for coding and math questions for more accurate evaluation, whereas LLMZoo does not. Following the implementation in LLMZoo, we exclude math and coding questions from all LLMZoo evaluation results. (3) Extensions to avoid evaluation bias. To avoid the potential bias from the order of the two answers presented to the LLM judge, we extend the FastChat and LLMZoo evaluation frameworks by running the evaluation twice with either ordering of the two answers. In either evaluation, a score of 1, 0, or -1 is assigned when SoT wins, ties, or loses, respectively. The final evaluation is that SoT wins/ties/loses when the sum of the two scores is positive/zero/negative. For example, if SoT wins in one evaluation and loses in the other evaluation, the result is a
2307.15337#11
2307.15337#13
2307.15337
[ "2302.13971" ]
2307.15337#13
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
"tie". If SoT wins (loses) in one evaluation and ties in the other, the result is "win" ("lose"). (4) Net win rates. We further define net win rates to give a summarized view of the answer quality. Given the number of questions that SoT wins (#win) and loses (#lose), we define the net win rate as (#win − #lose) / (total number of questions). 0% means that SoT performs competitively with the normal baseline (it wins and loses on the same number of questions). Higher values mean that SoT performs better.

The organization of this section on answer quality evaluation is as follows. We first present the overall quality of SoT answers (§ 3.2.1), and then go into the details across different models (§ 3.2.2), question categories (§ 3.2.3), and metrics (§ 3.2.4).
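A small sketch of the order-debiased judgment and the net win rate defined above is given below. Here judge(question, answer_a, answer_b) is a hypothetical wrapper around the GPT-4 judge that returns +1, 0, or -1 when answer_a wins, ties, or loses against answer_b.

```python
def debiased_outcome(question, sot_answer, normal_answer, judge):
    # Run the judge twice with the two answer orderings; the second call is
    # negated because its +1 would mean the normal answer (presented first) wins.
    score = judge(question, sot_answer, normal_answer) \
            - judge(question, normal_answer, sot_answer)
    return "win" if score > 0 else ("lose" if score < 0 else "tie")

def net_win_rate(outcomes):
    wins = sum(o == "win" for o in outcomes)
    loses = sum(o == "lose" for o in outcomes)
    return (wins - loses) / len(outcomes)
```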
2307.15337#12
2307.15337#14
2307.15337
[ "2302.13971" ]
2307.15337#14
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
# 3.2.1 OVERALL QUALITY

In Fig. 3, we show the win/tie/lose rates (the percentage of cases in which SoT wins/ties/loses compared to normal generation) across all models and questions using the two metrics from FastChat and LLMZoo that capture the general quality of the answers. We notice a discrepancy between the two metrics on when SoT is strictly better than the baseline (45.8% vs. 29.5%). Despite that, the two metrics agree that SoT is not worse than the baseline in around 60% of the cases, and the win rates are close to the lose rates. This result suggests that the answers of SoT maintain quality comparable to that of normal generation.
2307.15337#13
2307.15337#15
2307.15337
[ "2302.13971" ]
2307.15337#15
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
[Figure 3 chart: Win / Tie / Lose bars for "General quality (FastChat)" and "General quality (LLMZoo)", 0%–100%.]
Figure 3: Win/tie/lose rates of SoT vs. normal generation using "general" metrics from FastChat and LLMZoo. SoT performs better than or equal to normal generation in around 60% of cases.

# 3.2.2 QUALITY BREAKDOWN: MODELS

Next, we investigate how SoT performs on different models. We compute net win rates on all models in Fig. 4. Again, we see that the two general metrics from FastChat and LLMZoo have different absolute values but similar rankings. In particular, both metrics agree that OpenChat-13B, Vicuna-7B V1.1, Claude, and LLaMA2-Chat-13B have low net win rates, whereas Vicuna-13B V1.3, StableVicuna-13B, and UltraLM-13B have high net win rates.

(a) Metric: general quality (FastChat). (b) Metric: general quality (LLMZoo). Figure 4: Net win rates of SoT on different models.

We investigate the answers in App. I.1.1 and summarize the key takeaways as follows. Some models have low SoT quality because they cannot understand the skeleton and point-expanding prompts well. Other models have low SoT quality because their normal answers already have good quality, making it hard for SoT to beat them (e.g., Claude). For models that are able to understand the SoT prompts, the answer quality is improved. We expect that further improving the SoT prompts or fine-tuning the models can make it easier for LLMs to understand the skeleton and point-expanding prompts and ultimately result in better answer quality.

3.2.3 QUALITY BREAKDOWN: QUESTION CATEGORIES

Next, we investigate how SoT performs on different question categories. We compute net win rates (win rates minus lose rates) on all question categories in Fig. 5. Similar to Fig. 3, we see that LLMZoo tends to be more optimistic about the quality of SoT than FastChat. Nevertheless, the conclusions are consistent: SoT performs relatively well on generic, common-sense, knowledge, roleplay, and counterfactual questions.
2307.15337#14
2307.15337#16
2307.15337
[ "2302.13971" ]
2307.15337#16
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
SoT performs relatively poorly on writing, fermi, math, and coding. We investigate the answers in App. I.1.2 and summarize the key takeaways as follows. SoT performs well when the question can be answered in several points whose details can be expanded independently. This includes a wide range of real-world questions. On the other hand, it is fundamentally challenging to apply SoT to questions that require step-by-step thinking, in which the
2307.15337#15
2307.15337#17
2307.15337
[ "2302.13971" ]
2307.15337#17
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
latter steps require the details from the earlier steps, such as math questions.

[Figure 5: Net win rates of SoT on different question categories. (a) Metric: general quality (FastChat). (b) Metric: general quality (LLMZoo).]

To make SoT general across broader question categories, one promising pathway is to enable SoT to adaptively fall back to normal generation, which we explore in § 4. Interestingly, our results suggest that some LLMs are already able to do that occasionally without special prompting or tuning (see App. I.1.2).

# 3.2.4 QUALITY BREAKDOWN: METRICS

All previous evaluations use metrics about the general quality of the answer. In Fig. 6, we show more detailed metrics from LLMZoo to reveal the aspects in which SoT can improve or hurt the answer quality. On average, we can see that SoT improves the diversity and relevance while hurting the immersion and coherence.
2307.15337#16
2307.15337#18
2307.15337
[ "2302.13971" ]
2307.15337#18
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
[Figure 6 chart: Win / Tie / Lose bars for Diversity, Relevance, Immersion, Coherence, and Integrity, 0%–100%.]
Figure 6: Win/tie/lose rates of SoT vs. normal generation using metrics from LLMZoo. SoT performs well on diversity and relevance, and relatively worse on coherence and immersion.

Through answer investigation (App. I.1.3), we summarize the key takeaways as follows. The skeleton stage of SoT explicitly requires LLMs to discuss the answer from multiple aspects without filler words. This improves the diversity and relevance of the answers. As for coherence and immersion, SoT is not worse than normal generation in around 60% of the cases. One future direction is to improve the SoT prompts or pipeline so that the answers can be better on more metrics.

# 4 SOT WITH ROUTER (SOT-R): ADAPTIVELY TRIGGERING SOT

In § 3, we see that SoT provides considerable speed-ups while maintaining (or even improving) answer quality for many question types. However, the biggest limitation is that SoT is not suitable for questions that require step-by-step reasoning (§ 3.2.3). Towards pushing the practical adoption of SoT, we explore the possibility of adaptively triggering SoT only when it is suitable. To achieve that, we propose a router module that decides whether SoT should be applied to the user request, and then calls either SoT or normal decoding accordingly. This paradigm aligns with the recent trend of composing multiple models to solve complicated tasks (Chase, 2022; Shen et al., 2023). To implement the router, we explore two options: LLM prompting as the router (no model training is needed) (§ 4.1), and a trained RoBERTa as the router (§ 4.2). The evaluation is provided in § 4.3.

4.1 PROMPTING ROUTER

We directly ask an LLM if the question is suitable for SoT. More specifically, we ask the LLM if the desired answer is in a list of independent points (see App. C.1 for the prompt). If the answer is yes,
2307.15337#17
2307.15337#19
2307.15337
[ "2302.13971" ]
2307.15337#19
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
we will use SoT; otherwise, we will use normal generation (i.e., directly feeding the question to the LLM). We employ GPT-4 as the LLM router given its strong capability.

4.2 TRAINED ROUTER

While leveraging GPT-4 as the router obviates the need for model training, its performance remains sensitive to prompt design. Therefore, we approach the problem as a sequence classification task by fine-tuning a small language model as the router. Specifically, we annotate the LIMA dataset (Zhou et al., 2023) as the training set to train a RoBERTa model (Liu et al., 2019), which has only 120M parameters. Comprehensive details regarding the annotation and training processes can be found in Apps. C.2.1 and C.2.2, respectively.

4.3 SOT-R EVALUATION

We compare SoT and SoT-R under the same evaluation setup as in § 3. Besides the prompting and trained routers, we also consider a "human router" where we manually judge whether SoT should be applied for each question. This serves as a benchmark for comparison.

# 4.3.1 EVALUATION OF EFFICIENCY

Fig. 7 shows the speed-ups of SoT and SoT-R for different models on the Vicuna-80 dataset (see App. G.2 for more results on the WizardLM dataset). We can see that: (1) As expected, SoT-R obtains lower speed-ups than SoT, since SoT is not triggered for some questions and the router induces a small latency overhead. Nevertheless, SoT-R can still benefit most models with >1× speed-ups. (2) SoT-R with the trained router obtains slightly higher speed-ups for 7 out of 12 models on Vicuna-80, while SoT-R with the prompting router obtains higher speed-ups for all models on the WizardLM dataset (see Fig. 17 in App. G.2).
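At inference time, the trained router described in § 4.2 amounts to a binary text classifier placed in front of the two decoding paths. The sketch below is a minimal illustration; the checkpoint path is a placeholder, and label 1 is assumed to mean "suitable for SoT".

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
router = AutoModelForSequenceClassification.from_pretrained(
    "path/to/finetuned-roberta-router", num_labels=2)  # placeholder checkpoint

def answer(question, sot_generate, normal_generate):
    """Route the question to SoT or normal decoding based on the classifier."""
    inputs = tokenizer(question, return_tensors="pt", truncation=True)
    with torch.no_grad():
        use_sot = router(**inputs).logits.argmax(dim=-1).item() == 1
    return sot_generate(question) if use_sot else normal_generate(question)
```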
2307.15337#18
2307.15337#20
2307.15337
[ "2302.13971" ]
2307.15337#20
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
[Plot content of Figs. 7 and 8: per-model speed-ups and per-category net win rates for SoT (w/o router), SoT-R w/ prompting router, SoT-R w/ trained router, and SoT-R w/ human router; see the captions below.]
2307.15337#19
2307.15337#21
2307.15337
[ "2302.13971" ]
2307.15337#21
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Figure 7: Speed-ups of SoT and SoT-R on different models across all question categories of the Vicuna-80 dataset. Figure 8: Net win rates of SoT and SoT-R on different question categories of the Vicuna-80 dataset (evaluated with the FastChat metrics).

# 4.3.2 EVALUATION OF ANSWER QUALITY

Fig. 8 shows the net win rates (averaged across all models) of SoT and SoT-R on Vicuna-80 with the FastChat metrics (see App. I.2 for results on the WizardLM dataset and with the LLMZoo metrics). We can see that: (1) SoT-R significantly improves the answer quality on questions where SoT is not suitable (e.g., coding, math, writing, fermi) by falling back to normal decoding. At the same time, SoT-R maintains the answer quality improvements on questions that SoT is good at. (2) The trained router performs similarly to (on Vicuna-80) or better than (on WizardLM; see App. I.2) the prompting router. This accords with our intuition in § 4.2. (3) The prompting and trained routers can even surpass the human router (e.g., on roleplay questions; see more examples on WizardLM in App. I.2). We discuss the consistency across the three routers in App. C.3. The primary takeaways include: (1) on Vicuna-80, there is notable consistency among all three routers, and (2) on WizardLM, greater discrepancies emerge, with the trained router showing higher alignment with human annotations.

# 5 RELATED WORK

This section positions SoT in related work to reveal how SoT (1) is connected to, (2) is different from, and (3) can harness the power of other methods. See App. D for the expanded discussion.

Efficient LLM methods at model and system levels. At the model level, prior work proposes efficient architectures, including dynamic mixture-of-experts (Lepikhin et al., 2021), low-complexity
2307.15337#20
2307.15337#22
2307.15337
[ "2302.13971" ]
2307.15337#22
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
attention (Kitaev et al., 2020), and multi-query attention (Shazeer, 2019). However, they usually require a significant re-training cost. In contrast, compression methods require a smaller amount of fine-tuning cost by reducing the complexity of pre-trained LLMs, such as quantization (Frantar et al., 2022) and weight or activation sparsification (Mishra et al., 2021; Zaheer et al., 2020).

At the system level, prior work (1) optimizes the computational graph (Dao et al., 2022), (2) optimizes the assignment and scheduling of the computational graph on devices (Sheng et al., 2023), or (3) designs batching or caching mechanisms for serving multiple users (Fang et al., 2021). These techniques address the large memory access and footprint posed by the vast model scale and the attention mechanism, and mainly aim at enhancing the throughput rather than the end-to-end latency. As SoT trades off throughput for end-to-end latency, SoT can make these throughput-oriented techniques help with end-to-end latency. This interesting synergy offers opportunities for achieving better trade-offs between latency and throughput in future serving systems. In contrast to model- and system-level techniques, SoT is a data-level technique in a new "content co-organization for efficiency" paradigm. See § 6 for more discussions.

Efficient LLM methods through parallel generation. Some prior work also addresses the sequential decoding issue. Speculative decoding (SD) methods (Stern et al., 2018) employ smaller models to generate some consecutive tokens sequentially and apply the target LLMs to verify them in parallel. Non-autoregressive generation (NAG) methods (Gu et al., 2018; Xiao et al., 2023) sample and refine consecutive tokens in parallel, often with the support of a modified and tuned model. Relying on either assisting models or special models and sampling schemes, SD and NAG methods conduct parallel verification or parallel sampling and refinement of consecutive tokens.
2307.15337#21
2307.15337#23
2307.15337
[ "2302.13971" ]
2307.15337#23
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
In contrast, SoT prompts the LLM itself to plan the contents in a way that permits the parallel generation of tokens in different segments, by exploiting the emerging instruction-following and planning ability of LLMs.

Prompting methods for LLMs. Recent years have witnessed the emergence of the "pre-train, prompt, and predict" paradigm, which has shown promise in enhancing LLMs' quality in math and commonsense reasoning (Wei et al., 2022; Kojima et al., 2022; Wang et al., 2022; Chen et al., 2022) and planning for multi-modality tasks (Shen et al., 2023; Zhu et al., 2023). Instead of focusing on answer quality, SoT is a first attempt at exploiting the power of prompting to improve efficiency.

# 6 LIMITATIONS, FUTURE WORK, AND OPEN QUESTIONS

Answer quality evaluation. Our answer quality evaluation is far from perfect due to the limited prompt set, the potential bias of GPT-4 judges, and the inherent difficulty of evaluating LLM generations. We did not conduct a human evaluation, since it is easy for a human to tell whether an answer is generated with SoT due to its distinctive pattern, which might cause evaluation bias. We leave a more thorough evaluation of answer quality to future work.

Eliciting or improving LLMs' ability. § 3.2.4 demonstrates SoT's potential for enhancing answer quality. It is part of a broader trend in recent research, exemplified by work including CoT (Kojima et al., 2022; Wei et al., 2022), ToT (Yao et al., 2023), and ReAct (Yao et al., 2022), which collectively affirm the notion that explicitly articulating the thought process in language can elicit high-quality answers from LLMs. These findings resemble human thinking: rather than relying solely on the first intuition or purely sequential thinking, we often document step-by-step reasoning or thought organization to attain high-quality answers. This intriguing parallel prompts us to explore further how we can draw from the human thinking process to facilitate more effective and efficient AI. For instance, SoT currently ignores the dependencies between points.
2307.15337#22
2307.15337#24
2307.15337
[ "2302.13971" ]
2307.15337#24
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
A conceptually better way is to organize the points as a Graph-of-Thoughts, where the edges represent the dependencies and each point is decoded conditioned on the contents of its ancestor points. In addition, instead of complying with a static graph, we anticipate the need for a dynamic Graph-of-Thoughts, where the high-level thought structure is adjusted dynamically by the LLMs themselves. This could potentially combine the efficiency and global-thinking advantages of SoT with the logical reasoning and impromptu thinking strengths of methods like CoT (Kojima et al., 2022; Wei et al., 2022). Notably, a contemporary work (Besta et al., 2023) has attempted to design Graph-of-Thoughts to elicit reasoning. Furthermore, there exist self-improving training pipelines (Zelikman et al., 2022; Huang et al., 2022) that use rationales generated by CoT to fine-tune LLMs, thereby enhancing their reasoning abilities.
2307.15337#23
2307.15337#25
2307.15337
[ "2302.13971" ]
2307.15337#25
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Likewise, it is interesting to investigate how the more structured answers from SoT can be used to fine-tune LLMs to enhance their ability to generate well-organized and comprehensive answers.

Efficiency and overhead of SoT in different scenarios. Serving systems commonly adopt batch processing to handle concurrent queries. This raises the concern of whether SoT may hurt serving throughput due to its parallel requests. (1) When there is an unsaturated number of concurrent queries, SoT can effectively reduce latency and enhance GPU utilization. Example scenarios include (a) edge-side applications with a single user and (b) centralized services during periods with unsaturated user requests and underutilized computing capacity. It is interesting to study the appropriate SoT triggering conditions based on system workloads. (2) When there is a saturated number of concurrent queries, SoT is still useful for improving answer quality. However, in this case, it is important to consider the computation overhead from SoT. We delve into this concern in App. H.

For API-based models, a notable concern arises regarding the increased number of prefilling tokens (App. H). Given that many APIs charge by token usage, SoT may lead to higher costs. To address this, one can tune the number of parallel API requests (by expanding multiple points in a single API call), or use prompt tuning to design shorter SoT prompts (see App.
2307.15337#24
2307.15337#26
2307.15337
[ "2302.13971" ]
2307.15337#26
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
H).

Data-centric efficiency optimization. While data-centric engineering for improving answer quality (Zha et al., 2023; HazyResearch, 2023) is gaining popularity, its potential for inference efficiency has not yet been explored. SoT is the first attempt. As LLM capabilities and the amount of LLM-generated data grow rapidly, data-centric techniques could become more useful in the future. We look forward to more explorations to unlock the full potential of data-centric efficiency optimization.

# ACKNOWLEDGEMENTS

We thank Sergey Yekhanin (Microsoft Research) and Tianji Wu (Infinigence AI) for their support and suggestions on the work. We thank Tianyu Fu for many initial discussions on the idea. We thank Ke Hong and Genghan Zhang for their discussions about profiling. We thank Yue Wu for the help on the Claude scripts. We thank Da Yu, Chulin Xie, and Saiqian Zhang for their suggestions on revising the first version of the paper. We thank Rui Hu, Cheng Cheng, Jack Jin, Zhoutong Ye, Mingze Sun, Jun Yan, Zhi Zhang, Yuxuan Tong, and Nianhui Guo for their suggestions on revising the second version of the paper.

# REFERENCES
2307.15337#25
2307.15337#27
2307.15337
[ "2302.13971" ]
2307.15337#27
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Anthropic. Introducing Claude, May 2023. URL https://www.anthropic.com/index/introducing-claude.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–
2307.15337#26
2307.15337#28
2307.15337
[ "2302.13971" ]
2307.15337#28
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
1901, 2020.
Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791, 2019.
Harrison Chase. LangChain, October 2022. URL https://github.com/hwchase17/langchain.
Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318, 2023a.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022.
2307.15337#27
2307.15337#29
2307.15337
[ "2302.13971" ]
2307.15337#29
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Zhaodong Chen, Zheng Qu, Yuying Quan, Liu Liu, Yufei Ding, and Yuan Xie. Dynamic N:M fine-grained structured sparse attention mechanism. In Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming, pp. 369–379, 2023b.
Zhihong Chen, Junying Chen, Hongbo Zhang, Feng Jiang, Guiming Chen, Fei Yu, Tiannan Wang, Juhao Liang, Chen Zhang, Zhiyi Zhang, Jianquan Li, Xiang Wan, Haizhou Li, and Benyou Wang. LLM Zoo: Democratizing ChatGPT. https://github.com/FreedomIntelligence/LLMZoo, 2023c.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna:
2307.15337#28
2307.15337#30
2307.15337
[ "2302.13971" ]
2307.15337#30
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https: //lmsys.org/blog/2023-03-30-vicuna/. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language mod- els. arXiv preprint arXiv:2210.11416, 2022. Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher R´e. Flashattention: Fast and memory- efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344â 16359, 2022. Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus.
2307.15337#29
2307.15337#31
2307.15337
[ "2302.13971" ]
2307.15337#31
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Exploiting linear structure within convolutional networks for efficient evaluation. Advances in Neural Information Processing Systems, 27, 2014.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320–335, 2022.
2307.15337#30
2307.15337#32
2307.15337
[ "2302.13971" ]
2307.15337#32
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. The Journal of Machine Learning Research, 20(1):1997–2017, 2019.
Jiarui Fang, Yang Yu, Chengduo Zhao, and Jie Zhou. TurboTransformers: An efficient GPU serving system for transformer models. In Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 389–402, 2021.
William Fedus, Barret Zoph, and Noam Shazeer.
2307.15337#31
2307.15337#33
2307.15337
[ "2302.13971" ]
2307.15337#33
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1):5232–5270, 2022.
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.
Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Hassan Sajjad, Preslav Nakov, Deming Chen, and Marianne Winslett. Compressing large-scale transformer-based models:
2307.15337#32
2307.15337#34
2307.15337
[ "2302.13971" ]
2307.15337#34
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
A case study on BERT. Transactions of the Association for Computational Linguistics, 9:1061–1080, 2021.
Joao Gante. Assisted generation: A new direction toward low-latency text generation. https://huggingface.co/blog/assisted-generation, 2023. Accessed: 2023-06-23.
Google. TensorFlow Serving, 2021. URL https://github.com/tensorflow/serving.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. Non-autoregressive
2307.15337#33
2307.15337#35
2307.15337
[ "2302.13971" ]
2307.15337#35
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Non-autoregressive neural machine translation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1l8BtlCb. 11 Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. HazyResearch. Data-centric data-centric-ai, 2023. Accessed: 2023-07-04. ai. https://github.com/HazyResearch/ Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. arXiv preprint arXiv:2210.11610, 2022. Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32, 2019. Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Shigang Li, and Torsten Hoefler. Data movement is all you need: A case study on optimizing transformers. Proceedings of Machine Learning and Systems, 3:711â 732, 2021. Nikita Kitaev, Å ukasz Kaiser, and Anselm Levskaya.
2307.15337#34
2307.15337#36
2307.15337
[ "2302.13971" ]
2307.15337#36
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451, 2020.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213, 2022.
Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018.
Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. arXiv preprint arXiv:2309.06180, 2023.
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=qrwe7XHTmYb.
Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.
Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. arXiv preprint arXiv:2211.17192, 2022.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative agents for "mind" exploration of large scale language model society, 2023a.
Xiang Lisa Li and Percy Liang. Prefix-tuning:
2307.15337#35
2307.15337#37
2307.15337
[ "2302.13971" ]
2307.15337#37
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 2023b.
2307.15337#36
2307.15337#38
2307.15337
[ "2302.13971" ]