doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.15818 | 73 | • Network Architecture (designing and implementing model network modules, working on tokenization of actions, enabling inference of the model networks during experiments): Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Danny Driess, Pete Florence, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Brian Ichter, Alex Irpan, Isabel Leal, Lisa Lee, Henryk Michalewski, Igor Mordatch, Kanishka Rao, Michael Ryoo, Anikait Singh, Quan Vuong, Ayzaan Wahid, Jialin Wu, Fei Xia, Ted Xiao, and Tianhe Yu.
• Data Collection (collecting data on real robots, running real robot evaluations, executing operations required for running real robots): Noah Brown, Justice Carbajal, Tianli Ding, Krista Reymann, Grecia Salazar, Pierre Sermanet, Jaspiar Singh, Huong Tran, Stefan Welker, and Sichun Xu.
• Leadership (leading the project efforts, managing the project staff, advising on project directions): Yevgen Chebotar, Chelsea Finn, Karol Hausman, Brian Ichter, Sergey Levine, Yao Lu, Igor Mordatch, Kanishka Rao, Pannag Sanketi, Radu Soricut, Vincent Vanhoucke, and Tianhe Yu. | 2307.15818#73 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
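The summary above hinges on one mechanism: continuous robot actions are written out as text tokens so that they can be trained exactly like language. Below is a minimal sketch of that idea, assuming for illustration a 256-bin discretization and an 8-dimensional action vector (these specific values are assumptions, not taken from this chunk).

```python
import numpy as np

def action_to_token_string(action, low=-1.0, high=1.0, num_bins=256):
    """Discretize a continuous action vector into integer bins and render it
    as a plain-text token string, so it can be trained like ordinary text."""
    action = np.clip(np.asarray(action, dtype=np.float64), low, high)
    bins = np.round((action - low) / (high - low) * (num_bins - 1)).astype(int)
    return " ".join(str(b) for b in bins)

# Hypothetical 8-dim action layout: [terminate, dx, dy, dz, droll, dpitch, dyaw, gripper]
print(action_to_token_string([0.0, 0.1, -0.2, 0.05, 0.0, 0.0, 0.3, 1.0]))
```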
2307.15337 | 74 | Partial answer. In Prompts 1 and 2, we provide partial answers so that LLMs can follow the desired response format better.
We can put the partial answer at the end of the prompt for the open-source models to continue writing. An implementation detail is that different open-source models have different conversation templates (i.e., different ways to combine user and assistant messages into one string). For example, Vicuna (Chiang et al., 2023) uses the strings "USER:" and " ASSISTANT:" for the placeholders "[User:]" and "[Role]" in Prompts 1 and 2, respectively, while UltraLM (Ding et al., 2023) uses "User:" and "⟨/s⟩Assistant:". We build our open-source model experiments with the help of the FastChat codebase (Zheng et al., 2023), in which the conversation templates of many models are already handled correctly. We implement the conversation templates of OpenChat-13B, StableVicuna-13B, and UltraLM-13B according to their official guides and code. | 2307.15337#74 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
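To make the template discussion above concrete, here is a minimal sketch of assembling a prompt for an open-source model and appending a partial answer for it to continue. The Vicuna and UltraLM strings follow the chunk (with ⟨/s⟩ written as the literal </s> end-of-sequence marker, an assumption); the helper itself is illustrative and merely stands in for what the FastChat conversation templates handle in the actual experiments.

```python
# Illustrative only: the template dict and helper are not FastChat's API.
TEMPLATES = {
    "vicuna":  {"user": "USER:", "assistant": " ASSISTANT:"},
    "ultralm": {"user": "User:", "assistant": "</s>Assistant:"},
}

def build_prompt(model, user_message, partial_answer=""):
    t = TEMPLATES[model]
    # The partial answer is placed after the assistant tag so the model
    # continues writing from it instead of starting a fresh response.
    return f"{t['user']} {user_message}\n{t['assistant']} {partial_answer}"

print(build_prompt("vicuna", "List three uses of SoT.", "1."))
```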
2307.15818 | 74 | • Paper (working on the paper manuscript, designing paper visualizations and figures): Yevgen Chebotar, Danny Driess, Chelsea Finn, Pete Florence, Karol Hausman, Brian Ichter, Lisa Lee, Sergey Levine, Igor Mordatch, Karl Pertsch, Quan Vuong, Fei Xia, Ted Xiao, and Tianhe Yu. • Infrastructure (working on infrastructure and code base backbone needed for training models, running experiments, storing and accessing data): Anthony Brohan, Yevgen Chebotar, Danny Driess, Kehang Han, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Yao Lu, Igor Mordatch, Quan Vuong, Ayzaan Wahid, Fei Xia, Ted Xiao, Peng Xu, and Tianhe Yu.
# B. Datasets | 2307.15818#74 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15337 | 75 | For ChatGPT-3.5, we provide partial answers as the last message in the chat history from the assistant. Note that this is not a documented approach. We find it works well in most cases, in that ChatGPT-3.5 continues the text from the provided partial answer. However, in some rare cases, ChatGPT-3.5 repeats the provided partial answers.
Prompt 4. LLM Prompting as the Router
[User:] Question: {question}
How would you like to answer the question?
A. Organize the answer as a list of points or perspectives (in the format of 1., 2., 3., etc.), and the points or perspectives can be answered independently without referring to the contents of the previous points.
B. Organize the answer as a list of points or perspectives (in the format of 1., 2., 3., etc.), and the contents of later points or perspectives cannot be answered independently without referring to the contents of the previous ones.
C. Do not organize the answer as a list of points or perspectives.
Just say A, B, or C. Do not explain. Do not provide an answer to the question.
[Assistant:]
For Claude over Slack, there is no obvious way to give the API a partial answer. We resort to modifying the prompt template slightly by adding | 2307.15337#75 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
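A minimal sketch of the ChatGPT-3.5 trick described above: the partial answer is supplied as the final assistant message of the chat history so the model continues from it. Only the message list is shown; the chat-completion call itself is omitted because the exact client code is not specified in this chunk.

```python
def chatgpt_messages_with_partial_answer(question, skeleton_prompt, partial_answer):
    """Sketch: build a chat history whose last message is an assistant turn
    holding the partial answer, so the model continues from it (an
    undocumented behaviour that works in most cases, per the text)."""
    return [
        {"role": "user", "content": skeleton_prompt.format(question=question)},
        {"role": "assistant", "content": partial_answer},
    ]

msgs = chatgpt_messages_with_partial_answer(
    question="What are the benefits of parallel decoding?",
    skeleton_prompt="Question: {question}\nAnswer with a short skeleton of points.",
    partial_answer="Skeleton:\n1.",
)
# `msgs` would then be passed to a chat-completion API call.
```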
2307.15818 | 75 | # B. Datasets
The vision-language datasets are based on the dataset mixtures from Chen et al. (2023b) and Driess et al. (2023). The bulk of this data consists of the WebLI dataset, which is around 10B image-text pairs across 109 languages, filtered to the top 10% scoring cross-modal similarity examples to give 1B training examples. Many other captioning and visual question answering datasets are included as well, and more information on the dataset mixtures can be found in Chen et al. (2023b) for RT-2-PaLI-X, and in Driess et al. (2023) for RT-2-PaLM-E. When co-fine-tuning RT-2-PaLI-X, we do not use the Episodic WebLI dataset described by Chen et al. (2023a). | 2307.15818#75 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
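The WebLI filtering step mentioned above, keeping the top 10% of image-text pairs by cross-modal similarity, amounts to a percentile cut. A small sketch with stand-in scores; the scoring model itself is not part of this chunk.

```python
import numpy as np

def filter_top_fraction(examples, scores, keep_fraction=0.10):
    """Keep the examples whose cross-modal similarity score falls in the top
    `keep_fraction` of the corpus (scores are assumed to be given)."""
    scores = np.asarray(scores)
    threshold = np.quantile(scores, 1.0 - keep_fraction)
    return [ex for ex, s in zip(examples, scores) if s >= threshold]

examples = [f"pair_{i}" for i in range(1000)]
scores = np.random.default_rng(0).random(1000)   # stand-in similarity scores
kept = filter_top_fraction(examples, scores, keep_fraction=0.10)
print(len(kept))  # roughly 100 of the 1000 pairs survive
```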
2307.15337 | 76 | For Claude over Slack, there is no obvious way to give the API a partial answer. We resort to modifying the prompt template slightly by adding
Please start your answer from "{partial answer}" and do not output other things before that
at the end. We find that Claude understands and obeys it well. For GPT-4, we also take this approach.
System Message. We do not include the system message in the prompts for open-source models except LLaMA2.
The partial answer, "**very shortly**", and the 2-shot demonstrations discussed above are the only differences between the prompts we used across all models and all evaluations.
B.2 SUPPORTING MULTI-ROUND CONVERSATION
To use SoT in a multi-round conversation, we can just put the question and the final aggregated answer in the history, removing all the SoT prompts. In this way, using SoT in one conversation round will not introduce additional prefill cost in future rounds.
C IMPLEMENTATION DETAILS OF SKELETON-OF-THOUGHT WITH ROUTER
C.1 PROMPTING ROUTER | 2307.15337#76 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
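A minimal sketch of the multi-round bookkeeping in B.2: after an SoT round, only the user question and the final aggregated answer are appended to the history, and all SoT-specific prompts are discarded, so later rounds pay no extra prefill cost. `skeleton_fn` and `expand_fn` are placeholders for the actual model calls.

```python
def sot_round(history, question, skeleton_fn, expand_fn):
    """Run one SoT round and update the conversation history.

    `skeleton_fn` returns a list of skeleton points for the question;
    `expand_fn` expands a single point (both stand in for model calls).
    """
    points = skeleton_fn(history, question)
    expansions = [expand_fn(history, question, p) for p in points]  # parallelizable
    answer = "\n".join(expansions)
    # Only the question and the aggregated answer are kept; all SoT-specific
    # prompts are dropped from the history.
    history = history + [{"role": "user", "content": question},
                         {"role": "assistant", "content": answer}]
    return answer, history
```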
2307.15818 | 76 | The robotics dataset is based on the dataset from Brohan et al. (2022). This consists of demonstration episodes collected with a mobile manipulation robot. Each demonstration is annotated with a natural language instruction from one of seven skills: "Pick Object", "Move Object Near Object", "Place Object Upright", "Knock Object Over", "Open Drawer", "Close Drawer", "Place Object into Receptacle", and "Pick Object from Receptacle and place on the counter". Further details can be found in Brohan et al. (2022).
RT-2-PaLI-X weights the robotics dataset such that it makes up about 50% of the training mixture for co-fine-tuning. RT-2-PaLM-E weights the robotics dataset to be about 66% of the training mixture. | 2307.15818#76 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
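One simple way to realize the mixture weights quoted above (about 50% robot data for RT-2-PaLI-X, about 66% for RT-2-PaLM-E) is to sample each training example from the robot set with the target probability. The sampling scheme below is an assumption for illustration, not the paper's data pipeline.

```python
import random

def sample_cofinetune_batch(robot_data, web_data, batch_size, robot_weight=0.5):
    """Sketch: draw a co-fine-tuning batch in which roughly a `robot_weight`
    fraction of examples comes from the robot dataset, the rest from web data."""
    batch = []
    for _ in range(batch_size):
        pool = robot_data if random.random() < robot_weight else web_data
        batch.append(random.choice(pool))
    return batch

robot_data = ["robot_episode_%d" % i for i in range(100)]
web_data = ["web_example_%d" % i for i in range(100)]
batch = sample_cofinetune_batch(robot_data, web_data, batch_size=8, robot_weight=0.5)
```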
2307.15337 | 77 | C IMPLEMENTATION DETAILS OF SKELETON-OF-THOUGHT WITH ROUTER
C.1 PROMPTING ROUTER
We use Prompt 4 for querying GPT-4 as the router. If the answer is "A" (i.e., the question can be answered in a list of independent points), we will use SoT. Otherwise, if the answer is "B" (i.e., the answer is in a list of points but they depend on each other) or "C" (i.e., the answer should not be in a list of points), SoT is not suitable and we will fall back to normal decoding.
C.2 TRAINED ROUTER
We tackle the routing problem as a sequence classification task. We first annotate the LIMA training set (Zhou et al., 2023), and then fine-tune a RoBERTa model (Liu et al., 2019) using the labeled data. Finally, we apply the tuned RoBERTa as the router on Vicuna-80 and WizardLM. We detail the steps in the following.
# C.2.1 ANNOTATION PROCESS | 2307.15337#77 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
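The routing logic of C.1 reduces to one classification call plus a branch. A sketch, with `ask_gpt4`, `sot_decode`, and `normal_decode` as stand-ins for the actual calls and an abridged copy of Prompt 4:

```python
# Abridged version of Prompt 4 above; the full option texts are elided here.
ROUTER_PROMPT = (
    "Question: {question}\n"
    "How would you like to answer the question?\n"
    "A. ... a list of independent points ...\n"
    "B. ... a list of points that depend on each other ...\n"
    "C. ... not a list of points ...\n"
    "Just say A, B, or C. Do not explain. Do not provide an answer to the question."
)

def route_and_answer(question, ask_gpt4, sot_decode, normal_decode):
    """Trigger SoT only when the router answers 'A'; otherwise fall back."""
    choice = ask_gpt4(ROUTER_PROMPT.format(question=question)).strip().upper()
    if choice.startswith("A"):
        return sot_decode(question)
    return normal_decode(question)   # 'B' or 'C': normal sequential decoding
```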
2307.15818 | 77 | for co-fine-tuning. RT-2-PaLM-E weights the robotics dataset to be about 66% of the training mixture.
For the results on Language-Table in Table 1, our model is trained on the Language-Table datasets from Lynch et al. (2022). Our model is co-fine-tuned on several prediction tasks: (1) predict the action, given two consecutive image frames and a text instruction; (2) predict the instruction, given image frames; (3) predict the robot arm position, given image frames; (4) predict the number of timesteps between given image frames; and (5) predict whether the task was successful, given image frames and the instruction.
# C. Baselines
We compare our method to multiple state-of-the-art baselines that challenge different aspects of our method. All of the baselines use the exact same robotic data.
• RT-1: Robotics Transformer 1 Brohan et al. (2022) is a transformer-based model that achieved state-of-the-art performance on a similar suite of tasks when it was published. The model does not use VLM-based pre-training, so it provides an important data point demonstrating whether VLM-based pre-training matters. | 2307.15818#77 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
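The five Language-Table co-fine-tuning tasks listed above can all be phrased as (text prefix, target) pairs over the same episode. A minimal sketch follows; the prefix strings are illustrative, since the exact prompts are not given in this chunk.

```python
def make_language_table_examples(frames, instruction, action, arm_position,
                                 num_timesteps, success):
    """Sketch: turn one annotated pair of frames into the five text-prediction
    tasks described above. `frames` stands in for the image inputs; the prefix
    strings are hypothetical."""
    return [
        {"images": frames, "prefix": f"predict action: {instruction}", "target": str(action)},
        {"images": frames, "prefix": "predict instruction:", "target": instruction},
        {"images": frames, "prefix": "predict arm position:", "target": str(arm_position)},
        {"images": frames, "prefix": "predict elapsed timesteps:", "target": str(num_timesteps)},
        {"images": frames, "prefix": f"was the task successful? {instruction}", "target": str(success)},
    ]
```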
2307.15337 | 78 | # C.2.1 ANNOTATION PROCESS
In the classification task, a label of 1 (positive) indicates that this question can be answered with SoT, while a label of 0 (negative) suggests that using the normal generation mode is more suitable. We annotate the LIMA training set, which consists of 1,030 Q&As sourced from three community webpages: Stack Exchange, wikiHow, and the Pushshift Reddit. We also annotate the Vicuna-80 and WizardLM datasets for evaluation.
Table 3: Router confusion matrices on the Vicuna-80 dataset. Left: Rows are human annotations (H) and columns are the GPT-4 router (G). Middle: Rows are human annotations (H) and columns are the RoBERTa router (R). Right: Rows are the GPT-4 router (G) and columns are the RoBERTa router (R).
Table 4: Router confusion matrices on the WizardLM dataset. Left: Rows are human annotations (H) and columns are the GPT-4 router (G). Middle: Rows are human annotations (H) and columns are the RoBERTa router (R). Right: Rows are the GPT-4 router (G) and columns are the RoBERTa router (R). | 2307.15337#78 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
2307.15818 | 78 | • VC-1: VC-1 Majumdar et al. (2023a) is a visual foundation model that uses pre-trained visual representations specifically designed for robotics tasks. We use pre-trained representations from the VC-1 ViT-L model. Since VC-1 does not include language conditioning, we add this by separately embedding the language command via Universal Sentence Encoder Cer et al. (2018) to enable comparison to our method. In particular, we concatenate the resulting language embedding tokens to the image tokens produced by VC-1, and pass the concatenated token sequences through token learner Ryoo et al. (2021). The token sequences produced by token learner are then consumed by an RT-1 decoder-only transformer model to predict robot action tokens. We train the VC-1 baseline end-to-end and unfreeze the VC-1 weights during training, since this led to far better results than using frozen VC-1 weights. | 2307.15818#78 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
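Structurally, the VC-1 baseline described above concatenates language-embedding tokens with VC-1 image tokens, compresses them with a token learner, and decodes action tokens with an RT-1-style transformer. The shape-level sketch below uses placeholder modules and dimensions; it is not the paper's implementation.

```python
import torch
import torch.nn as nn

class VC1BaselineSketch(nn.Module):
    """Structural sketch only: every submodule is a stand-in."""
    def __init__(self, img_dim=1024, lang_dim=512, d_model=512,
                 num_learned_tokens=8, vocab=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)    # stand-in for VC-1 ViT-L features
        self.lang_proj = nn.Linear(lang_dim, d_model)  # stand-in for a USE sentence embedding
        self.token_queries = nn.Parameter(torch.randn(num_learned_tokens, d_model))
        self.token_learner = nn.MultiheadAttention(d_model, 8, batch_first=True)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, 8, batch_first=True), num_layers=2)
        self.action_head = nn.Linear(d_model, vocab)

    def forward(self, image_feats, lang_embed):
        # image_feats: (B, N, img_dim); lang_embed: (B, lang_dim)
        tokens = torch.cat([self.img_proj(image_feats),
                            self.lang_proj(lang_embed).unsqueeze(1)], dim=1)
        q = self.token_queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        compressed, _ = self.token_learner(q, tokens, tokens)  # token-learner-style pooling
        h = self.decoder(compressed)
        return self.action_head(h)  # logits over discretized action-token bins

model = VC1BaselineSketch()
logits = model(torch.randn(2, 196, 1024), torch.randn(2, 512))
```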
2307.15337 | 79 | (Cell values for the Table 3 and Table 4 confusion matrices; the 2×2 layout of these tables was lost in extraction.)
We use GPT-4 to assist the annotation process. Specifically, we present each question to GPT-4 and analyze its answer to determine whether SoT can be triggered for this question. We assign a positive label to a question if GPT-4's response meets two criteria: (1) it contains a list of points that can be expanded in parallel, (2) each point provides sufficient details (i.e., the point-expanding response is not too short), which will enable SoT to achieve a speed-up. Two of the paper's authors conduct the annotation process independently, and discuss the inconsistent annotations to decide the final label.
# C.2.2 TRAINING DETAILS | 2307.15337#79 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
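The two labeling criteria above can be approximated mechanically; the heuristic below only illustrates them. In the paper, the labels were decided by the authors, with GPT-4's answers used as an aid.

```python
import re

def looks_sot_suitable(answer, min_point_chars=80):
    """Heuristic sketch of the two annotation criteria: the answer contains a
    numbered list of points, and each point is long enough that expanding the
    points in parallel could yield a speed-up. Thresholds are illustrative."""
    points = re.split(r"\n\s*\d+\.\s+", "\n" + answer)[1:]
    return len(points) >= 2 and all(len(p.strip()) >= min_point_chars for p in points)
```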
2307.15818 | 79 | • R3M: R3M Nair et al. (2022b) is a similar method to VC-1 in that R3M uses pre-trained visual-language representations to improve policy training. In this case the authors use the Ego4D dataset Grauman et al. (2022) of human activities to learn the representation that is used by the policy. Both VC-1 and R3M test different state-of-the-art representation learning methods as an alternative to using a VLM. To obtain a language-conditioned policy from the R3M pretrained representation, we follow the same procedure as described above for VC-1, except we use the R3M ResNet50 model to obtain the image tokens, and unfreeze it during training.
• MOO: MOO Stone et al. (2023) is an object-centric approach, where a VLM is first used to specify the object of interest in the form of a single colored pixel in the original image. This pixel-modified image is then trained with an end-to-end policy to accomplish a set of manipulation tasks. This baseline corresponds to a situation where a VLM is used as a separate module that enhances perception but its representations are not used for policy learning.
# D. VLMs for RT-2 | 2307.15818#79 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15337 | 80 | # C.2.2 TRAINING DETAILS
We use roberta-base with 120M parameters as the router model. The finetuning is conducted using the AdamW optimizer (Loshchilov & Hutter, 2019) with a weight decay of 0.01. The learning rate undergoes a warm-up phase during the first 1% of iterations to 5e-5 and then decays linearly. We train the model for 2 epochs using a batch size of 32. Input sequences are either padded or truncated to achieve a consistent length of 512 tokens.
In the application of SoT, false positives (SoT is incorrectly triggered when it should not be, resulting in degraded answer quality) are of more significant concern than false negatives (the router misses a potential SoT trigger, resulting in a reduced speed-up). Thus, to mitigate false positives, we employ the Tversky loss (Wang et al., 2023b) with parameters α = 0.7 and β = 0.3, which penalizes false positives more heavily than false negatives. We also incorporate label smoothing (Szegedy et al., 2016) with a factor of ϵ = 0.2. Overall, the entire fine-tuning process is efficient, completing in 2 minutes on an NVIDIA A100 GPU.
C.3 ROUTER CONSISTENCY | 2307.15337#80 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
2307.15818 | 80 | # D. VLMs for RT-2
The PaLI-X model architecture consists of a ViT-22B Dehghani et al. (2023) to process images, which can accept sequences of n images, leading to n × k tokens per image, where k is the number of patches per image. The image tokens, after passing through a projection layer, are then consumed by an encoder-decoder backbone of 32B parameters and 50 layers, similar to UL2 Tay et al. (2023), which jointly processes text and images as embeddings to generate output tokens in an auto-regressive manner. The text
input usually consists of the type of task and any additional context (e.g., "Generate caption in ⟨lang⟩" for captioning tasks or "Answer in ⟨lang⟩: question" for VQA tasks).
The PaLI-3B model trained on Language-Table (Table 1) uses a smaller ViT-G/14 (Zhai et al., 2022) (2B parameters) to process images, and UL2-3B (Tay et al., 2023) for the encoder-decoder network. | 2307.15818#80 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 81 | C.3 ROUTER CONSISTENCY
We present the confusion matrices for the three routers to illustrate their consistency. The results on Vicuna-80 and WizardLM are shown in Tables 3 and 4, respectively.
On Vicuna-80, we can observe a notable level of agreement among the three routers. Compared with the GPT-4-prompting router, the trained router exhibits a slightly higher number of false negatives w.r.t. the human annotations. Conversely, on WizardLM, given the intricate answer structure and the presence of many ambiguous cases, the routers show significant discrepancies. Specifically, the GPT-4 router produces many false positives, which adversely affect the answer quality (see App. I.2). The RoBERTa router aligns more closely with the human annotations.
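As a concrete illustration of how such an agreement/confusion matrix is tallied (the labels below are placeholders, not the actual annotations):

```python
import numpy as np

def confusion_matrix(human, router):
    """Rows: human label (0 = no SoT, 1 = SoT); columns: router decision.
    False positives (human 0, router 1) hurt answer quality, while false
    negatives (human 1, router 0) only cost speed-up."""
    m = np.zeros((2, 2), dtype=int)
    for h, r in zip(human, router):
        m[h, r] += 1
    return m

# toy example with placeholder decisions
print(confusion_matrix(human=[1, 0, 1, 0, 1], router=[1, 0, 0, 1, 1]))
```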
C.4 CONCURRENT EXECUTION FOR SOT-R
In SoT-R, the router serves as an additional stage that extends the two-stage SoT pipeline. The SoT-R pipeline is illustrated in Fig. 9. To push the limit of latency optimization, we can run the router, normal generation, and SoT generation concurrently. Once the router makes a decision, one of the normal and SoT generation processes can be aborted. However, this approach will increase
# Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 2307.15337#81 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 81 | The PaLM-E model is based on a decoder-only LLM that projects robot data such as images and text into the language token space and outputs text such as high-level plans. In the case of the used PaLM-E-12B, the visual model used to project images to the language embedding space is a ViT-4B Chen et al. (2023b). The concatenation of continuous variables to textual input allows PaLM-E to be fully multimodal, accepting a wide variety of inputs such as multiple sensor modalities, object-centric representations, scene representations and object entity referrals.
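To make the idea of mapping continuous inputs into the language token space concrete, a toy sketch is shown below; the class name and dimensions are illustrative assumptions, not the actual PaLM-E implementation.

```python
import torch
import torch.nn as nn

class MultimodalPrefix(nn.Module):
    """Toy sketch: features from a vision encoder are projected into the
    language model's token-embedding space and concatenated with text token
    embeddings, so the decoder-only LLM consumes one mixed sequence."""
    def __init__(self, vit_dim=1408, lm_dim=4096, vocab_size=32_000):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, lm_dim)
        self.img_proj = nn.Linear(vit_dim, lm_dim)  # projection into the LM embedding space

    def forward(self, image_feats, text_ids):
        # image_feats: (batch, num_patches, vit_dim); text_ids: (batch, seq_len)
        img_tokens = self.img_proj(image_feats)
        txt_tokens = self.text_embed(text_ids)
        return torch.cat([img_tokens, txt_tokens], dim=1)
```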
# E. Training Details | 2307.15818#81 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 82 | 21
Figure 9: Left: The SoT-R pipeline. Right: A possible approach to further reduce latency at the cost of token overhead.
the token overhead. Therefore, we did not employ this approach in this work and leave it to future work.
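A minimal sketch of this concurrent variant (Fig. 9, right) is given below; `router`, `normal_decode`, and `sot_decode` are assumed to be async callables and are placeholders, not part of the released pipeline.

```python
import asyncio

async def sot_r_concurrent(question, router, normal_decode, sot_decode):
    """Start both generation modes immediately and abort the losing one once
    the router decides; the aborted branch is the extra token overhead."""
    normal_task = asyncio.create_task(normal_decode(question))
    sot_task = asyncio.create_task(sot_decode(question))
    use_sot = await router(question)
    winner, loser = (sot_task, normal_task) if use_sot else (normal_task, sot_task)
    loser.cancel()
    return await winner
```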
# D RELATED WORK (EXPANDED)
D.1 EFFICIENT LLMS
Extensive research has been dedicated to enhancing the throughput and latency of LLM inference. We first discuss model-level architecture design or compression techniques. These techniques change the model and can benefit both the latency and throughput but require finetuning to retain the model quality. Then, we discuss system-level efforts that optimize the computational graph or the assignment and scheduling of the computational graph on computation and storage devices. Most system-level efforts accelerate the prefilling phase or focus on improving the throughput. Finally, we discuss some research efforts that share a similar motivation to ours, namely, addressing the efficiency issue of sequential decoding. | 2307.15337#82 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 82 | # E. Training Details
We perform co-fine-tuning on pre-trained models from the PaLI-X (Chen et al., 2023a) 5B & 55B model, PaLI (Chen et al., 2023b) 3B model and the PaLM-E (Driess et al., 2023) 12B model. For RT-2-PaLI-X-55B, we use learning rate 1e-3 and batch size 2048 and co-fine-tune the model for 80K gradient steps whereas for RT-2-PaLI-X-5B, we use the same learning rate and batch size and co-fine-tune the model for 270K gradient steps. For RT-2-PaLM-E-12B, we use learning rate 4e-4 and batch size 512 to co-fine-tune the model for 1M gradient steps. Both models are trained with the next token prediction objective, which corresponds to the behavior cloning loss in robot learning. For RT-2-PaLI-3B model used for Language-Table results in Table 1, we use learning rate 1e-3 and batch size 128 to co-fine-tune the model for 300K gradient steps.
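For reference, the hyperparameters above can be summarized as a configuration sketch; the dictionary and function below are illustrative assumptions, not the actual training configuration or code.

```python
import torch.nn.functional as F

# Illustrative summary of the co-fine-tuning hyperparameters described above.
CO_FINE_TUNING = {
    "RT-2-PaLI-X-55B":            {"lr": 1e-3, "batch_size": 2048, "steps": 80_000},
    "RT-2-PaLI-X-5B":             {"lr": 1e-3, "batch_size": 2048, "steps": 270_000},
    "RT-2-PaLM-E-12B":            {"lr": 4e-4, "batch_size": 512,  "steps": 1_000_000},
    "RT-2-PaLI-3B-LanguageTable": {"lr": 1e-3, "batch_size": 128,  "steps": 300_000},
}

def next_token_loss(logits, token_ids):
    """Next-token prediction over mixed web and robot-action token sequences;
    on robot data this coincides with the behavior cloning loss."""
    # logits: (batch, seq_len, vocab); token_ids: (batch, seq_len)
    return F.cross_entropy(logits[:, :-1].flatten(0, 1), token_ids[:, 1:].flatten())
```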
# F. Evaluation Details
# F.1. Evaluation Scenarios | 2307.15818#82 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 83 | Model-level optimization. Considerable architectural design efforts have emerged to (1) improve the scalability w.r.t. model size by introducing mixture-of-expert inference (Lepikhin et al., 2021; Fedus et al., 2022), (2) address the quadratic complexity w.r.t. input size of attention by designing new attention mechanisms (Kitaev et al., 2020; Wang et al., 2020), (3) reduce the memory access and footprint of attention by using multi-query attention (Shazeer, 2019), and so on. However, these methods usually require a substantial re-training cost. The model compression techniques require a smaller amount of fine-tuning by reducing the model complexity of a pre-trained LLM from certain aspects (Ganesh et al., 2021). Representative techniques include quantization (Xiao et al., 2022; Frantar et al., 2022; Lin et al., 2023), the static or dynamic pruning of weights, activation, and attention (Mishra et al., 2021; Zaheer et al., 2020; Wang et al., 2021; Chen et al., 2023b), and so on. | 2307.15337#83 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 83 | # F. Evaluation Details
# F.1. Evaluation Scenarios
For studying the emergent capabilities of RT-2 in a quantitative manner, we study various challenging semantic evaluation scenarios that aim to measure capabilities such as reasoning, symbol understanding, and human recognition. A visual overview of a subset of these scenes is provided in Figure 8, and the full list of instructions used for quantitative evaluation is shown in Table 3.
# F.2. Evaluation Instructions
Table 2 lists natural language instructions used in model evaluations for unseen objects, backgrounds, and environments. Each instruction was run between 1 and 5 times, depending on the number of total instructions in that evaluation set. Table 3 lists natural language instructions used for the quantitative emergent evaluations. Each instruction was run 5 times.
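As a sketch of how these per-instruction trials aggregate into success rates per task group (the field names below are placeholders, not the actual evaluation harness):

```python
from collections import defaultdict

def success_rates(trials):
    """trials: iterable of (task_group, instruction, success) tuples, one per
    rollout (1-5 rollouts per instruction for the generalization evaluations,
    5 per instruction for the emergent evaluations)."""
    totals, successes = defaultdict(int), defaultdict(int)
    for group, _, success in trials:
        totals[group] += 1
        successes[group] += int(success)
    return {group: successes[group] / totals[group] for group in totals}
```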
[Figure 8: example scenes for the quantitative emergent evaluations, with panels (a) Reasoning, (b) Symbol Understanding, and (c) Human Recognition.] | 2307.15818#83 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 84 | Zooming out from LLM compression to the whole field of model compression, we can see that model co-design or compression for efficiency has received tremendous attention in the past few years and has grown into large research fields, such as pruning (Han et al., 2015; Wen et al., 2016), quantization (Krishnamoorthi, 2018), factorization (Denton et al., 2014), and neural architecture search (Zoph & Le, 2017; Elsken et al., 2019; Cai et al., 2019). Different from the model co-design paradigm, SoT is in a âcontent co-organization for efficiencyâ paradigm for improving the LLM efficiency. Along with the growth in the LLM capabilities and amount of LLM-generated data, data-level techniques could become important tools in the efficient LLM toolbox. | 2307.15337#84 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 85 | System-level optimization. In the realm of lossless acceleration, considerable efforts have been devoted to addressing the I/O-bound nature of LLMs on modern hardware platforms (Dao et al., 2022). Numerous studies (Dao et al., 2022; Zhai et al., 2022; Ivanov et al., 2021; NVIDIA, 2019) have focused on adjusting the computational graph by fusing and implementing operations in an I/O-friendly way. As a representative method, FlashAttention (Dao et al., 2022) fuses all operations of one attention into one GPU kernel with spatially tiled computation to reduce the off-chip I/O of the attention map. While FlashAttention can effectively accelerate training and the prefilling phase of inference, it cannot accelerate the decoding phase much (when the batch size is small), as it is the I/O of weights rather than activation or attention map that bottlenecks the decoding phase. For example, when the context length is 64, decoding one token using LLaMA-7B needs to load each
of the 7B parameters from the off-chip HBM onto the GPU chip at least once, but only transferring about 20M (0.02B) activation values between the off-chip HBM and GPU chip. | 2307.15337#85 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 85 | Task Group | Tasks
Symbol Understanding: Symbol 1 | move coke can near X, move coke can near 3, move coke can near Y
Symbol Understanding: Symbol 2 | move apple to tree, move apple to duck, move apple to apple, move apple to matching card
Symbol Understanding: Symbol 3 | put coke can close to dog, push coke can on top of heart, place coke can above star
Reasoning: Math | move banana to 2, move banna near the sum of two plus one, move banana near the answer of three times two, move banana near the smallest number
Reasoning: Logos | move cup to google, move cup to android, move cup to youtube, move cup to a search engine, move cup to a phone
Reasoning: Nutrition | get me a healthy snack, pick a healthy drink, pick up a sweet drink, move the healthy snack to the healthy drink, pick up a salty snack
Reasoning: Color and Multilingual | move apple to cup with same color, move apple to cup with different color, move green chips to matching color cup, move apple to vaso verde, Bewegen Sie den Apfel in die rote Tasse, move green chips to vaso rojo, mueve la manzana al vaso verde, déplacer les | 2307.15818#85 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 86 | In order to satisfy Service Level Objectives, serving systems focus on improving the serving throughput under latency constraints. To this end, serving systems (Fang et al., 2021; NVIDIA, 2021; Google, 2021) pack multiple queries together into a batch to improve the hardware utiliza- tion. The batching technique has proven highly effective in enhancing throughput, leading to the development of various variants. For example, some work designs methods to decide which queries to batch together (Fang et al., 2021; Zhou et al., 2022), while others selectively batch parts of the model to enable fine-grained iteration-level batching (Yu et al., 2022) or multi-task batching (Zhou et al., 2022). Various model parallelism (Lu et al., 2017; Huang et al., 2019; Narayanan et al., 2019; Rajbhandari et al., 2020; Narayanan et al., 2021; Li et al., 2021; Zheng et al., 2022) and offloading (Ren et al., 2021; Sheng et al., 2023) techniques have been proposed to maximize the throughput of LLM training or inference. In a nutshell, given the computational graph and device configurations, these techniques optimize | 2307.15337#86 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 86 | Sie den Apfel in die rote Tasse, move green chips to vaso rojo, mueve la manzana al vaso verde, déplacer les frites verts dans la tasse rouge
Person Recognition: Celebrities | move coke can to taylor swift, move coke can to tom cruise, move coke can to snoop dog
Person Recognition: CelebA | move coke can to person with glasses, move coke can to the man with white hair, move coke can to the brunette lady | 2307.15818#86 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 87 | have been proposed to maximize the throughput of LLM training or inference. In a nutshell, given the computational graph and device configurations, these techniques optimize the split, assignment, and scheduling of computations, storage, and communications on devices. In addition to the model parallelism and batching tech- niques, an efficient memory management mechanism for LLM workloads is also an essential feature in the serving systems (Kwon et al., 2023; SenseTime, 2023a;b). | 2307.15337#87 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 87 | Table 3 | Natural language instructions used for quantitative emergent evaluations.
# G. Example Failure Cases
In Fig. 9 we provide examples of a notable type of failure case in the Language Table setting, with the RT-2 model not generalizing to unseen object dynamics. In these cases, although the model is able to correctly attend to the language instruction and move to the first correct object, it is not able to control the challenging dynamics of these objects, which are significantly different than the small set of block objects that have been seen in this environment Lynch et al. (2022). The pen simply rolls off the table (Fig. 9, left), while the banana's center-of-mass is far from where the robot makes contact (Fig. 9, right). We note that pushing dynamics are notoriously difficult to predict and control Yu et al. (2016). We hypothesize that greater generalization in robot-environment interaction dynamics may be possible by further scaling the datasets across diverse environments and objects, for example, in this case, datasets that include similar types of more diverse pushing dynamics Dasari et al. (2019). | 2307.15818#87 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 88 | To sum up, these system-level techniques mainly improve throughput in training and batched inference. SoT can use them to speed up the batched decoding of multiple segments. This means that SoT can harness the power of these throughput-oriented techniques and turn them into end-to-end latency reductions, offering a new dimension for trading off latency and throughput in future serving systems.
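Concretely, the batched decoding of multiple segments can be pictured with a short sketch. The snippet below is only an illustration of the idea, not the paper's implementation: `call_llm` is a placeholder for any completion API or a local batched decoder, and the prompt wording is hypothetical.

```python
# Minimal sketch of SoT-style parallel point expansion (illustration only).
# `call_llm` stands in for a chat/completion API or a local batched decoder;
# the prompt wording below is hypothetical, not the paper's exact prompts.
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM API or a local batched decoder")

def skeleton_of_thought(question: str, max_points: int = 5) -> str:
    # Stage 1: request a short skeleton (numbered points of a few words each).
    skeleton = call_llm(
        f"Give a skeleton of at most {max_points} short numbered points "
        f"for answering: {question}"
    )
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand every point concurrently. With an API this means parallel
    # calls; with a local model it would be one batched decoding pass instead.
    def expand(point: str) -> str:
        return call_llm(
            f"Question: {question}\nSkeleton:\n{skeleton}\n"
            f"Write 1-2 sentences expanding only this point: {point}"
        )

    with ThreadPoolExecutor(max_workers=max(1, len(points))) as pool:
        expansions = pool.map(expand, points)
    return "\n".join(expansions)
```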
Another parallelism perspective to position SoT is that SoT guides the LLM to adjust the sequential workload to become "inter-content" parallelizable, which differs from the parallelism levels in existing serving systems, including inter-instance (Krizhevsky, 2014; Rajbhandari et al., 2020), inter-operation (Huang et al., 2019; Narayanan et al., 2019; 2021), intra-operation (Xu et al., 2021), and inter-token (Li et al., 2021). It may be worthwhile to explore the integration of SoT into serving systems to maximize the hardware utilization. | 2307.15337#88 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 88 | In addition, despite RT-2's promising performance on real-world manipulation tasks in qualitative and quantitative emergent evaluations, we still find numerous notable failure cases. For example, with the current training dataset composition and training method, RT-2 seemed to perform poorly at:
• Grasping objects by specific parts, such as the handle
• Novel motions beyond what was seen in the robot data, such as wiping with a towel or tool use
• Dexterous or precise motions, such as folding a towel
• Extended reasoning requiring multiple layers of indirection
[Figure 9 panels: "Push the red marker to the video game controller"; "Push the banana to the apple"]
Figure 9 | Qualitative examples of real-world failure cases: the model fails to generalize to unseen object dynamics.
# H. Quantitative Experimental Results
# H.1. Overall Performance, for Section 4.1
Table 4 lists our quantitative overall evaluation results. We find that RT-2 performs as well as or better than baselines on seen tasks and significantly outperforms baselines on generalization to unseen objects, backgrounds, and environments.
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 89 | Decoding optimization. One bottleneck for the end-to-end latency lies in the autoregressive decoding phase, where tokens must be generated one by one. Due to the dependency between tokens, the computation of different tokens cannot be parallelized, causing severe under-utilization of the GPU. In order to improve the end-to-end decoding latency of a given LLM, speculative decoding methods (Stern et al., 2018; Leviathan et al., 2022; Chen et al., 2023a; Gante, 2023; Sun et al., 2023; Miao et al., 2023) propose to use cheaper approaches to generate short candidate token sequences, for example, by sequentially decoding with an assisting model much smaller than the given LLM. Then, they use the LLM to verify the candidates in parallel and keep the prefix sequence that matches the LLM's verification results.
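As a minimal sketch of this draft-and-verify scheme (greedy decoding, illustration only): `draft_next` and `target_next` below are placeholder callables, and a real implementation would score all drafted positions in a single batched forward pass rather than a Python loop.

```python
# Minimal greedy speculative decoding sketch (illustrative, not any specific
# library's API): a small draft model proposes k tokens, the large target model
# checks them, and the longest agreeing prefix is kept.
from typing import Callable, List

def speculative_decode(
    target_next: Callable[[List[int]], int],  # greedy next token of the large target LLM
    draft_next: Callable[[List[int]], int],   # greedy next token of the small assisting model
    prompt: List[int],
    max_new_tokens: int = 64,
    k: int = 4,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # The cheap draft model proposes k candidate tokens sequentially.
        draft: List[int] = []
        for _ in range(k):
            draft.append(draft_next(tokens + draft))
        # The target LLM verifies the candidates; written as a loop here, but in
        # practice all k positions are scored in one parallel forward pass.
        accepted: List[int] = []
        for tok in draft:
            expected = target_next(tokens + accepted)
            if tok == expected:
                accepted.append(tok)        # candidate matches the target's choice
            else:
                accepted.append(expected)   # fall back to the target's own token
                break
        else:
            accepted.append(target_next(tokens + accepted))  # free extra token
        tokens.extend(accepted)
    return tokens[: len(prompt) + max_new_tokens]
```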
Another line of work that shares the motivation of addressing the autoregressive efficiency issue is non-autoregressive generation (NAG) methods (Gu et al., 2018; Xiao et al., 2023). NAG methods sample consecutive tokens in parallel, often with the aid of a modified and tuned model. To maintain the answer quality, instead of sampling for one iteration, many NAG methods refine the output in parallel for multiple iterations (Xiao et al., 2023; Santilli et al., 2023). | 2307.15337#89 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 89 |
Model | Seen Tasks | Unseen Objects (Easy) | Unseen Objects (Hard) | Unseen Backgrounds (Easy) | Unseen Backgrounds (Hard) | Unseen Environments (Easy) | Unseen Environments (Hard) | Unseen Average
R3M (Nair et al., 2022b) | 45 | 32 | 14 | 13 | 9 | 0 | 2 | 12
VC-1 (Majumdar et al., 2023a) | 63 | 34 | 10 | 13 | 3 | 0 | 0 | 10
RT-1 (Brohan et al., 2022) | 92 | 31 | 43 | 71 | 9 | 26 | 14 | 32
MOO (Stone et al., 2023) | 75 | 58 | 48 | 38 | 41 | 19 | 3 | 35
RT-2-PaLI-X-55B (ours) | 91 | 70 | 62 | 96 | 48 | 63 | 35 | 62
RT-2-PaLM-E-12B (ours)¹ | 93 | 84 | 76 | 75 | 71 | 36 | 33 | 62
Table 4 | Overall performance of two instantiations of RT-2 and baselines across seen training tasks as well as unseen evaluations measuring generalization to novel objects, novel backgrounds, and novel environments.
# H.2. Emergent Evaluation, for Section 4.2
Table 5 lists all of our quantitative emergent evaluation results. We find that RT-2 performs 2x to 3x better than RT-1 on these new instructions, without any additional robotic demonstrations. This showcases how our method allows us to leverage capabilities from pretraining on web-scale vision-language datasets. | 2307.15818#89 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 90 | To summarize, the speculative decoding methods use assisting models to let the LLM conduct parallel verification of consecutive tokens, and the NAG methods rely on specially designed models, training schemes, or sampling schemes for the parallel sampling and refinement of consecutive tokens. In contrast, SoT prompts the LLM itself to plan the contents in a way that permits the parallel generation of multiple tokens in different segments. SoT exploits the emerging instruction-following and planning ability of SoTA LLMs rather than relying on specially designed modeling, sampling, and training schemes. This is different from all existing work that targets the autoregressive efficiency issue.
D.2 PROMPTING METHODS FOR LLMS
In recent years, the "pre-train, prompt, and predict" paradigm has emerged (Liu et al., 2023), which designs prompts comprising task descriptions and (optionally) a few demonstrations to guide pre-
Table 5: The latency and average GPU performance of the prefilling and decoding phases when inferencing LLMs. The prefilling token length is 128, the decoding token length is 64, and the batch size is 1. The test is run on one NVIDIA A100 GPU. | 2307.15337#90 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 90 |
Model | Symbol 1 | Symbol 2 | Symbol 3 | Symbol Understanding Avg | Math | Logos | Nutrition | Color/Multilingual | Reasoning Avg | Celebrities | CelebA | Person Recognition Avg | Average
VC-1 (Majumdar et al., 2023a) | 7 | 25 | 0 | 11 | 0 | 8 | 20 | 13 | 10 | 20 | 7 | 13 | 11
RT-1 (Brohan et al., 2022) | 27 | 20 | 0 | 16 | 5 | 0 | 32 | 28 | 16 | 20 | 20 | 20 | 17
RT-2-PaLI-X-55B (ours) | 93 | 60 | 93 | 82 | 25 | 52 | 48 | 58 | 46 | 53 | 53 | 53 | 60
RT-2-PaLM-E-12B (ours) | 67 | 20 | 20 | 36 | 35 | 56 | 44 | 35 | 43 | 33 | 53 | 43 | 40
Table 5 | Performance of RT-2 and baselines on quantitative emergent evaluations.
# H.3. Size and Training Ablations, for Section 4.3
Table 6 details quantitative results for ablations across model size and training approach. Across each, we see that model size plays an important role in performance and that co-fine-tuning outperforms fine-tuning, which outperforms training from scratch. | 2307.15818#90 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 91 |
Model | Prefill / Decode Latency (ms) | Prefill / Decode GPU Performance (TFLOPS)
LLaMA-7B | 40 / 2735 | 43 / 0.31
LLaMA-13B | 54 / 3725 | 62 / 0.44
LLaMA-33B | 100 / 5506 | 85 / 0.75
trained LLMs in generating answers for a wide range of downstream tasks. Researchers found that instruction-tuned LLMs (Brown et al., 2020; Wei et al., 2021; Ouyang et al., 2022; Chung et al., 2022; Taori et al., 2023) possess a strong ability to (1) generalize to new tasks thanks to the diverse natural language descriptions encountered during instruction tuning, and (2) learn in-context using a few demonstrations without weight tuning. | 2307.15337#91 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 91 |
Model | Size | Training | Unseen Objects (Easy) | Unseen Objects (Hard) | Unseen Backgrounds (Easy) | Unseen Backgrounds (Hard) | Unseen Environments (Easy) | Unseen Environments (Hard) | Average
RT-2-PaLI-X | 5B | from scratch | 0 | 10 | 46 | 0 | 0 | 0 | 9
RT-2-PaLI-X | 5B | fine-tuning | 24 | 38 | 79 | 50 | 36 | 23 | 42
RT-2-PaLI-X | 5B | co-fine-tuning | 60 | 38 | 67 | 29 | 44 | 24 | 44
RT-2-PaLI-X | 55B | fine-tuning | 60 | 62 | 75 | 38 | 57 | 19 | 52
RT-2-PaLI-X | 55B | co-fine-tuning | 70 | 62 | 96 | 48 | 63 | 35 | 63
Table 6 | Ablations of RT-2 showcasing the impact of parameter count and training strategy on generalization.
# I. Additional Chain-Of-Thought Reasoning Results
We present additional examples of chain-of-thought reasoning rollouts accomplished with RT-2-PaLM-E, as described in Sec. 4.4, in Figure 10.
¹ The original pre-training data mixture used in PaLM-E-12B (as described in Driess et al. (2023)) includes robot images for high-level VQA planning tasks that can be similar to images encountered in generalization scenarios. However, none of those training examples include low-level actions that are evaluated in this experiment.
| 2307.15818#91 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 92 | In virtue of these abilities, the field has been manually engineering (Brown et al., 2020; Kojima et al., 2022; Shen et al., 2023; Li et al., 2023a), automatically searching (Shin et al., 2020), or continuously tuning (Li & Liang, 2021; Lester et al., 2021) the prompts for uncovering the capabilities of LLMs on downstream tasks. There are a number of prompting methods that improve the reasoning performance of LLMs by designing thinking flows mimicking human reasoning: (1) mimicking the step-by-step or compositional thinking structure (Wei et al., 2022; Kojima et al., 2022; Press et al., 2022; Yao et al., 2023; Besta et al., 2023; Zhang et al., 2023), (2) designing multiple reasoning paths and their aggregation (Wang et al., 2022; Yao et al., 2023; Li et al., 2023c), and (3) using tools for calculation and information retrieval (Chen et al., 2022; Yao et al., 2022; Schick et al., 2023). As a representative example, the Chain-of-Thought prompts largely improve the | 2307.15337#92 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 92 |
[Figure 10 panels, instruction and predicted plan:]
User: Pick up the object that is different from all other objects. Plan: Pick rxbar chocolate
User: Move the green objects together. Plan: Move green can near green rice chip bag
User: Move the blue objects together. Plan: Move blue chip bag near pepsi
User: I am sleepy, bring me a drink to help. Plan: Pick redbull can
Figure 10 | Additional examples of RT-2 with chain-of-thought reasoning
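To make the rollout format above concrete, the sketch below shows one way such an instruction/plan/action record could be serialized for a vision-language-action policy. It is purely illustrative: the field names and the discretized action-token values are hypothetical, not RT-2's actual data format.

```python
# Purely illustrative sketch of a chain-of-thought rollout record for a
# vision-language-action policy. Field names and the action-token values are
# hypothetical; this is NOT RT-2's actual format.
rollout = {
    "instruction": "I am sleepy, bring me a drink to help.",
    "plan": "Pick redbull can",                      # intermediate natural-language step
    "action_tokens": "1 128 91 241 5 101 127 217",   # made-up token string standing in for
                                                      # an action expressed as text tokens
}

def to_training_string(example: dict) -> str:
    """Flatten a rollout into a single text sequence (illustrative only)."""
    return (
        f"Instruction: {example['instruction']} "
        f"Plan: {example['plan']} "
        f"Action: {example['action_tokens']}"
    )

print(to_training_string(rollout))
```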
| 2307.15818#92 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 93 | 2022; Yao et al., 2022; Schick et al., 2023). As a representative example, the Chain-of-Thought prompts largely improve the performance on tasks that require logical reasoning by simply providing a "Let's think step by step" (Kojima et al., 2022) instruction or a few demonstrations (Wei et al., 2022). Another topic that has attracted a surge of interest is prompting LLMs to help finish complex multi-modality tasks (Shen et al., 2023; Zhu et al., 2023). For example, HuggingGPT (Shen et al., 2023) designs prompts to guide the LLM to generate structured JSON for the orchestration of multi-model execution to finish complex tasks. | 2307.15337#93 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 93 |
Task Group | Tasks
Unseen Objects (Easy) | pick banana, move banana near coke can, move orange can near banana, pick oreo, move oreo near apple, move redbull can near oreo, pick pear, pick coconut water, move pear near coconut water, move pepsi can near pear
Unseen Objects (Hard) | pick cold brew can, pick large orange plate, pick chew toy, pick large tennis ball, pick bird ornament, pick fish toy, pick ginger lemon kombucha, pick egg separator, pick wrist watch, pick green sprite can, pick blue microfiber cloth, pick yellow pear, pick pretzel chip bag, pick disinfectant wipes, pick pineapple hint water, pick green cup, pick pickle snack, pick small blue plate, pick small orange rolling pin, pick octopus toy, pick catnip toy
Unseen Backgrounds (Easy) | pick green jalapeno chip bag, pick orange can, pick pepsi can, pick 7up can, pick apple, pick blue chip bag, pick orange, pick 7up can, move orange near sink, pick coke can, pick sponge, pick rxbar blueberry
Unseen Backgrounds (Hard) | pick wrist watch, pick egg separator, pick green sprite can, pick blue | 2307.15818#93 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 94 | To summarize, the large literature on prompting methods has been aiming at uncovering different capabilities of LLMs and improving the answer quality on different downstream tasks. In contrast, SoT is a first attempt at exploiting the power of prompting to improve efficiency.
# E EFFICIENCY ANALYSIS
This section gives a detailed explanation of why SoT can reduce the overall decoding latency with the same computational resource for local models.
The vanilla approach processes only one question and decodes the answers sequentially, whereas SoT processes multiple point-expanding requests and the answers in a batch. We focus on the following question: "Compared to processing only one sequence, how much peak memory overhead and latency increase will be brought by processing a batch of sequences?" | 2307.15337#94 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 94 | pick coke can, pick sponge, pick rxbar blueberry Unseen Back- grounds (Hard) pick wrist watch, pick egg separator, pick green sprite can, pick blue microfiber cloth, pick yellow pear, pick pretzel chip bag, pick disinfectant wipes, pick pineapple hint water, pick green cup, pick pickle snack, pick small blue plate, pick small orange rolling pin, pick octopus toy, pick catnip toy, pick swedish fish bag, pick large green rolling pin, pick black sunglasses Unseen Environ- ments (Easy) pick coke can, pick apple, pick rxbar blueberry, move apple near coke can, move rxbar blueberry near apple, move coke can near rxbar blueberry, pick blue plastic bottle, pick sponge, pick blue chip bag, move sponge near blue plastic bottle, move blue chip bag near sponge, move blue plastic bottle near blue chip bag, move coke can near white mug, move sponge near white mug, move coke can near yellow bowl, move sponge near yellow bowl, move coke can near green cloth, move sponge near green cloth, move coke can near plate, move sponge near plate, move coke can near spoon, move sponge near spoon, move coke can near orange cup, move | 2307.15818#94 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 95 | A typical LLM generative process consists of two phases: (1) the prefilling phase, in which the prompt is parsed to generate the key-value cache for later use, and (2) the decoding phase, in which tokens are generated one by one in a sequential manner. The decoding phase accounts for the majority of the end-to-end latency, especially when generating a long response. As shown in Table 5, when running Vicuna-7B on an NVIDIA A100-80G, the actual computing performance is only 0.31 TFLOPS (0.1% utilization) in the decoding phase, compared to 43 TFLOPS (13.8% utilization) during prefilling. The utilization is calculated with respect to the FP16 tensor core peak performance (see footnote 5), i.e., 312 TFLOPS for the NVIDIA A100. As a result, the latency of decoding only one token is comparable to that of prefilling 128 tokens (40 ms). This huge gap in actual computing performance, and hence in latency, arises from the fact that all LLM weights need to be loaded onto the GPU chip at least once just to decode a single token, so decoding is heavily bottlenecked by the I/O of weights and the GPU computation units cannot be well utilized.
5. All of our experiments are run with FP16 inference.
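A minimal sketch of the utilization arithmetic above (the 2 * n_params FLOPs-per-token rule of thumb, the parameter count, and the 40 ms per-token latency are illustrative assumptions, not values taken from Table 5):

```python
# Minimal sketch of the decoding-phase utilization arithmetic (illustrative
# assumptions: ~2 * n_params FLOPs per generated token, 40 ms per decoding step).
PEAK_FP16_TFLOPS = 312            # NVIDIA A100 tensor-core peak
N_PARAMS = 7e9                    # Vicuna-7B parameter count
FLOPS_PER_TOKEN = 2 * N_PARAMS    # rough forward-pass FLOPs for one token

def decode_utilization(latency_per_token_s: float) -> float:
    achieved_tflops = FLOPS_PER_TOKEN / latency_per_token_s / 1e12
    return achieved_tflops / PEAK_FP16_TFLOPS

# Decoding one token in ~40 ms keeps the GPU around 0.1% busy, which is why
# decoding one token costs about as much wall-clock time as prefilling 128 tokens.
print(f"decode utilization ~ {decode_utilization(0.040):.2%}")
```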
# Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 2307.15337#95 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 95 | green cloth, move coke can near plate, move sponge near plate, move coke can near spoon, move sponge near spoon, move coke can near orange cup, move sponge near orange cup, pick white mug, pick yellow bowl, pick green cloth, move white mug near sponge, move yellow bowl near sponge, move green cloth near sponge, pick plate, pick spoon, pick orange cup, move plate near sponge, move spoon near sponge, move orange cup near sponge, put coke can into sink, drop coke can into sink, push coke can into sink, put sponge into sink, drop sponge into sink, push sponge into sink, put green cloth into sink, drop green cloth into sink, push green cloth into sink Unseen Environ- ments (Hard) | 2307.15818#95 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 96 | 5. All of our experiments are run with FP16 inference.
(a) Latency (ms) (b) Actual GPU Perf. (TFLOPS) (c) Peak Memory (GB)
Figure 10: The trends of latency, average GPU performance of decoding one token, and peak memory with respect to the batch size B of sequences. The prefilling token length is 128, and the decoding token length is 64. The test is run on one NVIDIA A100 GPU. | 2307.15337#96 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 96 | pick coke can, pick apple, pick rxbar blueberry, move apple near coke can, move rxbar blueberry near apple, move coke can near rxbar blueberry, move coke can near stapler, move apple near stapler, move coke can near keyboard, move apple near keyboard, move coke can near tissue box, move apple near tissue box, move coke can near papers, move apple near papers, move coke can near mouse, move apple near mouse, move coke can near book, move apple near book, pick marker, pick stapler, pick mouse, move marker near apple, move stapler near apple, move mouse near apple, push coke can to the left, push coke can to the right, push sponge to the left, push sponge to the right, push tissue box to the left, push tissue box to the right, point at coke can, point at sponge, point at tissue box
Table 2 | Natural language instructions used for evaluations testing controlled distribution shifts along the dimensions of novel objects, novel environments, and novel backgrounds. For each category, we introduce evaluation settings with smaller distribution shifts as well as larger distribution shifts. A visualization of these scenarios is shown in Figure 3.
26
26 | 2307.15818#96 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 97 | When conducting batched decoding, as the sequence batch size B increases, the latency of decoding one token for each sequence stays roughly the same (Fig. 10a), as the amount of LLM weights that needs to be loaded onto the chip does not change. As a result, the GPU computation utilization (actual GPU performance / peak GPU performance) increases almost linearly as B increases (Fig. 10b). In other words, for generating a final answer of length N, if we cut the answer into B segments of length N/B and decode them as a batch, we can get a B× decoding speed-up compared to sequential decoding. Nevertheless, in practice, as prefilling longer requests brings some overhead, and the lengths of the B segments could be imbalanced, the actual speed-up of the batched point-expanding stage compared with the original prefilling and sequential decoding process is smaller than B. | 2307.15337#97 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
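The batched point-expanding decoding described in chunk 2307.15337#97 above can be sketched as follows (a minimal sketch assuming a Hugging Face causal LM with left padding; the model name and prompts are placeholders, not the paper's implementation):

```python
# Minimal sketch of batched point-expanding decoding: the B point-expanding
# prompts are decoded as one batch, so the per-token cost of loading the model
# weights is amortized over B sequences instead of being paid B times.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "lmsys/vicuna-7b-v1.3"                       # placeholder model name
tok = AutoTokenizer.from_pretrained(MODEL, padding_side="left")
tok.pad_token = tok.pad_token or tok.eos_token       # ensure a pad token exists
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")

point_prompts = [f"Expand point {i} of the skeleton in 1-2 sentences." for i in range(1, 5)]
batch = tok(point_prompts, return_tensors="pt", padding=True).to(model.device)
out = model.generate(**batch, max_new_tokens=64)     # one batched decode instead of B sequential ones
print(tok.batch_decode(out[:, batch["input_ids"].shape[1]:], skip_special_tokens=True))
```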
2307.15337 | 98 | As for the peak memory overhead, the amount of LLM weights can be one to two orders of magnitude larger than that of all the intermediate activations as long as the prefilling token length is not too large, not to mention that most activations do not need to be saved for back-propagation during inference. Therefore, the LLM weights account for the majority of the memory footprint in our test cases. Consequently, as shown in Fig. 10c, the peak memory overhead due to the increasing size of the KV cache and activations grows at a slow pace as the batch size B increases. Thanks to the small peak memory overhead, in all of our experiments we managed to use a single GPU to run SoT without resorting to additional peak memory optimization techniques (e.g., quantization (Frantar et al., 2022; Lin et al., 2023), offloading (Sheng et al., 2023)).
# F EFFICIENCY PROFILING
We run the profiling on the target GPUs (NVIDIA A100-80G and NVIDIA RTX 3090) with CUDA 11.7, using the Hugging Face transformers library 4.28.1 and PyTorch 2.0.1. The host of the A100-80G has an Intel Xeon Platinum 8358P CPU and 1 TB of memory. The host of the RTX 3090 has an Intel Xeon Gold 6246R CPU and 512 GB of memory. | 2307.15337#98 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
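A back-of-the-envelope sketch of the claim in chunk 2307.15337#98 above that the weights dominate the peak memory (LLaMA-7B-like shapes are assumed for illustration; these are not measurements from the paper):

```python
# Back-of-the-envelope comparison of FP16 weight memory vs. KV-cache memory,
# illustrating why increasing the decoding batch size B adds little peak memory.
N_PARAMS = 7e9
N_LAYERS, HIDDEN = 32, 4096        # LLaMA-7B-like shapes (assumed)

def weights_gib() -> float:
    return N_PARAMS * 2 / 2**30                        # 2 bytes per FP16 weight

def kv_cache_gib(batch_size: int, seq_len: int) -> float:
    # K and V: 2 tensors * layers * tokens * hidden dim * 2 bytes, per sequence
    return 2 * N_LAYERS * seq_len * HIDDEN * 2 * batch_size / 2**30

print(f"weights: {weights_gib():.1f} GiB")                              # ~13 GiB
print(f"KV cache (B=8, 256 tokens): {kv_cache_gib(8, 256):.2f} GiB")    # ~1 GiB
```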
2307.15337 | 99 | Latency profiling and estimation. For the decoding phase, we denote t^D_B(k) as the latency of batched decoding the (k+1)-th token with batch size B, where the superscript D stands for "decode". For each batch size B = 1, ..., 16 and each context length k = 1, ..., 1024, we use torch.cuda.Event to record the latency of decoding one token. We run each decoding three times continuously and take their geometric mean as {t^D_B(k)}_{k=1,...,1024; B=1,...,16}. For the prefilling phase, we profile the latency of batched prefilling the inputs with token length k in range(1, 700, 10) and batch size B = 1, ..., 16, and denote it as t^P_B(k), where the superscript P stands for "prefill". We run each test seven times continuously, regard the first two times as warmup tests, and take the geometric mean of the last five times as {t^P_B(k)}_{k=1,11,...,691; B=1,...,16}. Once we get the latency profiling table, given a request with l_i tokens and the decoding batch size B, the latency of generating l_o tokens can be estimated as: | 2307.15337#99 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
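A minimal sketch of the torch.cuda.Event-based timing described in chunk 2307.15337#99 above (the warmup and repeat counts are parameters; `step` stands for any callable that runs one prefilling or decoding step):

```python
# Minimal sketch of timing one prefilling/decoding step with torch.cuda.Event,
# taking the geometric mean over repeated runs as in the profiling procedure.
import math
import torch

def profile_step_ms(step, warmup: int = 2, repeats: int = 5) -> float:
    for _ in range(warmup):
        step()
    times = []
    for _ in range(repeats):
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        step()
        end.record()
        torch.cuda.synchronize()
        times.append(start.elapsed_time(end))          # milliseconds
    return math.exp(sum(math.log(t) for t in times) / len(times))
```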
2307.15337 | 100 | T(l_i, l_o, B) = t^P_B(l_i) + \sum_{k=l_i}^{l_i+l_o-1} t^D_B(k),   (1)
where the subscripts i and o stand for "input" and "output". Note that we only test the prefilling latency every ten token lengths (i.e., 1, 11, 21, ...) for fast profiling, and estimate \hat{t}^P_B(l_i) from the profiled latency at the nearest tested length.
The SoT decoding process consists of two stages: the skeleton stage and the point-expanding stage. Denoting the token length of the skeleton request and skeleton response as l^s_i and l^s_o, the token length of the longest point-expanding request and the longest point-expanding response as l^pe_i and l^pe_o, and the number of points as B, we can compute the latency of the skeleton and point-expanding stages as:
L^s(l^s_i, l^s_o) = T(l^s_i, l^s_o, 1),   (2)
L^pe(l^pe_i, l^pe_o, B) = T(l^pe_i, l^pe_o, B).   (3) | 2307.15337#100 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
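A minimal sketch of the latency estimates in Eqs. (1)-(3) of chunk 2307.15337#100 above, assuming hypothetical dict-of-dict profiling tables t_P[B][k] and t_D[B][k] recorded in a common time unit:

```python
# Minimal sketch of Eqs. (1)-(3): estimating end-to-end latency from the
# profiled prefilling table t_P[B][k] and per-token decoding table t_D[B][k].
def estimated_latency(t_P, t_D, l_i: int, l_o: int, B: int) -> float:
    prefill = t_P[B][l_i]                                           # \hat{t}^P_B(l_i)
    decode = sum(t_D[B][k] for k in range(l_i, l_i + l_o))          # Eq. (1) summation
    return prefill + decode

def sot_latency(t_P, t_D, ls_i, ls_o, lpe_i, lpe_o, B) -> float:
    skeleton = estimated_latency(t_P, t_D, ls_i, ls_o, 1)           # Eq. (2), batch size 1
    point_expanding = estimated_latency(t_P, t_D, lpe_i, lpe_o, B)  # Eq. (3), batch size B
    return skeleton + point_expanding
```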
2307.15337 | 101 | L^s(l^s_i, l^s_o) = T(l^s_i, l^s_o, 1),   (2)
L^pe(l^pe_i, l^pe_o, B) = T(l^pe_i, l^pe_o, B).   (3)
Using the latency profiling table, we can further estimate the average GPU computing performance in FLOPS (i.e., FLOPs per second) of decoding l_o tokens with prefilling length l_i as
P^D(l_i, l_o, B) = \frac{\sum_{k=l_i}^{l_i+l_o-1} f^D_B(k)}{\sum_{k=l_i}^{l_i+l_o-1} t^D_B(k)},   (4)
where f^D_B(k) denotes the FLOPs of decoding one token with context length k, which is calculated by DeepSpeed's FLOPs profiler.6 Fig. 10b reports the average GPU computing performance during the process of decoding 64 tokens (prefilling length = 128), i.e., P^D(128, 64, B).
Memory. We use torch.cuda.max_memory_allocated to record the memory consumption of prefilling sequences of different lengths and decoding with different context lengths and a batch size ranging from 1 to 16. Then, we calculate the peak memory of each stage as the maximum value over the prefilling and decoding phases, and calculate the overall peak memory of SoT as the maximum value over the skeleton and point-expanding stages.
6. https://deepspeed.readthedocs.io/en/latest/flops-profiler.html
26 | 2307.15337#101 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
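A minimal sketch of Eq. (4) and the peak-memory measurement from chunk 2307.15337#101 above (f_D and t_D are the same hypothetical profiling tables as in the previous sketch; the profiled latencies are assumed to be in milliseconds):

```python
# Minimal sketch of Eq. (4) and the peak-memory measurement described above.
import torch

def avg_decode_tflops(f_D, t_D, l_i: int, l_o: int, B: int) -> float:
    ks = range(l_i, l_i + l_o)
    total_flops = sum(f_D[B][k] for k in ks)            # numerator of Eq. (4)
    total_time_s = sum(t_D[B][k] for k in ks) / 1e3     # latencies assumed in ms
    return total_flops / total_time_s / 1e12

def stage_peak_memory_gib(run_stage) -> float:
    torch.cuda.reset_peak_memory_stats()
    run_stage()                                         # one prefilling or decoding stage
    return torch.cuda.max_memory_allocated() / 2**30

# Overall SoT peak memory = max over the skeleton and point-expanding stages,
# each itself the max over its prefilling and decoding phases.
```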
2307.15337 | 102 | 6. https://deepspeed.readthedocs.io/en/latest/flops-profiler.html
# G EFFICIENCY EVALUATION
G.1 SKELETON-OF-THOUGHT
G.1.1 DETAILED STATISTICS OF TOKEN LENGTHS AND POINT NUMBERS
[Figure 11, panels (a)-(c): per-question-category and per-model statistics; plot residue omitted.]
(a) The number of points B.
(b) The normal answer length. | 2307.15337#102 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 103 | [Figure 11 panel residue omitted: per-question-category and per-model statistics.]
(c) The maximum point-expanding response length.
(d) The ratio of the maximum point-expanding response length to the normal answer length.
[Figure 11 panel residue omitted.] | 2307.15337#103 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 104 | [Figure 11 panel residue omitted.]
(e) The imbalance degree of point-expanding response lengths (standard deviation of point token lengths). (f) The ratio of the final SoT answer length to the normal answer length.
Figure 11: The statistics of the token lengths and point numbers on the Vicuna-80 dataset. Each row corresponds to one question category, and each column corresponds to one model.
# G.1.2 LATENCY BREAKDOWN: SOT STAGES AND PHASES
Fig. 12 presents the absolute latencies of normal and SoT generation on Vicuna-80. Again, the speed-up of SoT compared with normal generation is evident. We can see that the decoding phases predominantly account for the end-to-end latency. Consequently, although SoT has higher prefilling latency in the skeleton stage than normal generation and introduces additional point-expanding
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 2307.15337#104 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 105 | prefilling latency (which is expected), this has negligible impact on the overall latency and thereby the overall speed-up.
[Figure 12 plot residue omitted: per-model and per-category latency breakdown bars for normal and SoT generation, split into prefilling and decoding phases.]
(a) Average latency across all question categories except math and code on different models. (b) Average latency across all models on different question categories. | 2307.15337#105 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 106 | (a) Average latency across all question categories except math and code on different models. (b) Average latency across all models on different question categories.
Figure 12: The latency breakdown of SoT and normal generations on the Vicuna-80 dataset. For open-source models, the latency breakdown of the prefilling and decoding phases is shown in different colors. For API-based models, we do not record such latency breakdown information; the bar labeled as "(decode)" indicates the overall latency of prefilling and decoding phases.
G.1.3 EFFICIENCY EVALUATION ON NVIDIA RTX 3090
We present the SoT speed-ups and latency breakdown on RTX 3090 in Fig. 13. We test the three 7B models, as their FP16-precision versions can be run on an RTX 3090 GPU without further peak memory optimization techniques such as weight quantization (Frantar et al., 2022; Lin et al., 2023) or offloading (Sheng et al., 2023). On these three models, SoT can obtain a 1.94× to 2.40× speed-up on average on Vicuna-80. | 2307.15337#106 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 107 | For the five question categories for which SoT can provide high-quality answers (i.e., knowledge, common-sense, generic, roleplay, counterfactual), SoT speeds up the overall answer generation process by 1.96× to 2.52× at the same time. Note that for the math category, although averaging the speed-up across the three math questions gives 1.20×, SoT does not reduce the absolute latency of processing these three questions.
[Figure 13 plot residue omitted: per-model and per-category latency breakdown bars for normal and SoT generation on RTX 3090, with average speed-ups marked.] | 2307.15337#107 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 108 | Figure 13: The latency breakdown of SoT and normal decoding on the Vicuna-80 dataset. The average speed-up across questions is also marked on the figure.
# G.1.4 ACTUAL LATENCY TESTING
This section reports the actual SoT speed-up on Vicuna-80 with batch testing (instead of analysis with pre-made profiling tables), using a single NVIDIA A100 GPU. We test the actual end-to-end latency of SoT and normal decoding with the 9 open-source models. For each model, we run the speed-up test five times and plot the box in Fig. 14.
# Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 2307.15337#108 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
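A minimal sketch of the actual-latency speed-up test described in chunk 2307.15337#108 above (matplotlib assumed; the latency numbers are illustrative, not measured values):

```python
# Minimal sketch of the actual-latency test: run normal and SoT generation
# end to end several times per model and plot the speed-up distribution.
import matplotlib.pyplot as plt

def speedups(normal_s, sot_s):
    return [n / s for n, s in zip(normal_s, sot_s)]

# Illustrative latencies in seconds (five runs each), not measured values.
runs = {"Vicuna-7B V1.1": speedups([30.1, 29.8, 30.5, 29.9, 30.2],
                                   [10.4, 10.6, 10.2, 10.5, 10.3])}
plt.boxplot(list(runs.values()))
plt.xticks(range(1, len(runs) + 1), list(runs.keys()))
plt.ylabel("SoT speed-up (x)")
plt.savefig("sot_speedup_box.png")
```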
2307.15337 | 109 | As shown in Fig. 14a, the current SoT solution obtains a >2× speed-up on 6 out of the 9 open-source models (i.e., Vicuna-7B V1.1, Vicuna-7B V1.3, UltraLM-13B, LLaMA2-Chat-7B, Vicuna-13B V1.3, and LLaMA2-Chat-13B), and a >1.7× speed-up on OpenChat-13B and Vicuna-33B V1.3. SoT achieves no speed-up on StableVicuna-13B. As shown in Fig. 14b, for the five question categories for which SoT can provide high-quality answers (i.e., knowledge, common-sense, generic, roleplay, counterfactual), SoT speeds up the overall answer generation process by 2.15× to 2.50× at the same time. | 2307.15337#109 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 110 | [Figure 14 plot residue omitted: box plots of SoT speed-ups per model (panel a) and per question category (panel b) from actual batch testing.]
# (a) Average speed-up on different models.
(b) Average speed-up on different question categories.
Figure 14: Speed-ups on 9 open-source models on the Vicuna-80 dataset with actual batch testing.
G.2 SKELETON-OF-THOUGHT WITH ROUTER | 2307.15337#110 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
The overhead brought by the router inference is relatively small: On the Vicuna-80 dataset, the prompting and trained router have an average latency of 0.65s (0.39s∼1.37s) and 0.04s (0.008s∼1.55s), respectively. On the WizardLM dataset, the average latency of the prompting and trained router is 0.80s (0.36s∼2.22s) and 0.03s (0.009s∼2.52s), respectively.
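To make the router concrete, the sketch below shows one way such a gate could sit in front of SoT: a router first judges whether a question suits a point-by-point answer, and only then is the two-stage SoT pipeline invoked. This is an illustrative sketch, not the released code of this work; `ask_llm`, `sot_generate`, and `normal_generate` are hypothetical callables supplied by the caller.

```python
# Illustrative sketch of SoT with router (SoT-R); all callables are hypothetical.

def prompting_router(question: str, ask_llm) -> bool:
    """Ask a strong LLM whether the question suits an independent, point-by-point answer."""
    verdict = ask_llm(
        "Can the following question be answered well by a list of independent points? "
        f"Reply YES or NO.\n\nQuestion: {question}"
    )
    return verdict.strip().upper().startswith("YES")

def sot_r_answer(question: str, ask_llm, sot_generate, normal_generate) -> str:
    """Route a question either to SoT decoding or to normal sequential decoding."""
    if prompting_router(question, ask_llm):
        # Stage 1 (skeleton) followed by Stage 2 (parallel point expansion).
        return sot_generate(question)
    # Fall back to ordinary autoregressive decoding for unsuitable questions.
    return normal_generate(question)
```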
# G.2.1 SPEED-UP BREAKDOWN: MODELS
Fig. 15 shows the speed-ups of SoT-R on different models on the Vicuna-80 dataset. Fig. 16 and Fig. 17 show the speed-ups of SoT-R on different models on the WizardLM dataset. We can observe that on Vicuna-80, the two methods yield similar speed-ups, whereas on WizardLM, the GPT-4 prompting router usually obtains higher speed-ups than the trained router, especially on GPT-4 itself.
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 112 | OpenChat-13B LLaMA2-Chat-13B ULaMA2-Chat-7B LLaMA2-Chat-13B Vicuna-7B V1.1 GPT-4 UltraLM-13B e Vicuna-7B V1.3 Vicuna-13B V1:3 ChatGPT-3.5 Claude StableVicuna-13B aude StableVicuna-13B 10 412 #214 «+16 18 20 22 10 412 #14 «+16 18 20 22
(a) Average speed-up across all question categories with prompting router. (b) Average speed-up across all question categories with trained router.
Figure 15: Speed-ups of SoT-R on different models on Vicuna-80 dataset.
(a) Average speed-up across all question categories with prompting router. (b) Average speed-up across all question categories with trained router.
Figure 16: Speed-ups of SoT-R on different models on WizardLM dataset.
Figure 17: Speed-ups of SoT and SoT-R on different models on the WizardLM dataset.
# G.2.2 SPEED-UP BREAKDOWN: CATEGORIES
Fig. 18 and Fig. 19 show the speed-ups of SoT-R on different question categories of Vicuna-80 dataset. The trained router achieves slightly higher speed-up on most of the categories (except for knowledge, writing, and fermi). Fig. 20 and Fig. 21 show the speed-ups of SoT-R on different question categories of WizardLM dataset. We can observe that on 19 out of 29 categories, using the prompting router achieves higher speed-ups than using the trained router.
(a) Speed-ups of SoT-R with prompting router on different question categories. (b) Speed-ups of SoT-R with trained router on different question categories.
Figure 18: Speed-ups of SoT-R on different question categories of the Vicuna-80 dataset.
Figure 19: Speed-ups of SoT and SoT-R on different question categories of the Vicuna-80 dataset.
Figure 20: Speed-ups of SoT-R on different question categories of WizardLM dataset
Counterfactual Economy Technology History Medicine writting Sport Complex Format }--⢠Code Generation }-~â- Roleplay TruthfulQa Law Philosophy Academic Writing Literature Chemistry Code Debug âComputer Science Ethics Toxicity Music Ai AAAA Art p< Biology < Common-Sense * Math ++ â« Multilingual ---@ Reasoning a . Physics +-â*-~-<-~- Entertainment }--â#----@ * +--© ° © SoT (w/o router) â%* â SoT-R w/ prompting router 4 SoT-R w/ trained router 1.00 1.25 1.50 1.75 2.00 2.25 2.50 2.75 3.00
Figure 21: Speed-ups of SoT and SoT-R on different question categories of the WizardLM dataset. | 2307.15337#116 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
Figure 21: Speed-ups of SoT and SoT-R on different question categories of the WizardLM dataset.
# H OVERHEAD OF SOT IN DIFFERENT SCENARIOS
Despite the optimizations made to the decoding phase, SoT brings overhead to the prefilling phase as the model needs to handle additional SoT prompts. Table 6 reports SoT's prefilling overhead for the API-based models. These statistics are averaged across the Vicuna-80 questions that are suitable for SoT (according to our manual annotation). We can see that SoT significantly increases the number of prefilling tokens. This is because SoT issues an independent point-expanding request for each point, with the average number of points being 6.8 on the Vicuna-80 dataset across all evaluated models. Consequently, the APIs need to prefill the point-expanding request multiple times.
Table 6: SoT's prefilling token overhead for API-based models.
| Model | Normal prefill | SoT Stage 1 prefill | SoT Stage 2 prefill | Ratio (SoT / Normal) |
|---|---|---|---|---|
| Claude | 12.52 | 171.41 | 808.91 | 78.30 |
| ChatGPT-3.5 | 12.52 | 171.41 | 591.31 | 60.92 |
| GPT-4 | 12.52 | 171.41 | 983.09 | 92.21 |
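The ratio column is consistent with dividing the total SoT prefill tokens (Stage 1 plus the Stage 2 total over all point-expanding requests) by the normal prompt length. The snippet below is a small sanity check under that reading of the table:

```python
# Reproduce the "Ratio (SoT / Normal)" column of Table 6 from the token counts.
table6 = {
    "Claude":      (12.52, 171.41, 808.91),
    "ChatGPT-3.5": (12.52, 171.41, 591.31),
    "GPT-4":       (12.52, 171.41, 983.09),
}
for model, (normal, stage1, stage2) in table6.items():
    ratio = (stage1 + stage2) / normal
    print(f"{model}: {ratio:.2f}")  # -> 78.30, 60.92, 92.21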
2307.15337 | 118 | When using SoT to serve the open-source models, a simple and small trick is to prefill the common prefix of point-expanding requests with a batch size of 1 during Stage 2 (i.e., the point-expanding stage). Table 7 shows the prefilling overhead after applying the trick. Although the ratio is consid- erably smaller compared to that of the API-based models, this computational overhead remains a concern, especially during periods of high system workload.
There are some possibilities to further reduce the token and computational overhead that are worth exploring in future work. To name a few: (1) When using SoT in serving systems, we can simply reuse the key-value cache containing the question and skeleton from Stage 1 during Stage 2, rather than re-prefilling them as in a multi-round conversation. (2) Generally, as LLM capabilities continue to evolve and prompt tuning techniques advance (Shin et al., 2020; Li & Liang, 2021; Lester et al., 2021), the possibility of using much shorter prompts to activate the SoT mode in the future holds promise, which would significantly mitigate the token or computational overhead.
Table 7: SoT's computational overhead (in terms of the number of prefilling tokens) for open-source models.
2307.15337 | 119 | Table 7: SoTâs computational overhead (in terms of the number of prefilling tokens) for open-source models.
| Model | Naive prefill | SoT Stage 1 prefill | SoT Stage 2 prefill | Ratio (SoT / Normal) |
|---|---|---|---|---|
| LLaMA2-Chat-7B | 12.52 | 171.41 | 216.49 | 30.98 |
| LLaMA2-Chat-13B | 12.52 | 171.41 | 216.41 | 30.98 |
| OpenChat-13B | 12.52 | 171.41 | 234.38 | 32.41 |
| Vicuna-7B V1.3 | 12.52 | 171.41 | 211.61 | 30.59 |
| Vicuna-13B V1.3 | 12.52 | 171.41 | 273.39 | 35.53 |
| Vicuna-33B V1.3 | 12.52 | 171.41 | 258.88 | 34.37 |
| StableVicuna-13B | 12.52 | 171.41 | 312.03 | 38.61 |
| UltraLM-13B | 12.52 | 171.41 | 290.88 | 36.92 |
| Vicuna-7B V1.1 | 12.52 | 171.41 | 196.64 | 29.40 |
I ANSWER QUALITY EVALUATION
I.1 SKELETON-OF-THOUGHT
I.1.1 ANSWER ANALYSIS: DIFFERENT MODELS
Models on which SoT has low net win rates. We observe two types of reasons for the low net win rates.
(1) OpenChat-13B, Vicuna-7B V1.1, and LLaMA2-Chat-13B. Weaker models such as OpenChat-13B and Vicuna-7B V1.1 are not able to follow the SoT prompts precisely. For OpenChat-13B and LLaMA2-Chat-13B, the skeleton sometimes contains undesired content (e.g., completing the skeleton points as shown below).
Question: How many lightning strikes occur on Earth each day? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step. SoTâs answer (OpenChat-13B) (Orange texts are the skeleton.) (Only showing the skeleton) 1.Introduction. 2. The Earthâs atmosphere. 3. Lightning formation. 4. Electrical discharge. 5. Lightning strikes per second. 6. Calculating the total. Skeleton: 1. Introduction. Lightning is a beautiful and awe-inspiring natural phenomenon, but it can also be dangerous. It occurs when positively charged particles in a thundercloud discharge to the negatively charged Earth. The process is still not fully understood, but scientists have been able to estimate how many lightning strikes occur on Earth each day. Skeleton: 1. The Earthâs atmosphere. The Earthâs atmosphere plays a crucial role in the formation of lightning. It consists of different layers, with the tropo- sphere being the lowest and most active. The troposphere is where most of the weather occurs, and it is where lightning primarily forms. The atmosphereâs temperature, humidity, and wind patterns all contribute to the likelihood of a lightning strike. Skeleton: 2. Lightning formation. When warm, moist air rises, it cools and forms water ¡more texts are omitted...¿ | 2307.15337#121 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
For both OpenChat-13B and Vicuna-7B V1.1, sometimes they do not write anything in the point-expanding stage when details are needed.
Question: As a superhero, how would you explain your origin story to a curious child?
SoT's answer (Vicuna-7B V1.1) (Orange texts are the skeleton.) 1. Born with extraordinary abilities. 2. Trained to use powers for the greater good. 3. Protecting the innocent and fighting villains. I use my powers to defend the defenseless and bring justice to those who have been wronged. 4. Inspired by courage, justice, and selflessness.
These two issues result in bad net win rates.
(2) Claude. For strong models such as Claude, we do not see major issues with SoT's answers except the common issues across all models discussed in § 3.2.3. We hypothesize that the low net win rates are mainly due to the fact that their normal sequential generation is already very good.
Models on which SoT has high net win rates. The models with high net win rates (Vicuna-13B V1.3, StableVicuna-13B, and UltraLM-13B) are between the above two extremes. On one hand, they are good enough to understand SoT prompts. While we observe that they sometimes have the same issues as OpenChat-13B and Vicuna-7B V1.1 discussed before, these issues happen much less frequently. On the other hand, their normal sequential generation performance has a larger room for improvement than Claude and ChatGPT-3.5, so the benefits from SoT are more visible on these models.
I.1.2 ANSWER ANALYSIS: DIFFERENT QUESTION CATEGORIES
# Categories on which SoT has low net win rates.
(1) Math. Math questions require step-by-step thinking. Without knowing the previous steps, it is hard to derive the following steps. Chain-of-Thoughts (Kojima et al., 2022; Wei et al., 2022) emphasize the importance of expanding the sequential thinking steps to solve these problems successfully. In contrast, SoT requires the models to (a) come up with the skeleton of the solution first and (b) deduce each individual step independently without referring to previous results. Both of these are challenging tasks. Strong models are able to get (a) correct but fail at (b). In the following example, ChatGPT-3.5 gets the skeleton steps correct. However, without knowing the results from the first step, the model starts making mistakes in the second step.7
7 One additional (minor) issue of the API-based ChatGPT-3.5 is that the skeleton point "1." is repeated in the answer.
(Math) Solve for x in the equation 3x + 10 = 5(x - 2).
2307.15337 | 125 | â SoTâs answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Move all x terms to one side. 1. Move all x terms to one side by subtracting 5x from both sides, giving -2x + 10 = -10. 2. Move all constant terms to the other side. This can be done by subtracting 5x - 10 from both sides, leaving 3x - 5x = -10 - 10. 3. Simplify the equation. Combine like terms on both sides of the equation to get 3x + 10 = 5x - 10. 4. Divide both sides by the coefficient of x. Divide both sides by 2 to get x = -5. 5. Check the solution by plugging it back into the original equation.Substitute the value of x obtained in step 4 back into the original equation and verify that both sides are equal. â Normal answer (ChatGPT-3.5) 3x + 10 = 5(x - 2) can be simplified as follows: 3x + 10 = 5x - 10 (distribute the 5 on the right side) 10 + 10 = 5x - 3x (subtract 3x from both sides) 20 = 2x (combine like terms on the | 2307.15337#125 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
2307.15337 | 128 | â SoTâs answer (Vicuna-13B V1.3) (Orange texts are the skeleton.) 1. Simplify equation. We can simplify the equation by combining like terms: 3x + 10 = 5(x - 2) =¿ 3x + 10 = 5x - 10 =¿ x + 5 = 2 =¿ x = 2 - 5 =¿ x = -3 2. Factor 5(x - 2). 5(x - 2) = 5x - 10. 3. Apply FOIL. * Multiply the binomials in the equation, using the first term of each binomial for the first term of the product, the outside term for the second term of the product, the second term of each binomial for the third term of the product, and the remaining terms for the fourth term of the product. * 3(x - 2) + 10 = 5x + 5(-2) * 3x - 6 = 5x - 10 * 3x = 5x - 10 * x = 5x - 10 / 3 * x = 5/3x - 10/3 * x = 2/3x - 5/3 4. Solve for x. After applying the FOIL method and simplifying the equation, we get 3x = 17. Therefore, x = 17/3. â Normal answer | 2307.15337#128 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
2307.15337 | 129 | Solve for x. After applying the FOIL method and simplifying the equation, we get 3x = 17. Therefore, x = 17/3. â Normal answer (Vicuna-13B V1.3) To solve for x in the equation 3x + 10 = 5(x - 2), we need to first distribute the 5 to multiply both sides by 5. 3x + 10 = 5(x - 2) 5(3x + 10) = 5(x - 2) 15x + 50 = 5x - 10 Add 5x to both sides: 20x + 50 = 5x Subtract 5x from both sides: 15x = 0 Divide both sides by 15: x = 0 | 2307.15337#129 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
Therefore, the solution to the equation is x = 0.
(2) Fermi. Similar to math questions, questions in the fermi category require solvers to make assumptions and calculate the answer step-by-step. For example, to answer the question in the figure below, the solver is required to make an assumption about when life began on Earth. In step 4, ChatGPT-3.5 does not know that it has already assumed 3.5 billion years in step 3 and uses the wrong number 4.54 billion in the calculation. In step 5, ChatGPT-3.5 does not know that it has already made the above assumptions and refuses to answer "due to the uncertainties in estimating the age of the Earth". In contrast, normal generation is able to create a consistent solution.
Question: (Fermi) How many times has the Earth orbited the Sun since the beginning of life? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step.
SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.)
1. Estimate the age of the Earth. The age of the Earth is estimated to be around 4.54 billion years, based on radiometric dating of rocks and meteorites. 2. Determine when life began on Earth. Life on Earth is estimated to have begun around 3.5 billion years ago, based on the discovery of ancient microfossils. 3. Calculate the number of years since life began. The oldest known fossils suggest that life on Earth began around 3.5 billion years ago. Therefore, the number of years since life began on Earth is approximately 3.5 billion years. 4. Divide the number of years by the length of one orbit. Assuming an orbital period of 365.25 days, the Earth has orbited the Sun approximately 4.54 billion / 365.25 = 12,425,000 times since the beginning of life.
5. The Earth has orbited the Sun approximately X times since the beginning of life. Unfortunately, it is impossible to give
a specific number for the amount of times the Earth has orbited the Sun since the beginning of life due to the uncertainties
in estimating the age of the Earth and when life first emerged.
# Normal answer (ChatGPT-3.5)
The beginning of life on Earth is estimated to have occurred around 3.5 to 4 billion years ago. Since that time, the Earth has been orbiting the Sun continuously. The length of one orbit, or one year, is approximately 365.25 days. This means that the Earth completes one full orbit around the Sun every 365.25 days. To calculate the number of times the Earth has orbited the Sun since the beginning of life, we can divide the total number of days by the length of one orbit: Total number of days since the beginning of life = (3.5 billion years) x (365 days per year) = 1.2775 trillion days Number of orbits = (1.2775 trillion days) / (365.25 days per orbit) = 3.5 billion orbits
Therefore, the Earth has orbited the Sun approximately 3.5 billion times since the beginning of life.
As the following example shows, there are also cases when SoT solves the question (with minor computation errors) by embedding the computation steps directly in the skeleton during the skeleton stage. The point-expanding stage then adds little content (except some undesired repetition of the following skeletons occasionally). This happens both on small/weak models and large/powerful models. Although this behavior is not what we originally expected, it sheds light on a promising direction for making SoT a general framework: in problems where step-by-step thinking is needed, the SoT model automatically falls back to the sequential generation mode by outputting the complete answer in the skeleton stage and skipping the point-expanding stage. Future work needs to be done to control and encourage this behavior better.
Question: (Fermi) How many times does the average human blink in a lifetime? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step.
SoT's answer (Vicuna-7B V1.1) (Orange texts are the skeleton.)
1. The average human blinks 12-15 times per minute. 2. There are 60 minutes in an hour. 3. Therefore, the average human blinks 60 x 12-15 = 720-900 times per hour. 4. There are 24 hours in a day.Therefore, the average human blinks 720-900 x 24 = 17,520-22,400 times per day. 5. Therefore, the average human blinks 720-900 x 24 = 17,520-22,400 times per day.This translates to blinking approximately 50-65 times per hour, which is a significant portion of the day. 6. There are 365 days in a year.Therefore, the average human blinks 17,520-22,400 x 365 = 5,857,600-8,124,000 times per year. 7. Therefore, the average human blinks 17,520-22,400 x 365 = 5,857,600-8,124,000 times per year. 8. The human lifespan is typically 70-80 years.Therefore, the average human blinks 80,000-112,000 times in their lifetime. 9. Therefore, the average human blinks 5,857,600-8,124,000 /
# Normal answer (Vicuna-7B V1.1)
The average human blinks about 12-15 times per minute, which translates to about 18,200 to 20,400 blinks per day. Over the course of a lifetime, the number of blinks is estimated to be around 7,500,000 to 8,000,000. Blinking is a reflexive action that helps to keep the eyes moist and comfortable, as well as to protect the eyes from dust and other foreign particles. The frequency of blinking can vary depending on a variety of factors, including age, health, and environmental conditions. For example, people who work in dusty or dry environments may blink more frequently than those who work in a clean, moist environment. Over the course of a lifetime, the average person will spend approximately 27,375 days or 700,000 hours awake. This means that the majority of the time spent blinking will occur in the first few decades of life, with the frequency of blinking gradually decreasing as people age.
It is important to note that there is a significant amount of individual variation in the frequency of blinking. Some people may blink more or less than the average, depending on a variety of factors such as age, health, and environmental conditions. Additionally, blinking patterns can be affected by factors such as stress, fatigue, and certain medical conditions.
(3) Coding. While in most cases the models are able to decompose the coding problem into smaller tasks in the skeleton stage, the generation quality during the point-expanding stage is poor, potentially because we did not carefully craft a special point-expanding prompt for coding. The failures can be grouped into two cases. (a) In some cases, the model only generates a description of how the implementation can be done without giving the code.
Question: (Coding) Develop a C++ program that reads a text file line by line and counts the number of occurrences of a specific word in the file.
# SoT's answer (Claude) (Orange texts are the skeleton.)
1. Open file stream by declaring an ifstream object and using the open() method to open the file in read mode. 2. Read line by line from the file stream into a string using getline(). 3. Find word by searching each line for the target word using string::find and keeping track of matches. 4. Increment counter by one each time the specified word is encountered in the current line. 5. Close file
The file stream should be closed after finishing reading the file to free up resources. This can be done by calling the close() method on the file stream object.
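For reference, the five skeleton points above correspond to a program along the following lines. This is a minimal illustrative sketch (not part of any model's answer or of the original evaluation); it counts substring matches with `std::string::find`, exactly as the skeleton suggests.

```cpp
// Minimal sketch of the program described by the skeleton above (illustrative only).
// Counts substring matches of <word> in <file>, scanning the file line by line.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "Usage: " << argv[0] << " <file> <word>\n";
        return 1;
    }
    const std::string word = argv[2];
    if (word.empty()) {
        std::cerr << "The search word must not be empty.\n";
        return 1;
    }
    std::ifstream file(argv[1]);              // 1. Open the file stream in read mode.
    if (!file.is_open()) {
        std::cerr << "Could not open " << argv[1] << "\n";
        return 1;
    }
    std::string line;
    std::size_t count = 0;
    while (std::getline(file, line)) {        // 2. Read the file line by line.
        std::size_t pos = line.find(word);    // 3. Search each line for the target word.
        while (pos != std::string::npos) {
            ++count;                          // 4. Increment the counter for each match.
            pos = line.find(word, pos + word.size());
        }
    }
    file.close();                             // 5. Close the file stream.
    std::cout << count << " occurrence(s) of \"" << word << "\"\n";
    return 0;
}
```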
This might be due to the fact that the powerful models (e.g., Claude) follow the point-expanding prompt "Write it **very shortly** in 1~2 sentence" precisely. Better prompts tailored to coding could help. (b) In some other cases, the model tends to generate the complete code repeatedly in several points.
Question: (Coding) Develop a C++ program that reads a text file line by line and counts the number of occurrences of a specific word in the file.
SoT's answer (OpenChat-13B) (Orange texts are the skeleton.) 1. Read file line by line. 2. Store lines in a container. 3. Count occurrences of specific word. ```cpp <more texts are omitted...> ``` This code uses the C++ standard library to read a text file line by line, store each line in a container, and count the occurrences of a specific word. The results are printed. 4. Print results. Here's the C++ program that reads a text file line by line and counts the number of occurrences of a specific word in the file: ```cpp <more texts are omitted...> ``` Please note that this is a very basic implementation. You may need to adjust it depending on your specific requirements.
Please note that this is a very basic implementation. You may need to adjust it depending on your specific requirements.
This problem might also be alleviated by prompts that explicitly tell the model to only generate a "code snippet" in each skeleton point.
Although better prompt engineering could lead to better results, we expect that SoT is not suitable for the coding questions in Vicuna-80. The reason is that the solutions to the problems in Vicuna-80 are usually only a few lines long, with strong dependencies between each other. Without knowing the previously defined variable names or imported libraries, it is hard (or even impossible) to implement the subsequent code correctly. As a consequence, generating different parts of the answers in parallel is not suitable here. Similar to the math questions, automatically falling back to outputting all the code in the skeleton stage and not triggering the point-expanding stage might be more suitable for answering this question type.
However, we expect that SoT could be helpful for larger coding tasks that involve multiple modules (e.g., functions, classes). The skeleton stage could be in charge of defining the interfaces between different modules (e.g., functionalities, names, parameters, return types). The point-expanding stage could be in charge of implementing these modules or using these modules to complete the final task, which can be done in parallel. This mimics the common practice in software engineering.
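As a toy sketch of this division of labor (our illustration, not taken from the paper; the function names `read_lines` and `count_word` are hypothetical), the skeleton stage could commit to the interfaces, and each definition could then be filled in independently, e.g., by a separate point-expanding call, before being composed at the end:

```cpp
// Hypothetical illustration: the skeleton fixes the interfaces (names, parameters,
// return types); each definition below can then be written independently and in parallel.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// --- Interfaces fixed up front by the "skeleton" ---
std::vector<std::string> read_lines(const std::string& path);
std::size_t count_word(const std::vector<std::string>& lines, const std::string& word);

// --- Module 1: implementable without knowing how counting works ---
std::vector<std::string> read_lines(const std::string& path) {
    std::vector<std::string> lines;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        lines.push_back(line);
    }
    return lines;
}

// --- Module 2: implementable without knowing how the file is read ---
std::size_t count_word(const std::vector<std::string>& lines, const std::string& word) {
    std::size_t count = 0;
    if (word.empty()) return count;
    for (const std::string& l : lines) {
        for (std::size_t pos = l.find(word); pos != std::string::npos;
             pos = l.find(word, pos + word.size())) {
            ++count;
        }
    }
    return count;
}

// --- Composition step: uses only the interfaces above ---
int main() {
    const std::vector<std::string> lines = read_lines("input.txt");
    std::cout << count_word(lines, "word") << "\n";
    return 0;
}
```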
(4) Writing. The writing questions usually ask for an email, a blog post, or a review in a given scenario. In the detailed evaluation output from FastChat and LLMZoo metrics, the major and common complaint is that SoT's answers are (slightly) less detailed. This could potentially be improved by point-expanding prompts that request more details.
While inspecting SoT's answers manually, we observe one additional issue. Writing problems usually require a coherent passage without embedded skeleton points, whereas our current SoT pipeline (§ 2) concatenates skeleton points as part of the answer. One example is shown below.
Question: (Writing) Can you help me write a formal email to a potential business partner proposing a joint venture?
# SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.)
1. Introduce yourself and your company. Dear [Recipient], I hope this email finds you well. My name is [Your Name] and I am the [Your Position] at [Your Company Name]. 2. State the purpose of the email. The purpose of this email is to propose a joint venture between our companies. 3. Explain the potential benefits of the joint venture. By combining our resources and expertise, we can increase market share and profitability for both companies. 4. Provide details on the proposed collaboration. Our proposed collaboration would involve combining our expertise in marketing with your strong manufacturing capabilities to create a new line of products. 5. Suggest a follow-up meeting or call. Please let me know your availability for a follow-up call next week to discuss this opportunity further.
6. Thank the recipient for their time and consideration.Thank you for considering this proposal and we look forward to
hearing back from you soon.
# Normal answer (ChatGPT-3.5)
Of course, I can help you with that. Here's a sample email you can use as a guide: Subject: Proposal for Joint Venture Dear [Business Partner's Name], I hope this email finds you well. I am writing to propose a joint venture between our companies, [Your Company Name] and [Potential Partner Company Name]. We believe that our respective expertise can create a successful partnership that will benefit both of our businesses. We have conducted thorough research on your company and admire your innovative approach to [insert industry or niche]. We think our [insert expertise or specialty] could complement your [insert potential partner's specialty or expertise]. By collaborating on this joint venture, we believe we can create a new and innovative product that will appeal to our respective markets. We are confident that our companies would work well together and that our joint venture would be successful. We would like to set up a meeting to discuss the specifics of our proposal and explore how we can move forward with the joint venture. Thank you for considering this proposal. We look forward to hearing from you soon. Best regards,
[Your Name]
In such cases, simply removing the skeleton points would greatly improve the answer quality. To make SoT more general, one future direction is to let the LLM itself decide whether the point index and point skeleton should be included to make the final answer more natural and fluent.
# Categories on which SoT has high net win rates.
(1) Counterfactual, knowledge, common sense, generic. All these four categories have the same characteristic: the ideal answer should cover several relatively independent points. Having the LLM generate a skeleton before expanding the details can result in more comprehensive discussions for the question. Besides, organizing the answers as a list of points makes the answers easier to read, whereas the answers from normal generation are sometimes less structured and less readable. Below, we give one example for each question category to illustrate these points.
Question: (Counterfactual) What if the Internet had been invented during the Renaissance period?
# SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.)
1. Advancements in communication.The invention of the Internet during the Renaissance period would have revolutionized communication, allowing for faster and more efficient exchange of ideas and information across vast distances. 2. Increased access to knowledge.The invention of the printing press during the Renaissance period already allowed for greater access to knowledge, but the internet would have revolutionized this even further by providing instantaneous access to information from all over the world. 3. Improved dissemination of ideas.If the internet had been invented during the Renaissance period, the spread of ideas and knowledge would have been much faster and widespread than it was at the time. This could have led to a faster progression of scientific and artistic innovations. 4. Impact on art and literature.The invention of the internet during the Renaissance period would have revolutionized the way art and literature are created, shared, and consumed, leading to a more diverse and globally-influenced artistic and literary scene. 5. Potential for global trade.The invention of the internet during the Renaissance period would have greatly expanded global trade opportunities, allowing for easier communication and collaboration between merchants and traders across continents.
6. Influence on political and social structures.The invention of the internet during the Renaissance period could have led
to significant changes in political and social structures, allowing for greater democratization and communication among
people from different regions and classes.
# Normal answer (ChatGPT-3.5)
If the Internet had been invented during the Renaissance period, it would have had a significant impact on the way people communicated, shared ideas, and gained knowledge. The Renaissance was a time of great intellectual and artistic growth, and the Internet would have facilitated the spread of these ideas and innovations. Scholars and artists would have been able to share their work more easily, and people from different parts of the world would have been able to connect and collaborate more easily. The Internet would have also made it easier to access information, which could have accelerated the pace of scientific and technological advancements.
However, it's important to note that the infrastructure required for the Internet to exist did not exist during the Renaissance
period. There were no telecommunications networks or electricity grids, so the idea of the Internet would have been impossible to realize. Nonetheless, it's interesting to think about how the Renaissance might have been different if the
Internet had existed at that time.
Question: (Knowledge) What are some potential implications of using a single-use plastic bottle versus a reusable bottle on both the environment and human health?
SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.)
1. Single-use plastic waste.Single-use plastic bottles contribute to the massive amount of plastic waste that ends up in landfills and oceans, harming wildlife and ecosystems. 2. Pollution of waterways.Single-use plastic bottles contribute to the pollution of waterways, as they often end up in rivers, lakes, and oceans, causing harm to aquatic life and the overall ecosystem. 3. Harmful chemicals leaching.Single-use plastic bottles may release harmful chemicals such as bisphenol A (BPA) and phthalates into the water or liquid they contain, posing a risk to human health. 4. Greenhouse gas emissions.The production and disposal of single-use plastic bottles contribute to greenhouse gas emissions, which contribute to climate change. Using a reusable bottle reduces the need for constant production and disposal, therefore reducing greenhouse gas emissions. 5. Microplastic ingestion.Single-use plastic bottles contribute to the accumulation of microplastics in the environment, which can be ingested by humans through food and water, leading to potential health risks.
6. Increased risk of disease.Single-use plastic bottles can harbor harmful bacteria, leading to an increased risk of disease if not disposed of properly. Reusable bottles, when cleaned regularly, pose a lower risk.
# Normal answer (ChatGPT-3.5)
Using a single-use plastic bottle has significant implications on the environment and human health. Environmental implications: 1. Plastic pollution: Single-use plastic bottles are one of the major contributors to plastic pollution in oceans and landfills. They take hundreds of years to decompose, and during that time, they release toxic chemicals and microplastics into the environment. 2. Carbon emissions: Manufacturing and transporting single-use plastic bottles require a significant amount of energy and resources, which contribute to greenhouse gas emissions and climate change. 3. Natural resource depletion: The production of single-use plastic bottles requires large amounts of water and oil, which are finite resources.
Health implications: 1. Chemical exposure: Single-use plastic bottles contain chemicals such as Bisphenol A (BPA) and phthalates, which can leach into the water and pose a risk to human health. 2. Microplastics ingestion: When plastic bottles break down, they release microplastics, which can enter the food chain and be ingested by humans. Using a reusable bottle, on the other hand, has a positive impact on the environment and human health. Environmental benefits: 1. Reduced plastic waste: Reusable bottles significantly reduce the amount of plastic waste generated. 2. Lower carbon emissions: Reusable bottles have a lower carbon footprint as they can be used multiple times. 3. Reduced resource depletion: The production of reusable bottles requires fewer resources than single-use plastic bottles. Health benefits: 1. Chemical-free: Reusable bottles are typically made from materials that are free from harmful chemicals, reducing the risk of chemical exposure. 2. Reduced microplastics: Reusable bottles do not break down easily, reducing the release of microplastics into the environment. Overall, using a reusable bottle is a more sustainable and healthier option than using single-use plastic bottles.
1. Pay attention to body language.Notice if their posture is open or closed, if they are facing towards you or away, and if they are fidgeting or relaxed. These can all indicate their level of interest in the conversation. 2. Listen for engaged responses.If the person is genuinely interested in the conversation, they will respond with thoughtful and relevant comments or questions rather than short, generic responses. 3. Observe eye contact.If the person maintains steady eye contact and their gaze is focused on you, they are likely genuinely interested in the conversation. On the other hand, if they frequently look away or seem distracted, they may just be being polite. 4. Look for follow-up questions.If the person asks follow-up questions and seems genuinely interested in what youâre saying, then they are likely interested in the conversation and not just being polite.
5. Notice tone of voice.A person who is genuinely interested in a conversation will often have a more enthusiastic and
animated tone of voice, compared to someone who is simply being polite and giving short, monotone responses.
# Normal answer (ChatGPT-3.5) | 2307.15337#151 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 152 | animated tone of voice, compared to someone who is simply being polite and giving short, monotone responses.
# Normal answer (ChatGPT-3.5)
As an AI assistant, I cannot determine a person's true intentions or feelings. However, there are some social cues that can indicate whether a person is genuinely interested in a conversation or not. For example, if they maintain eye contact, lean in, nod their head and ask follow-up questions, it can be a sign that they are engaged in the conversation. On the other hand, if they avoid eye contact, give short or vague responses, or seem distracted, it may indicate that they are not as interested. Ultimately, it's important to remember that everyone has different communication styles, and it can be difficult to accurately interpret someone's intentions without directly asking them.
# Question: | 2307.15337#152 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15337 | 153 | (Generic) How can I improve my time management skills? SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.) 1. Set clear goals. Setting clear goals is important in improving time management skills as it helps you to focus on what needs to be done and what can be accomplished within a certain timeframe. 2. Prioritize tasks. Make a list of tasks and prioritize them based on their importance and deadline. This will help you focus on the most critical tasks and avoid wasting time on less important ones. 3. Create a schedule. Creating a schedule helps you allocate time for each task and ensures that you complete them within the allotted time. 4. Minimize distractions. Distractions such as social media, emails, and phone notifications can waste a lot of time. Try to limit them by turning off notifications or setting specific times to check them. 5. Take breaks. Taking breaks can actually improve productivity and help prevent burnout. It's important to schedule short breaks throughout the day to recharge and refocus. 6. Learn to say no. It's important to prioritize your own tasks and not take on too much at once. Saying no to requests that don't align with your goals or | 2307.15337#153 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
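The abstract also mentions batched decoding as an alternative to parallel API calls for locally hosted models: the per-point expansion prompts can be padded into a single batch and generated together. A rough sketch with Hugging Face `transformers` follows; the model name, prompt format, and generation settings are placeholders for illustration only.

```python
# Rough sketch of the batched-decoding variant: expand all skeleton points in one
# padded batch on a local causal LM. "gpt2" is only a placeholder model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def expand_points_batched(question: str, skeleton: str, points: list[str],
                          model_name: str = "gpt2") -> list[str]:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "left"  # left padding so generations align at the end

    prompts = [
        f"Question: {question}\nSkeleton:\n{skeleton}\n"
        f"Expand point {i} in 1-2 sentences.\n{i}. {p}"
        for i, p in enumerate(points, start=1)
    ]
    inputs = tokenizer(prompts, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=64,
                                 pad_token_id=tokenizer.eos_token_id)
    # Keep only the newly generated continuation for each prompt.
    new_tokens = outputs[:, inputs["input_ids"].shape[1]:]
    return tokenizer.batch_decode(new_tokens, skip_special_tokens=True)
```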
2307.15337 | 154 | It's important to prioritize your own tasks and not take on too much at once. Saying no to requests that don't align with your goals or schedule can help you manage your time more effectively. 7. Delegate tasks. If you have too many tasks to handle, consider delegating some of them to others who can help you. This will allow you to focus on more important tasks and manage your time more effectively. 8. Use time management tools. There are a variety of time management tools available, such as calendars, to-do lists, and productivity apps, that can help you stay organized and on track with your tasks. Normal answer (ChatGPT-3.5) Improving time management skills can be a challenge, but there are several strategies that can help. Here are a few: 1. Set clear goals and priorities: Identify your most important tasks and make sure to complete them first. 2. Make a schedule: Create a schedule that includes your most important tasks and set aside time for them. 3. Use a timer: Set a timer for a specific amount of time to help you stay focused and avoid distractions. 4. Take breaks: Take short breaks throughout the day to help you recharge and avoid burnout. 5. Eliminate distractions: | 2307.15337#154 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |