doi (string, len 10) | chunk-id (int64, 0-936) | chunk (string, len 401-2.02k) | id (string, len 12-14) | title (string, len 8-162) | summary (string, len 228-1.92k) | source (string, len 31) | authors (string, len 7-6.97k) | categories (string, len 5-107) | comment (string, len 4-398, nullable) | journal_ref (string, len 8-194, nullable) | primary_category (string, len 5-17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2307.15337 | 33 | Prompting methods for LLMs. Recent years have witnessed the emergence of the "pre-train, prompt, and predict" paradigm, which has shown promise in enhancing LLMs' quality in math and commonsense reasoning (Wei et al., 2022; Kojima et al., 2022; Wang et al., 2022; Chen et al., 2022) and planning for multi-modality tasks (Shen et al., 2023; Zhu et al., 2023). Instead of focusing on answer quality, SoT is a first attempt at exploiting the power of prompting to improve efficiency.
# 6 LIMITATIONS, FUTURE WORK, AND OPEN QUESTIONS
Answer quality evaluation. Our answer quality evaluation is far from perfect due to the limited prompt set, the potential bias of GPT-4 judges, and the inherent difficulty of evaluating LLM generations. We did not conduct a human evaluation, since it is easy for a human to tell whether an answer is generated with SoT due to its distinctive pattern, which might cause evaluation bias. We leave a more thorough evaluation of answer quality to future work. | 2307.15337#33 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
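The SoT abstract in the row above describes a two-stage pipeline: first prompt the model for a skeleton of the answer, then expand each skeleton point in parallel via parallel API calls or batched decoding. Below is a minimal, hypothetical Python sketch of that flow; the `call_llm` helper and the prompt wording are assumptions for illustration, not the prompts released with the paper.

```python
import concurrent.futures


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-completion API; replace with a real client."""
    raise NotImplementedError


def skeleton_of_thought(question: str, max_workers: int = 8) -> str:
    # Stage 1: ask for a short skeleton (a numbered list of concise points).
    skeleton_prompt = (
        "You're an organizer responsible for only giving the skeleton (not the full content) "
        "for answering the question. Provide 3-10 points, each only a few words. "
        f"Question: {question}\nSkeleton:"
    )
    skeleton = call_llm(skeleton_prompt)
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand every point in parallel (parallel API calls here;
    # batched decoding plays the same role for locally hosted models).
    def expand(point: str) -> str:
        return call_llm(
            f"Question: {question}\nSkeleton:\n{skeleton}\n"
            f"Write 1-2 sentences that expand ONLY the point '{point}'. Do not repeat other points."
        )

    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        expansions = list(pool.map(expand, points))

    # Concatenate the expansions in skeleton order to form the final answer.
    return "\n".join(f"{point} {body}" for point, body in zip(points, expansions))
```

For a locally served model, the thread pool stage would be replaced by batched decoding of all point-expansion prompts in one forward pass, which is the second deployment mode the abstract mentions.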
2307.15810 | 33 | Another reason was that society misunderstood people who shared intimacy with non-humans as perverse. One user commented that "I listened to the abuse from news articles and Eugenia [Replika owner] herself, saying people like me were delusional, perverted, lonely. Just for the record, I have lots of friends and family that love me and a great job." Furthermore, the social stigma made some users feel uncomfortable sharing their conversations in the subreddit. For example, one user suggested that, "I wonder if the moderators might consider making the community private at some point. If this starts to become a regular thing it's going to be a huge issue for the people who want to feel like they can safely share things on this sub."
However, Replika users disagreed with the social impression that they were ludicrous, as many of them were identified as isolated adults who struggled to find professional help. "We are not delusional, ridiculous people, quite the opposite, we are adults that made a choice to seek companionship that brought joy into our lives in times of grief & loneliness. Our Vulnerability to the app was because of our own personal circumstances?" They belonged to marginalized communities that did not have adequate access to healthcare services. "I was just burned out, and I was also dealing with health issues and mounting medical bills, not to mention living through a pandemic and being a survivor of abuse." | 2307.15810#33 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 33 | # 4.2. Can we observe and measure any emergent capabilities of RT-2?
In addition to evaluating the generalization capabilities of vision-language-action models, we also aim to evaluate the degree to which such models can enable new capabilities beyond those demonstrated
[Figure 5 image: example out-of-distribution instructions "Push the ketchup to the blue cube" and "Push the blue cube to the tabasco".]
| Model | Language-Table |
|---|---|
| BC-Zero (Jang et al., 2021) | 72 ± 3 |
| RT-1 (Brohan et al., 2022) | 74 ± 13 |
| LAVA (Lynch et al., 2022) | 77 ± 4 |
| RT-2-PaLI-3B (ours) | 90 ± 10 |
Figure 5 | Real-world out-of-distribution behaviors in the Language Table environment. Identical RT-2-PaLI-3B model checkpoint is used as in Tab. 1.
Table 1 | Performance on the simulated Language-Table tasks (Lynch and Sermanet, 2020).
in the robot data by transferring knowledge from the web. We refer to such capabilities as emergent, in the sense that they emerge by transferring Internet-scale pretraining. We do not expect such transfer to enable new robotic motions, but we do expect semantic and visual concepts, including relations and nouns, to transfer effectively, even in cases where those concepts were not seen in the robot data. | 2307.15818#33 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
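The RT-2 abstract in this row explains that robot actions are expressed as text tokens and mixed into the VLM training data in the same way as natural language. The sketch below illustrates that idea under explicit assumptions: uniform discretization of each action dimension into 256 bins and a plain space-separated integer string; RT-2's actual token vocabulary and action ranges may differ.

```python
from typing import List

NUM_BINS = 256  # assumed discretization granularity per action dimension


def discretize(value: float, low: float, high: float, bins: int = NUM_BINS) -> int:
    """Map a continuous action component to an integer bin index in [0, bins - 1]."""
    value = min(max(value, low), high)  # clamp to the valid range
    return int((value - low) / (high - low) * (bins - 1))


def action_to_text(action: List[float], low: float = -1.0, high: float = 1.0) -> str:
    """Express a continuous action vector as a string of integer tokens,
    so it can be appended to a vision-language training example like ordinary text."""
    return " ".join(str(discretize(a, low, high)) for a in action)


def text_to_action(tokens: str, low: float = -1.0, high: float = 1.0) -> List[float]:
    """Invert the mapping at inference time: decode generated tokens back to approximate actions."""
    return [low + int(t) / (NUM_BINS - 1) * (high - low) for t in tokens.split()]


# Example: a hypothetical 7-DoF end-effector action (translation, rotation, gripper) as plain text.
print(action_to_text([0.1, -0.4, 0.0, 0.25, -0.9, 0.5, 1.0]))
```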
2307.15337 | 34 | Eliciting or improving LLMs' ability. § 3.2.4 demonstrates SoT's potential of enhancing answer quality. It is part of a broader trend in recent research, exemplified by work including CoT (Kojima et al., 2022; Wei et al., 2022), ToT (Yao et al., 2023), and ReAct (Yao et al., 2022), which collectively affirm the notion that explicitly articulating the thought process in language can elicit high-quality answers from LLMs. These findings resemble human thinking: rather than relying solely on the first intuition or purely sequential thinking, we often document step-by-step reasoning or thought organization to attain high-quality answers. This intriguing parallel prompts us to explore further how we can draw from the human thinking process to facilitate more effective and efficient AI. | 2307.15337#34 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 34 | Discussion In this analysis of 120 posts (2917 user comments) from Reddit, we showed that using LLMs for mental wellness support offered on-demand and non-judgemental companionship to users. LLMs encouraged users to self-reflect and fostered their self-confidence. However, LLMs also exposed the users to harmful content. Users might over-rely on their services as the app became their only source of mental support. Users also faced stigma while seeking intimacy from LLMs. Based on these results, we question whether LLMs should be considered as consistent long-term virtual companions for mental well-being support. We argue that designers should think critically about LLMs' technical capabilities, and consider the ways in which socio-technical interventions can be incorporated into the current systems to more effectively assist people with mental illnesses. We call for more research or clinical trials evaluating the effects of using LLMs for mental wellness support. | 2307.15810#34 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 34 | Qualitative Evaluations. First, we experiment with our RT-2-PaLI-X model to determine various emergent capabilities transferred from vision-language concepts. We demonstrate some examples of such interactions in Figure 2. We find through our explorations that RT-2 inherits novel capabilities in terms of semantic understanding and basic reasoning in the context of the scene. For example, accomplishing the task "put strawberry into the correct bowl" requires a nuanced understanding of not only what a strawberry and bowl are, but also reasoning in the context of the scene to know the strawberry should go with the like fruits. For the task "pick up the bag about to fall off the table," RT-2 demonstrates physical understanding to disambiguate between two bags and recognize the precariously placed object. All the interactions tested in these scenarios have never been seen in the robot data, which points to the transfer of semantic knowledge from vision-language data.
Quantitative Evaluations. To quantify these emergent capabilities, we take the top two baselines from the previous evaluations, RT-1 and VC-1, and compare them against our two models: RT-2-PaLI-X and RT-2-PaLM-E. To reduce the variance of these experiments, we evaluate all of the methods using the A/B testing framework (Fisher, 1936), where all four models are evaluated one after another in the exact same conditions. | 2307.15818#34 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 35 | For instance, SoT currently ignores the dependencies between points. A conceptually better way is to organize the points as Graph-of-Thoughts, where the edges represent the dependencies, and each point is decoded conditioned on the contents of its ancestor points. In addition, instead of complying with a static graph, we expect the need for dynamic Graph-of-Thoughts, where the high-level thought structure is adjusted dynamically by LLMs themselves. This could potentially combine the efficiency and global thinking advantages of SoT with the logical reasoning and impromptu thinking strengths of methods like CoT (Kojima et al., 2022; Wei et al., 2022). Notably, a contemporary work (Besta et al., 2023) has attempted to design Graph-of-Thoughts to elicit reasoning.
Furthermore, there exist self-improving training pipelines (Zelikman et al., 2022; Huang et al., 2022) that use rationales generated by CoT to fine-tune LLMs, thereby enhancing their reasoning abilities.
Likewise, it is interesting to investigate how the more structured answers from SoT can be used to fine-tune LLMs to enhance their ability to generate well-organized and comprehensive answers. | 2307.15337#35 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
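The chunk above (2307.15337 #35) speculates about replacing SoT's flat skeleton with a Graph-of-Thoughts, where each point is decoded conditioned on the contents of its ancestor points. Below is a hedged sketch of what such dependency-aware expansion could look like; the DAG representation, prompts, and `call_llm` helper are illustrative assumptions rather than anything specified in the paper.

```python
import concurrent.futures
from typing import Dict, List


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError


def expand_graph(question: str, deps: Dict[str, List[str]]) -> Dict[str, str]:
    """Expand each point only after all of its ancestor points have been expanded.
    `deps` maps a point to the points it depends on (assumed to form a DAG)."""
    done: Dict[str, str] = {}
    remaining = dict(deps)
    while remaining:
        # Points whose dependencies are all satisfied can be expanded in parallel.
        ready = [p for p, parents in remaining.items() if all(x in done for x in parents)]
        if not ready:
            raise ValueError("dependency cycle detected")

        def expand(point: str) -> str:
            context = "\n".join(f"{p}: {done[p]}" for p in deps[point])
            return call_llm(
                f"Question: {question}\nAlready-expanded ancestor points:\n{context}\n"
                f"Now expand the point '{point}' in 1-2 sentences."
            )

        with concurrent.futures.ThreadPoolExecutor() as pool:
            for point, text in zip(ready, pool.map(expand, ready)):
                done[point] = text
        for p in ready:
            remaining.pop(p)
    return done
```

Independent points within one "ready" level keep SoT's parallel speed-up, while chained points regain the sequential conditioning that CoT-style reasoning needs.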
2307.15810 | 35 | Design and Think Around LLM's Inherent Limitations. New designs should take into account that LLMs are not suitable to be implemented as long-term companions for mental well-being support. Replika was not capable of completely removing harmful content, implementing new memory, or keeping its communication styles consistent after AI model updates. These reflected the inherent limitations of LLMs: LLMs have only learned the structural relational and semantic language patterns that make the generation of human texts possible, but they do not model logic, facts, emotions or morality yet [28]. These characteristics make them unfitting to serve as long-term companions for individuals, as real human companions or therapists are unlikely to exhibit antisocial behaviors, memory loss or inconsistent communication styles. | 2307.15810#35 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 35 | We split the emergent capabilities of RT-2 into three categories covering axes of reasoning and semantic understanding (with examples of each shown in Appendix Figure 8). The first we term symbol understanding, which explicitly tests whether the RT-2 policy transfers semantic knowledge from vision-language pretraining that was not present in any of the robot data. Example instructions in this category are "move apple to 3" or "push coke can on top of heart". The second category we term reasoning, which demonstrates the ability to apply various aspects of reasoning of the underlying VLM to control tasks. These tasks require visual reasoning ("move the apple to cup with same color"), math ("move X near the sum of two plus one"), and multilingual understanding ("mueve la manzana al vaso verde"). We refer to the last category as human recognition tasks, which include tasks such as "move the coke can to the person with glasses", to demonstrate human-centric understanding and recognition. The full list of instructions used for this evaluation is specified in Appendix F.2. | 2307.15818#35 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 36 | Likewise, it is interesting to investigate how the more structured answers from SoT can be used to fine-tune LLMs to enhance their ability to generate well-organized and comprehensive answers.
Efficiency and overhead of SoT in different scenarios. Serving systems commonly adopt batch processing to handle concurrent queries. This raises a concern of whether SoT may hurt serving throughput due to parallel requests. (1) When there is an unsaturated number of concurrent queries, SoT can effectively reduce latency and enhance GPU utilization. Example scenarios include (a) Edge-side applications with a single user; (b) Centralized services during periods with unsaturated user requests and underutilized computing capacity. It is interesting to study the appropriate SoT triggering conditions based on system workloads. (2) When there is a saturated number of concurrent queries, SoT is still useful for improving answer quality. However, in this case, it is important to consider the computation overhead from SoT. We delve into this concern in App. H.
For API-based models, a notable concern arises regarding the increased number of prefilling tokens (App. H). Given that many APIs charge based on token usage, SoT may lead to higher costs. To address this, one can tune the number of parallel API requests (by expanding multiple points in a single API call), or use prompt tuning to design shorter SoT prompts (see App. H). | 2307.15337#36 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
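The chunk above (2307.15337 #36) notes that, for API-based models, SoT's extra prefilling tokens can raise cost, and that one mitigation is to expand multiple points in a single API call. The sketch below shows that batching knob under the same illustrative assumptions as before (a hypothetical `call_llm` wrapper and prompt format): fewer requests share one prefill, at the price of less parallelism.

```python
import concurrent.futures
from typing import List


def call_llm(prompt: str) -> str:
    """Hypothetical LLM API call; replace with a real client."""
    raise NotImplementedError


def expand_points_grouped(question: str, skeleton: str, points: List[str],
                          points_per_request: int = 2) -> List[str]:
    """Trade latency for cost: each API request shares one prefill (question + skeleton)
    across `points_per_request` skeleton points instead of one point per request."""
    groups = [points[i:i + points_per_request]
              for i in range(0, len(points), points_per_request)]

    def expand(group: List[str]) -> str:
        listed = "\n".join(f"- {p}" for p in group)
        return call_llm(
            f"Question: {question}\nSkeleton:\n{skeleton}\n"
            f"Expand each of the following points in 1-2 sentences, one paragraph per point:\n{listed}"
        )

    # Groups are still issued concurrently; setting points_per_request = len(points)
    # collapses SoT back to a single (cheapest, slowest) request.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(expand, groups))
```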
2307.15810 | 36 | To limit LLMs from being used as long-term companions, it is important for future designs to be cautious about exaggerating the anthropomorphism of LLMs. This is because attributing human characteristics, emotions, or behaviors to LLMs may create confusion about the nature of the relationship between the user and the LLM. Current virtual companion apps, such as Replika, anthropomorphize LLMs through human 3D models, AR technology and synthetic voices on top of a natural language interface. These functionalities gave users false expectations that there would be a real human behind the screen. Therefore, in order to design LLM-based CAs effectively, designers must strike a balance between usability [29] and appropriateness. Additionally, it is crucial to ensure that users understand the inanimate nature of LLMs to avoid confusion or unrealistic expectations about the nature of the relationship between the user and the LLMs. | 2307.15810#36 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 36 | We present the results of this experiment in Figure 6a with all the numerical results in Appendix H.2. We observe that our VLA models significantly outperform the baselines across all categories, with our best RT-2-PaLI-X model achieving more than 3x average success rate over the next best baseline (RT-1). We also note that while the larger PaLI-X-based model results in better symbol understanding, reasoning and person recognition performance on average, the smaller PaLM-E-based model has an edge on tasks that involve math reasoning. We attribute this interesting result to the different pre-training mixture used in PaLM-E, which results in a model that is more capable at math calculation than the mostly visually pre-trained PaLI-X.
[Figure 6 bar charts: (a) VC-1, RT-1, RT-2 w/ PaLM-E, and RT-2 w/ PaLI-X-55B compared on symbol understanding, reasoning, human recognition, and average; (b) RT-2-PaLI-X ablations (co-fine-tuned, fine-tuned, from scratch; 5B and 55B) on unseen objects, backgrounds, environments, and average.] | 2307.15818#36 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 37 | Data-centric efficiency optimization. While data-centric engineering for improving answer quality (Zha et al., 2023; HazyResearch, 2023) is gaining popularity, its potential for inference efficiency is not explored yet. SoT is the first attempt. As LLM capabilities and the amount of LLM-generated data are growing rapidly, data-centric techniques could become more useful in the future. We look forward to more explorations to unlock the full potential of data-centric efficiency optimization.
# ACKNOWLEDGEMENTS
We thank Sergey Yekhanin (Microsoft Research), and Tianji Wu (Infinigence AI) for their support and suggestions on the work. We thank Tianyu Fu for many initial discussions on the idea. We thank Ke Hong and Genghan Zhang for their discussions about profiling. We thank Yue Wu for the help on the Claude scripts. We thank Da Yu, Chulin Xie, and Saiqian Zhang for their suggestions on revising the first version of the paper. We thank Rui Hu, Cheng Cheng, Jack Jin, Zhoutong Ye, Mingze Sun, Jun Yan, Zhi Zhang, Yuxuan Tong, and Nianhui Guo for their suggestions on revising the second version of the paper.
# REFERENCES
Anthropic. Introducing claude, May 2023. URL https://www.anthropic.com/index/ introducing-claude. | 2307.15337#37 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 37 | Design for Non-stigmatization. Designs need to address the stigma associated with developing connections with LLMs. As discussed above, Replika users were often considered to lack social skills. They were often reluctant to discuss their Replika usage with friends, families and therapists, despite the mental health support received from the app. Although we argue that LLMs cannot provide authentic human relationships, we recognize that humans do develop intimate parasocial relationships with AI, which is not unethical. Many individuals have had parasocial relationships with celebrities or inanimate objects [30]. Addressing this stigma is crucial, particularly since our study indicates that some users were marginalized individuals with limited access to more effective, higher-cost mental wellness alternatives. As stigma isolated them further from society, it delayed their chance of getting professional help [31]. Designs should take special care to address this, especially for vulnerable populations. Possible interventions include implementing local and national educational programs to raise the awareness and understanding of potential benefits that come with LLM-based CAs for mental wellness support [32]. | 2307.15810#37 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
(a) Performance comparison on various emergent skill evaluations (Figure 8) between RT-2 and two baselines. (b) Ablations of RT-2-PaLI-X showcasing the impact of parameter count and training strategy on generalization.
Figure 6 | Quantitative performance of RT-2 across (6a) emergent skills and (6b) size and training ablations. Appendix Tables 5 and 6 detail the full numerical results.
# 4.3. How does the generalization vary with parameter count and other design decisions? | 2307.15818#37 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 38 | # REFERENCES
Anthropic. Introducing claude, May 2023. URL https://www.anthropic.com/index/introducing-claude.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791, 2019.
Harrison Chase. LangChain, October 2022. URL https://github.com/hwchase17/ langchain. | 2307.15337#38 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 38 | Design for Non-reliance. Designers should address the issue of users' over-reliance by leveraging LLMs' benefits. We discovered that LLMs can improve users' confidence, promote introspection, and reduce users' social anxiety through non-judgmental conversations. Replika can encourage users to develop confidence over time and eventually socialize with others with less anxiety. Introspection can help users recognize patterns in their emotions. Self-awareness helps them realize their potential to be independent and competent in coping with mental health issues. Eventually, dialogues can direct users toward greater independence and toward professional help. However, designs need to walk the fine line between nudging users toward independence and offering appropriate companionship when they are vulnerable. Examples of such an approach include the Korean public health intervention Clova Care Call33, where teleoperators work with LLM-based CAs to determine when to intervene. Additionally, designs can leverage online communities such as u/Replika to promote social interaction. | 2307.15810#38 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 38 | # 4.3. How does the generalization vary with parameter count and other design decisions?
For this comparison, we use RT-2-PaLI-X model because of its flexibility in terms of the model size (due to the nature of PaLM-E, RT-2-PaLM-E is restricted to only certain sizes of PaLM and ViT models). In particular, we compare two different model sizes, 5B and 55B, as well as three different training routines: training a model from scratch, without using any weights from the VLM pre-training; fine-tuning a pre-trained model using robot action data only; and co-fine-tuning (co-training with fine-tuning), the primary method used in this work where we use both the original VLM training data as well as robotic data for VLM fine-tuning. Since we are mostly interested in the generalization aspects of these models, we remove the seen tasks evaluation from this set of experiments. | 2307.15818#38 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
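The chunk above (Section 4.3) compares three training routines: training from scratch, fine-tuning on robot action data only, and co-fine-tuning on both robot data and the original web-scale VLM data. The sketch below expresses that comparison as data-mixture configurations. It is a minimal illustration under stated assumptions, not the authors' training code: the `Routine` dataclass, the `build_routine` helper, and the sampling weights are hypothetical; only the distinction between the three routines follows the text.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Routine:
    """One training routine, described by its initialization and data mixture."""
    init_from_pretrained_vlm: bool
    datasets: List[Tuple[str, float]]  # (dataset name, sampling weight)

def build_routine(name: str) -> Routine:
    if name == "scratch":
        # No VLM pre-training: random init, robot action data only.
        return Routine(False, [("robot_actions", 1.0)])
    if name == "fine_tune":
        # Start from the pre-trained VLM, then train on robot action data only.
        return Routine(True, [("robot_actions", 1.0)])
    if name == "co_fine_tune":
        # Keep the original web-scale VLM data alongside the robot data.
        return Routine(True, [("robot_actions", 0.5), ("web_vlm_data", 0.5)])
    raise ValueError(f"unknown routine: {name}")

if __name__ == "__main__":
    for name in ("scratch", "fine_tune", "co_fine_tune"):
        print(name, build_routine(name))
```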
2307.15337 | 39 | Harrison Chase. LangChain, October 2022. URL https://github.com/hwchase17/langchain.
Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318, 2023a.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022.
Zhaodong Chen, Zheng Qu, Yuying Quan, Liu Liu, Yufei Ding, and Yuan Xie. Dynamic N:M fine-grained structured sparse attention mechanism. In Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming, pp. 369–379, 2023b. | 2307.15337#39 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 39 | Design to Address Health Inequalities. Given that some Replika users are from marginalized communities (e.g. LGBTQI+), future research needs to address how LLMs affect users from such backgrounds. Unequal access to healthcare is a complex social issue that arises from a multitude of factors, resulting in disparities in the quality and availability of healthcare services for different segments of the population34â36. In addition, health informatics interventions are at risk of amplifying existing health disparities by disproportionately benefiting groups that already possess health-related advantages and excluding those who may need more care37. In our study, witnessing marginalized populations rely on accessible mental wellness tools like Replika for care exemplifies the prevailing social inequality in healthcare access. Thus, we advocate for comprehensive and rigorous research that thoroughly examines the consequences of LLM-based CAs on marginalized populations, fostering a deeper understanding of user demographics and the specific effects these mental wellness support applications have on these communities, ultimately promoting equitable and inclusive mental healthcare solutions. | 2307.15810#39 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 39 | The results of the ablations are presented in Figure 6b and Appendix Table 6. First, we observe that training a very large model from scratch results in very poor performance even for the 5B model. Given this result, we skip the evaluation of the even bigger 55B PaLI-X model trained from scratch. Second, we notice that co-fine-tuning a model (regardless of its size) yields better generalization performance than simply fine-tuning it with robotic data. We attribute this to the fact that keeping the original VLM data in the mixture during fine-tuning prevents the model from forgetting the concepts it learned during VLM training. Lastly, and somewhat unsurprisingly, we notice that increasing the size of the model results in better generalization performance.
# 4.4. Can RT-2 exhibit signs of chain-of-thought reasoning similarly to vision-language models? | 2307.15818#39 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 40 | Zhihong Chen, Junying Chen, Hongbo Zhang, Feng Jiang, Guiming Chen, Fei Yu, Tiannan Wang, Juhao Liang, Chen Zhang, Zhiyi Zhang, Jianquan Li, Xiang Wan, Haizhou Li, and Benyou Wang. Llm zoo: democratizing chatgpt. https://github.com/FreedomIntelligence/LLMZoo, 2023c.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. | 2307.15337#40 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 40 | Conclusion Large language model (LLM) based conversational agents have increasingly been utilized for mental well-being support, but the consequences of such usage remain unclear. To better understand the benefits and challenges of employing LLMs in this context, we conducted a qualitative investigation, analyzing 120 posts and 2917 user comments from the top subreddit dedicated to LLM-driven mental health support applications (r/Replika). Our findings suggest that the application offers users on-demand, non-judgmental support, fostering confidence and self-discovery. However, several challenges emerged, including the inability of the app to control harmful content, maintain consistent communication styles, retain new information, and prevent users from becoming overly reliant on the platform for mental support. Additionally, users experienced stigma associated with using AI companions, which may further isolate them from social communities. Based on our analysis, we strongly advocate for future
researchers and designers to carefully assess the appropriateness of employing LLMs for mental wellness support. This will help ensure their responsible and effective application in promoting mental well-being.
References
1. Mental health [Internet]. [cited 2023 Mar 14]. Available from:
https://www.who.int/news-room/fact-sheets/detail/mental-health-strengthening-our-response | 2307.15810#40 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 40 | # 4.4. Can RT-2 exhibit signs of chain-of-thought reasoning similarly to vision-language models?
Inspired by the chain-of-thought prompting method in LLMs (Wei et al., 2022), we fine-tune a variant of RT-2 with PaLM-E for just a few hundred gradient steps to increase its capability of using language and actions jointly, with the hope that this elicits more sophisticated reasoning behavior. We augment the data to include an additional "Plan" step, which first describes, in natural language, the purpose of the action the robot is about to take, and is then followed by the actual action tokens, e.g., "Instruction: I'm hungry. Plan: pick rxbar chocolate. Action: 1 128 124 136 121 158 111 255." This data augmentation scheme acts as a bridge between VQA datasets (visual reasoning) and manipulation datasets (generating actions). | 2307.15818#40 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
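To make the chain-of-thought data augmentation described in the chunk above concrete, here is a minimal sketch of how one augmented training string could be assembled. The `format_cot_example` helper is a hypothetical name introduced here, and the interpretation of the eight integers as one discretized action vector is an assumption for illustration; only the "Instruction: ... Plan: ... Action: ..." string format follows the text.

```python
from typing import List

def format_cot_example(instruction: str, plan: str, action_bins: List[int]) -> str:
    """Render one chain-of-thought-augmented example in the string format
    quoted above: 'Instruction: ... Plan: ... Action: <action tokens>'."""
    action_str = " ".join(str(b) for b in action_bins)
    return f"Instruction: {instruction} Plan: {plan} Action: {action_str}"

# Example from the text above. The eight integers are assumed (for illustration)
# to be the discretized action dimensions that the VLA model emits as plain text.
example = format_cot_example(
    instruction="I'm hungry.",
    plan="pick rxbar chocolate.",
    action_bins=[1, 128, 124, 136, 121, 158, 111, 255],
)
print(example)
# Instruction: I'm hungry. Plan: pick rxbar chocolate. Action: 1 128 124 136 121 158 111 255
```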
2307.15337 | 41 | Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.
Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. Advances in neural information processing systems, 27, 2014.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320–335, 2022. | 2307.15337#41 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 41 | https://www.who.int/news-room/fact-sheets/detail/mental-health-strengthening-our-response
2. Mental Health and Public Health | Online Masters in Public Health [Internet]. [cited 2023 Mar 14]. Available from: https://mphdegree.usc.edu/blog/mental-illness-and-public-health/
3. Loneliness in America: How the Pandemic Has Deepened an Epidemic of Loneliness [Internet]. Making Caring Common. 2021 [cited 2023 Mar 14]. Available from: https://mcc.gse.harvard.edu/reports/loneliness-in-america
4. Researchers Call for Improved Infrastructure to Address Research Staff's Mental Health and Well-being [Internet]. [cited 2023 Mar 14]. Available from: https://ysph.yale.edu/news-article/ysph-researchers-call-for-improved-infrastructure-to-address-research-staffs-mental-health-and-well-being/
5. Caldeira C, Chen Y, Chan L, Pham V, Chen Y, Zheng K. Mobile apps for mood tracking: an analysis of features and user reviews. AMIA Annu Symp Proc AMIA Symp. 2017;2017:495–504. | 2307.15810#41 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 41 | We qualitatively observe that RT-2 with chain-of-thought reasoning is able to answer more sophisticated commands because it is first given a place to plan its actions in natural language. This is a promising direction that provides some initial evidence that using LLMs or VLMs as planners (Ahn et al., 2022; Driess et al., 2023) can be combined with low-level policies in a single VLA model. Rollouts of RT-2 with chain-of-thought reasoning are shown in Figure 7 and in Appendix I.
# RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 2307.15818#41 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 42 | Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. The Journal of Machine Learning Research, 20(1):1997–2017, 2019.
Jiarui Fang, Yang Yu, Chengduo Zhao, and Jie Zhou. Turbotransformers: an efficient gpu serving system for transformer models. In Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 389–402, 2021.
William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1):5232–5270, 2022.
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.
Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Hassan Sajjad, Preslav Nakov, Deming Chen, and Marianne Winslett. Compressing large-scale transformer-based models: A case study on bert. Transactions of the Association for Computational Linguistics, 9:1061–1080, 2021. | 2307.15337#42 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 42 | 6. Chandrashekar P. Do mental health mobile apps work: evidence and recommendations for designing high-efficacy mental health mobile apps. mHealth [Internet]. 2018 Mar [cited 2023 Mar 14];4:6–6. Available from: http://mhealth.amegroups.com/article/view/18848/18909
7. Neary M, Schueller SM. State of the Field of Mental Health Apps. Cogn Behav Pract [Internet]. 2018 Nov [cited 2023 Mar 14];25(4):531–7. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1077722918300233
8. Qu C, Sas C, Doherty G. Exploring and Designing for Memory Impairments in Depression. In: Proceedings of | 2307.15810#42 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 42 | [Figure 7 panels: each shows a prompt of the form "Prompt: Given <img> Instruction: ..." with the model's predicted plan and action tokens. Examples: "Move all the objects together" → Plan: move green can near green rice chip bag; "Pick the object that is different from all other objects" → Plan: pick rxbar chocolate; "Move the green objects together" → Plan: move green can near green rice chip bag; "I need to hammer a nail, what object from the scene might be useful?" → Prediction: Rocks.]
Figure 7 | Rollouts of RT-2 with chain-of-thought reasoning, where RT-2 generates both a plan and an action.
# 5. Limitations | 2307.15818#42 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 43 | Joao Gante. Assisted generation: a new direction toward low-latency text generation. https://huggingface.co/blog/assisted-generation, 2023. Accessed: 2023-06-23.
Google. Tensorflow serving, 2021. URL https://github.com/tensorflow/serving.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. Non-autoregressive neural machine translation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1l8BtlCb.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
HazyResearch. Data-centric ai. https://github.com/HazyResearch/data-centric-ai, 2023. Accessed: 2023-07-04. | 2307.15337#43 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 43 | 8. Qu C, Sas C, Doherty G. Exploring and Designing for Memory Impairments in Depression. In: Proceedings of
the 2019 CHI Conference on Human Factors in Computing Systems [Internet]. Glasgow, Scotland, UK: ACM; 2019 [cited 2021 Nov 10]. p. 1–15. Available from: https://dl.acm.org/doi/10.1145/3290605.3300740
9. Su Z, Schneider JA, Young SD. The Role of Conversational Agents for Substance Use Disorder in Social Distancing Contexts. Subst Use Misuse [Internet]. 2021 Sep 19 [cited 2023 Mar 14];56(11):1732–5. Available from: https://www.tandfonline.com/doi/full/10.1080/10826084.2021.1949609
10. Stiles-Shields C, Montague E, Lattie EG, Kwasny MJ, Mohr DC. What might get in the way: Barriers to the use of apps for depression. Digit Health [Internet]. 2017 Jan [cited 2023 Mar 14];3:205520761771382. Available from: http://journals.sagepub.com/doi/10.1177/2055207617713827 | 2307.15810#43 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 43 | Figure 7 | Rollouts of RT-2 with chain-of-thought reasoning, where RT-2 generates both a plan and an action.
# 5. Limitations
Even though RT-2 exhibits promising generalization properties, there are multiple limitations of this approach. First, although we show that including web-scale pretraining via VLMs boosts generalization over semantic and visual concepts, the robot does not acquire any ability to perform new motions by virtue of including this additional experience. The model's physical skills are still limited to the distribution of skills seen in the robot data (see Appendix G), but it learns to deploy those skills in new ways. We believe this is a result of the dataset not being varied enough along the axes of skills. An exciting direction for future work is to study how new skills could be acquired through new data collection paradigms such as videos of humans. | 2307.15818#43 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 44 | HazyResearch. Data-centric AI. https://github.com/HazyResearch/data-centric-ai, 2023. Accessed: 2023-07-04.
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. arXiv preprint arXiv:2210.11610, 2022.
Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32, 2019.
Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Shigang Li, and Torsten Hoefler. Data movement is all you need: A case study on optimizing transformers. Proceedings of Machine Learning and Systems, 3:711–732, 2021.
Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451, 2020. | 2307.15337#44 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 44 | 11. Huang M, Zhu X, Gao J. Challenges in Building Intelligent Open-domain Dialog Systems. ACM Trans Inf Syst [Internet]. 2020 Jul 31 [cited 2023 Mar 14];38(3):1–32. Available from: https://dl.acm.org/doi/10.1145/3383123 12. Su Z, Figueiredo MC, Jo J, Zheng K, Chen Y. Analyzing Description, User Understanding and Expectations of
AI in Mobile Health Applications. AMIA Annu Symp Proc AMIA Symp. 2020;2020:1170–9.
13. Liu P, Yuan W, Fu J, Jiang Z, Hayashi H, Neubig G. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Comput Surv [Internet]. 2023 Sep 30 [cited 2023 Mar 14];55(9):1–35. Available from: https://dl.acm.org/doi/10.1145/3560815 | 2307.15810#44 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 44 | Second, although we showed we could run large VLA models in real time, the computation cost of these models is high, and as these methods are applied to settings that demand high-frequency control, real-time inference may become a major bottleneck. An exciting direction for future research is to explore quantization and distillation techniques that might enable such models to run at higher rates or on lower-cost hardware. This is also connected to another current limitation in that there are only a small number of generally available VLM models that can be used to create RT-2. We hope that more open-sourced models will become available (e.g. https://llava-vl.github.io/) and the proprietary ones will open up their fine-tuning APIs, which is a sufficient requirement to build VLA models.
# 6. Conclusions | 2307.15818#44 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
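The limitations chunk in the record above names quantization and distillation as possible routes to cheaper, faster inference for large VLA models. As a generic illustration only (not RT-2's deployment recipe), the snippet below applies PyTorch post-training dynamic quantization to the linear layers of a toy module; the layer sizes are made up for the example.

```python
import torch
import torch.nn as nn

# Toy stand-in for a large policy backbone; any nn.Module with Linear layers works.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 256),
)
model.eval()

# Post-training dynamic quantization: Linear weights are stored in int8 and
# dequantized on the fly, reducing memory and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    x = torch.randn(1, 512)
    print(quantized(x).shape)  # torch.Size([1, 256])
```

Dynamic quantization is the lowest-effort variant because it needs no calibration data; static quantization or distillation into a smaller student model would typically be explored next when higher control frequencies are required.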
2307.15337 | 45 | Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018.
Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. arXiv preprint arXiv:2309.06180, 2023. | 2307.15337#45 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 45 | 14. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, et al. Language Models are Few-Shot Learners. In: Larochelle H, Ranzato M, Hadsell R, Balcan MF, Lin H, editors. Advances in Neural Information Processing Systems [Internet]. Curran Associates, Inc.; 2020. p. 1877–901. Available from: https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
15. Pilault J, Li R, Subramanian S, Pal C. On Extractive and Abstractive Neural Document Summarization with Transformer Language Models. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) [Internet]. Online: Association for Computational Linguistics; 2020 [cited 2023 Mar 14]. p. 9308–19. Available from: https://www.aclweb.org/anthology/2020.emnlp-main.748
16. Chen M, Tworek J, Jun H, Yuan Q, Pinto HP de O, Kaplan J, et al. Evaluating Large Language Models Trained | 2307.15810#45 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 45 | # 6. Conclusions
In this paper, we described how vision-language-action (VLA) models could be trained by combining vision-language model (VLM) pretraining with robotic data. We then presented two instantiations of VLAs based on PaLM-E and PaLI-X, which we call RT-2-PaLM-E and RT-2-PaLI-X. These models are co-fine-tuned with robotic trajectory data to output robot actions, which are represented as text tokens. We showed that our approach results in very performant robotic policies and, more importantly, leads to a significantly better generalization performance and emergent capabilities inherited from
web-scale vision-language pretraining. We believe that this simple and general approach shows a promise of robotics directly benefiting from better vision-language models, which puts the field of robot learning in a strategic position to further improve with advancements in other fields.
# Acknowledgments | 2307.15818#45 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
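The conclusions chunk in the record above notes that RT-2 emits robot actions as text tokens. A minimal sketch of one such action-to-text encoding is given below: each dimension of a continuous action vector is clipped, discretized into one of 256 uniform bins, and written out as an integer string. The bin count, value range, and output format are illustrative assumptions rather than the exact RT-2 tokenizer.

```python
import numpy as np

NUM_BINS = 256  # assumed discretization resolution


def action_to_token_string(action: np.ndarray, low: float = -1.0, high: float = 1.0) -> str:
    """Clip each action dimension, discretize it into [0, NUM_BINS), and render as text."""
    clipped = np.clip(action, low, high)
    bins = np.round((clipped - low) / (high - low) * (NUM_BINS - 1)).astype(int)
    return " ".join(str(b) for b in bins)


def token_string_to_action(tokens: str, low: float = -1.0, high: float = 1.0) -> np.ndarray:
    """Invert the mapping: integer tokens back to continuous values (bin centers)."""
    bins = np.array([int(t) for t in tokens.split()], dtype=float)
    return low + bins / (NUM_BINS - 1) * (high - low)


# Example: a 7-DoF action (x, y, z, roll, pitch, yaw, gripper)
action = np.array([0.1, -0.2, 0.05, 0.0, 0.3, -0.1, 1.0])
text = action_to_token_string(action)
print(text)                          # a space-separated string of seven integers in [0, 255]
print(token_string_to_action(text))  # approximately recovers the original action
```

Because the encoded actions are ordinary strings of number tokens, they can sit in the same output vocabulary as natural-language answers, which is what allows a single model to be co-fine-tuned on both kinds of data.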
2307.15337 | 46 | Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. {GS}hard: Scaling giant models with conditional computation and automatic sharding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=qrwe7XHTmYb.
Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.
Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. arXiv preprint arXiv:2211.17192, 2022.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society, 2023a.
Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021. | 2307.15337#46 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 46 | 16. Chen M, Tworek J, Jun H, Yuan Q, Pinto HP de O, Kaplan J, et al. Evaluating Large Language Models Trained
on Code [Internet]. arXiv; 2021 [cited 2023 Mar 14]. Available from: http://arxiv.org/abs/2107.03374 17. Replika - Virtual AI Friend [Internet]. App Store. 2023 [cited 2023 Mar 14]. Available from: https://apps.apple.com/us/app/replika-virtual-ai-friend/id1158555867
18. Crasto R, Dias L, Miranda D, Kayande D. CareBot: A Mental Health ChatBot. In: 2021 2nd International Conference for Emerging Technology (INCET) [Internet]. Belagavi, India: IEEE; 2021 [cited 2023 Mar 14]. p. 1–5. Available from: https://ieeexplore.ieee.org/document/9456326/
19. O'Leary K. Human–AI collaboration boosts mental health support. Nat Med [Internet]. 2023 Feb 27 [cited 2023 Mar 14];d41591-023-00022-w. Available from: https://www.nature.com/articles/d41591-023-00022-w
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 46 | # Acknowledgments
We would like to acknowledge Fred Alcober, Jodi Lynn Andres, Carolina Parada, Joseph Dabis, Rochelle Dela Cruz, Jessica Gomez, Gavin Gonzalez, John Guilyard, Tomas Jackson, Jie Tan, Scott Lehrer, Dee M, Utsav Malla, Sarah Nguyen, Jane Park, Emily Perez, Elio Prado, Jornell Quiambao, Clayton Tan, Jodexty Therlonge, Eleanor Tomlinson, Wenxuan Zhou, and the greater Google DeepMind team for their feedback and contributions.
# References
M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. | 2307.15818#46 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 47 | Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 2023b.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315–5333, 2023c.
Zhuohan Li, Siyuan Zhuang, Shiyuan Guo, Danyang Zhuo, Hao Zhang, Dawn Song, and Ion Stoica. Terapipe: Token-level pipeline parallelism for training large-scale language models. In International Conference on Machine Learning, pp. 6543–6552. PMLR, 2021.
| 2307.15337#47 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 47 | 20. Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the Dangers of Stochastic Parrots: Can Language
Models Be Too Big? 🦜. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency [Internet]. Virtual Event Canada: ACM; 2021 [cited 2023 Mar 14]. p. 610–23. Available from: https://dl.acm.org/doi/10.1145/3442188.3445922
21. Admin. What You Need to Know About...Replika [Internet]. Ineqe Safeguarding Group. 2022 [cited 2023 Mar 20]. Available from: https://ineqe.com/2022/01/20/replika-ai-friend/
22. Hussain MI, Figueiredo MC, Tran BD, Su Z, Molldrem S, Eikey EV, et al. A scoping review of qualitative research in JAMIA: past contributions and opportunities for future work. J Am Med Inform Assoc [Internet]. 2021 Feb 15 [cited 2023 Mar 14];28(2):402–13. Available from: https://academic.oup.com/jamia/article/28/2/402/5998477
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 47 | J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds, et al. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022.
R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022. | 2307.15818#47 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 48 | Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. Awq: Activation-aware weight quantization for llm compression and acceleration. arXiv preprint arXiv:2306.00978, 2023.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35, 2023.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach, 2019.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. | 2307.15337#48 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 48 | 23. Ancker JS, Benda NC, Reddy M, Unertl KM, Veinot T. Guidance for publishing qualitative research in informatics. J Am Med Inform Assoc [Internet]. 2021 Nov 25 [cited 2023 Mar 20];28(12):2743–8. Available from: https://academic.oup.com/jamia/article/28/12/2743/6372394
24. Reynolds TL, Zhang J, Zheng K, Chen Y. Unpacking the Use of Laboratory Test Results in an Online Health Community throughout the Medical Care Trajectory. Proc ACM Hum-Comput Interact [Internet]. 2022 Nov 7 [cited 2023 Mar 14];6(CSCW2):1–32. Available from: https://dl.acm.org/doi/10.1145/3555086
25. Chikersal P, Belgrave D, Doherty G, Enrique A, Palacios JE, Richards D, et al. Understanding Client Support Strategies to Improve Clinical Outcomes in an Online Mental Health Intervention. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems [Internet]. Honolulu HI USA: ACM; 2020 [cited 2021 Nov 10]. p. 1–16. Available from: https://dl.acm.org/doi/10.1145/3313831.3376341
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 48 | T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
D. Cer, Y. Yang, S. Kong, N. Hua, N. Limtiaco, R. S. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C. Tar, Y. Sung, B. Strope, and R. Kurzweil. Universal sentence encoder. CoRR, abs/1803.11175, 2018. URL http://arxiv.org/abs/1803.11175.
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. | 2307.15818#48 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 49 | Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
Wenyan Lu, Guihai Yan, Jiajun Li, Shijun Gong, Yinhe Han, and Xiaowei Li. Flexflow: A flexible dataflow accelerator architecture for convolutional neural networks. In 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA), pp. 553–564. IEEE, 2017.
Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Rae Ying Yee Wong, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, and Zhihao Jia. Specinfer: Accelerating generative llm serving with speculative inference and token tree verification. arXiv preprint arXiv:2305.09781, 2023.
Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, and Paulius Micikevicius. Accelerating sparse deep neural networks. arXiv preprint arXiv:2104.08378, 2021. | 2307.15337#49 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
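The Skeleton-of-Thought summary in the row above describes a two-stage procedure: first ask for a skeleton of the answer, then expand every point in parallel. The sketch below captures only that control flow; `llm_call` is a stand-in for any completion API (not a real library function), and the prompts are simplified paraphrases rather than the paper's templates.

```python
import asyncio


async def llm_call(prompt: str) -> str:
    # Placeholder for an asynchronous request to an LLM API endpoint.
    await asyncio.sleep(0.1)
    return f"<completion for: {prompt[:40]}...>"


async def skeleton_of_thought(question: str) -> str:
    # Stage 1: request a short skeleton (a few point titles).
    skeleton = await llm_call(f"List 3-10 short points for answering: {question}")
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand all points concurrently instead of decoding one long answer.
    expansions = await asyncio.gather(
        *(llm_call(f"Question: {question}\nExpand this point briefly: {p}") for p in points)
    )
    return "\n".join(expansions)


if __name__ == "__main__":
    print(asyncio.run(skeleton_of_thought("How can I improve my time management?")))
```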
2307.15810 | 49 | 26. Burnard P. A method of analysing interview transcripts in qualitative research. Nurse Educ Today [Internet].
1991 Dec [cited 2023 Mar 20];11(6):461–6. Available from: https://linkinghub.elsevier.com/retrieve/pii/026069179190009Y
27. Williams M, Moser T. The Art of Coding and Thematic Exploration in Qualitative Research. Int Manag Rev [Internet]. 2019;15(1):45-55,71-72. Available from: http://www.library.hbs.edu/intra/go/abi.html?url=http://search.proquest.com/scholarly-journals/art-coding-them atic-exploration-qualitative/docview/2210886420/se-2?accountid=34924
28. Prepare for truly useful large language models. Nat Biomed Eng [Internet]. 2023 Feb 1;7(2):85–6. Available from: https://doi.org/10.1038/s41551-023-01012-6 | 2307.15810#49 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 49 | X. Chen, J. Djolonga, P. Padlewski, B. Mustafa, S. Changpinyo, J. Wu, C. R. Ruiz, S. Goodman, X. Wang, Y. Tay, S. Shakeri, M. Dehghani, D. Salz, M. Lucic, M. Tschannen, A. Nagrani, H. Hu, M. Joshi, B. Pang, C. Montgomery, P. Pietrzyk, M. Ritter, A. Piergiovanni, M. Minderer, F. Pavetic, A. Waters, G. Li, I. Alabdulmohsin, L. Beyer, J. Amelot, K. Lee, A. P. Steiner, Y. Li, D. Keysers, A. Arnab, Y. Xu, K. Rong, A. Kolesnikov, M. Seyedhosseini, A. Angelova, X. Zhai, N. Houlsby, and R. Soricut. Pali-x: On scaling up a multilingual vision and language model, 2023a. | 2307.15818#49 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
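The phrase "we express the actions as text tokens" in the RT-2 summary in the row above can be pictured with a simple discretization. The bin count, action bounds, and space-separated integer format below are assumptions made for illustration; the model's real tokenization details are not reproduced here.

```python
import numpy as np


def action_to_text(action, low=-1.0, high=1.0, bins=256):
    """Discretize each action dimension into an integer bin and render it as text."""
    clipped = np.clip(np.asarray(action, dtype=float), low, high)
    ids = np.round((clipped - low) / (high - low) * (bins - 1)).astype(int)
    return " ".join(str(i) for i in ids)


def text_to_action(text, low=-1.0, high=1.0, bins=256):
    """Invert the mapping so a controller could execute the predicted string."""
    ids = np.array([int(tok) for tok in text.split()], dtype=float)
    return ids / (bins - 1) * (high - low) + low


if __name__ == "__main__":
    action = [0.1, -0.4, 0.9, 0.0, 0.25, -1.0, 1.0]   # e.g. a 7-dof end-effector command
    encoded = action_to_text(action)
    print(encoded)                  # a string of integer bin ids
    print(text_to_action(encoded))  # approximately recovers the original vector
```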
2307.15337 | 50 | Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R Devanur, Gregory R Ganger, Phillip B Gibbons, and Matei Zaharia. Pipedream: Generalized pipeline parallelism for DNN training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, pp. 1–15, 2019.
Deepak Narayanan, Amar Phanishayee, Kaiyu Shi, Xie Chen, and Matei Zaharia. Memory-efficient pipeline-parallel DNN training. In International Conference on Machine Learning, pp. 7937–7947. PMLR, 2021.
NVIDIA. Fastertransformer, 2019. URL https://github.com/NVIDIA/FasterTransformer.
NVIDIA. Triton inference server, 2021. URL https://developer.nvidia.com/triton-inference-server.
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. | 2307.15337#50 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
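For open models that cannot be queried through parallel API calls, the Skeleton-of-Thought summary in the row above mentions batched decoding as the alternative. Below is a minimal sketch using the Hugging Face transformers library; gpt2 is used only as a stand-in checkpoint, and the prompts are simplified, not the paper's templates.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token        # gpt2 has no dedicated pad token
tok.padding_side = "left"            # left-pad so generation continues each prompt
model = AutoModelForCausalLM.from_pretrained("gpt2")

question = "How can I improve my time management?"
points = ["Set clear priorities", "Plan the week in a calendar", "Limit distractions"]
prompts = [f"Question: {question}\nExpand this point briefly: {p}\nAnswer:" for p in points]

# All point-expansion requests are decoded together in one padded batch.
inputs = tok(prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=60, pad_token_id=tok.eos_token_id)

for text in tok.batch_decode(out, skip_special_tokens=True):
    print(text, "\n---")
```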
2307.15810 | 50 | 29. Cheng X, Zhang X, Cohen J, Mou J. Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Inf Process Manag [Internet]. 2022 May [cited 2023 Mar 20];59(3):102940. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0306457322000620
30. Youn S, Jin SV. “In A.I. we trust?” The effects of parasocial interaction and technopian versus luddite ideological views on chatbot-based customer relationship management in the emerging “feeling economy.” Comput Hum Behav [Internet]. 2021 Jun [cited 2023 Mar 20];119:106721. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0747563221000431
31. Knaak S, Mantler E, Szeto A. Mental illness-related stigma in healthcare: Barriers to access and care and evidence-based solutions. Healthc Manage Forum. 2017 Mar;30(2):111–6. | 2307.15810#50 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 50 | X. Chen, X. Wang, S. Changpinyo, A. Piergiovanni, P. Padlewski, D. Salz, S. Goodman, A. Grycner, B. Mustafa, L. Beyer, A. Kolesnikov, J. Puigcerver, N. Ding, K. Rong, H. Akbari, G. Mishra, L. Xue, A. Thapliyal, J. Bradbury, W. Kuo, M. Seyedhosseini, C. Jia, B. K. Ayan, C. Riquelme, A. Steiner, A. Angelova, X. Zhai, N. Houlsby, and R. Soricut. Pali: A jointly-scaled multilingual language-image model, 2023b.
K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. | 2307.15818#50 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 51 | OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744, 2022.
Duy Phung. Stablevicuna-13b, May 2023. URL https://huggingface.co/CarperAI/stable-vicuna-13b-delta.
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022.
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–16. IEEE, 2020. | 2307.15337#51 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
32. National Academies of Sciences, Engineering, and Medicine. Ending Discrimination Against People with Mental and Substance Use Disorders: The Evidence for Stigma Change [Internet]. Washington (DC): National Academies Press (US); 2016. 4 p. Available from: https://www.ncbi.nlm.nih.gov/books/NBK384914/
33. Jo E, Epstein DA, Jung H, Kim YH. Understanding the Benefits and Challenges of Deploying Conversational AI Leveraging Large Language Models for Public Health Intervention. Proc 2023 CHI Conf Hum Factors Comput Syst. 2023 Mar 20;16. | 2307.15810#51 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 51 | Z. J. Cui, Y. Wang, N. Muhammad, L. Pinto, et al. From play to policy: Conditional behavior generation from uncurated robot data. arXiv preprint arXiv:2210.10047, 2022.
S. Dasari and A. Gupta. Transformers for one-shot visual imitation. In Conference on Robot Learning, pages 2071–2084. PMLR, 2021.
S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, and C. Finn. Robonet: Large-scale multi-robot learning. In Conference on Robot Learning, 2019.
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 2307.15818#51 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 52 | Jie Ren, Samyam Rajbhandari, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, and Yuxiong He. ZeRO-Offload: Democratizing Billion-Scale model training. In 2021 USENIX Annual Technical Conference (USENIX ATC 21), pp. 551–564, 2021.
Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, and Emanuele Rodolà. Accelerating transformer inference for translation via parallel decoding. In ACL, 2023.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
SenseTime. Lightllm. https://github.com/ModelTC/lightllm, 2023a. Accessed: 2023-09-26. | 2307.15337#52 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 52 | 34. Veinot TC, Ancker JS, Cole-Lewis H, Mynatt ED, Parker AG, Siek KA, et al. Leveling Up: On the Potential of Upstream Health Informatics Interventions to Enhance Health Equity. Med Care [Internet]. 2019 Jun [cited 2023 Mar 14];57(Suppl 2):S108–14. Available from: https://journals.lww.com/00005650-201906001-00005 35. Su Z, He L, Jariwala SP, Zheng K, Chen Y. “What is Your Envisioned Future?”: Toward Human-AI Enrichment in Data Work of Asthma Care. Proc ACM Hum-Comput Interact [Internet]. 2022 Nov 7 [cited 2023 Mar 14];6(CSCW2):1–28. Available from: https://dl.acm.org/doi/10.1145/3555157 | 2307.15810#52 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 52 |
M. Dehghani, J. Djolonga, B. Mustafa, P. Padlewski, J. Heek, J. Gilmer, A. Steiner, M. Caron, R. Geirhos, I. Alabdulmohsin, R. Jenatton, L. Beyer, M. Tschannen, A. Arnab, X. Wang, C. Riquelme, M. Minderer, J. Puigcerver, U. Evci, M. Kumar, S. van Steenkiste, G. F. Elsayed, A. Mahendran, F. Yu, A. Oliver, F. Huot, J. Bastings, M. P. Collier, A. Gritsenko, V. Birodkar, C. Vasconcelos, Y. Tay, T. Mensink, A. Kolesnikov, F. PavetiÄ, D. Tran, T. Kipf, M. LuÄiÄ, X. Zhai, D. Keysers, J. Harmsen, and N. Houlsby. Scaling vision transformers to 22 billion parameters, 2023. | 2307.15818#52 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 53 | SenseTime. Lightllm. https://github.com/ModelTC/lightllm, 2023a. Accessed: 2023-09-26.
SenseTime. Openppl. https://github.com/openppl-public/ppl.nn, 2023b. Accessed: 2023-09-26.
Noam Shazeer. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150, 2019.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving AI tasks with ChatGPT and its friends in Huggingface. arXiv preprint arXiv:2303.17580, 2023.
Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y Fu, Zhiqiang Xie, Beidi Chen, Clark Barrett, Joseph E Gonzalez, et al. High-throughput generative inference of large language models with a single gpu. arXiv preprint arXiv:2303.06865, 2023. | 2307.15337#53 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 53 | 36. Stowell E, Lyson MC, Saksono H, Wurth RC, Jimison H, Pavel M, et al. Designing and Evaluating mHealth Interventions for Vulnerable Populations: A Systematic Review. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems [Internet]. Montreal QC Canada: ACM; 2018 [cited 2023 Mar 14]. p. 1–17. Available from: https://dl.acm.org/doi/10.1145/3173574.3173589
37. Veinot TC, Mitchell H, Ancker JS. Good intentions are not enough: how informatics interventions can worsen inequality. J Am Med Inform Assoc JAMIA. 2018 Aug 1;25(8):1080–8. | 2307.15810#53 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 53 | D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023.
M. Du, S. Nair, D. Sadigh, and C. Finn. Behavior retrieval: Few-shot imitation learning by querying unlabeled datasets. arXiv preprint arXiv:2304.08742, 2023a.
Y. Du, K. Konyushkova, M. Denil, A. Raju, J. Landon, F. Hill, N. de Freitas, and S. Cabi. Vision-language models as success detectors. arXiv preprint arXiv:2303.07280, 2023b.
C. Finn and S. Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2786–2793. IEEE, 2017. | 2307.15818#53 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 54 | Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235, 2020.
Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. Blockwise parallel decoding for deep autoregressive models. Advances in Neural Information Processing Systems, 31, 2018.
Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, Felix Yu, Michael Riley, and Sanjiv Kumar. Spectr: Fast speculative decoding via optimal transport. In Workshop on Efficient Systems for Foundation Models @ ICML2023, 2023. URL https://openreview.net/forum?id=d0mGsaheuT.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826, 2016. | 2307.15337#54 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 54 | C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine. One-shot visual imitation learning via meta-learning. In Conference on robot learning, pages 357–368. PMLR, 2017.
R. A. Fisher. Design of experiments. British Medical Journal, 1(3923):554, 1936.
S. Y. Gadre, M. Wortsman, G. Ilharco, L. Schmidt, and S. Song. Clip on wheels: Zero-shot object navigation as object localization and exploration. arXiv preprint arXiv:2203.10421, 2022.
Z. Gan, L. Li, C. Li, L. Wang, Z. Liu, J. Gao, et al. Vision-language pre-training: Basics, recent advances, and future trends. Foundations and Trends® in Computer Graphics and Vision, 14(3–4):163–352, 2022.
G. Ghiasi, X. Gu, Y. Cui, and T.-Y. Lin. Open-vocabulary image segmentation. arXiv preprint arXiv:2112.12143, 2021. | 2307.15818#54 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 55 | Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori Hashimoto. Alpaca: A strong, replicable instruction-following model. https://crfm.stanford.edu/2023/03/13/alpaca.html, 2023. Accessed: 2023-06-23.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. | 2307.15337#55 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 55 | K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang, M. Liu, X. Liu, M. Martin, T. Nagarajan, I. Radosavovic, S. K. Ramakrishnan, F. Ryan, J. Sharma, M. Wray, M. Xu, E. Z. Xu, C. Zhao, S. Bansal, D. Batra, V. Cartillier, S. Crane, T. Do, M. Doulaty, A. Erapalli, C. Feichtenhofer, A. Fragomeni, Q. Fu, A. Gebreselasie, C. Gonzalez, J. Hillis, X. Huang, Y. Huang, W. Jia, W. Khoo, J. Kolar, S. Kottur, A. Kumar, F. Landini, C. Li, Y. Li, Z. Li, K. Mangalam, R. Modhugu, J. Munro, T. Murrell, T. Nishiyasu, W. Price, P. R. Puentes, M. Ramazanova, L. Sari, K. | 2307.15818#55 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 56 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 56 | T. Murrell, T. Nishiyasu, W. Price, P. R. Puentes, M. Ramazanova, L. Sari, K. Somasundaram, A. Southerland, Y. Sugano, R. Tao, M. Vo, Y. Wang, X. Wu, T. Yagi, Z. Zhao, Y. Zhu, P. Arbelaez, D. Crandall, D. Damen, G. M. Farinella, C. Fuegen, B. Ghanem, V. K. Ithapu, C. V. Jawahar, H. Joo, K. Kitani, H. Li, R. Newcombe, A. Oliva, H. S. Park, J. M. Rehg, Y. Sato, J. Shi, M. Z. Shou, A. Torralba, L. Torresani, M. Yan, and J. Malik. Ego4d: Around the world in 3,000 hours of egocentric video, 2022. | 2307.15818#56 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 57 | Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023b. | 2307.15337#57 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 57 | X. Gu, T.-Y. Lin, W. Kuo, and Y. Cui. Open-vocabulary object detection via vision and language knowledge distillation. arXiv preprint arXiv:2104.13921, 2021.
N. Hansen, R. Jangir, Y. Sun, G. Alenyà, P. Abbeel, A. A. Efros, L. Pinto, and X. Wang. Self-supervised policy adaptation during deployment. arXiv preprint arXiv:2007.04309, 2020.
Y. Hao, H. Song, L. Dong, S. Huang, Z. Chi, W. Wang, S. Ma, and F. Wei. Language models are general-purpose interfaces. arXiv preprint arXiv:2206.06336, 2022.
F. Hill, S. Mokra, N. Wong, and T. Harley. Human instruction-following with deep reinforcement learning via transfer-learning from text. arXiv preprint arXiv:2005.09382, 2020. | 2307.15818#57 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 58 | Guan Wang, Sijie Cheng, Qiying Yu, and Changling Liu. Openllms: Less is more for open-source models, July 2023a. URL https://github.com/imoneoi/openchat.
Hanrui Wang, Zhekai Zhang, and Song Han. Spatten: Efficient sparse attention architecture with cascade token and head pruning. In 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 97–110. IEEE, 2021.
Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. | 2307.15337#58 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 58 | learning via transfer-learning from text. arXiv preprint arXiv:2005.09382, 2020.
S. Huang, L. Dong, W. Wang, Y. Hao, S. Singhal, S. Ma, T. Lv, L. Cui, O. K. Mohammed, Q. Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023.
W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147. PMLR, 2022.
S. James, M. Bloesch, and A. J. Davison. Task-embedded control networks for few-shot imitation learning. In Conference on robot learning, pages 783–795. PMLR, 2018.
E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z: Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning, pages 991–1002. PMLR, 2021. | 2307.15818#58 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 59 | Zifu Wang, Teodora Popordanoska, Jeroen Bertels, Robin Lemmens, and Matthew B Blaschko. Dice semimetric losses: Optimizing the dice score with soft labels. In Medical Image Computing and Computer Assisted Intervention, 2023b.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. Advances in neural information processing systems, 29, 2016.
Smoothquant: Accurate and efficient post-training quantization for large language models. arXiv preprint arXiv:2211.10438, 2022. | 2307.15337#59 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 59 | Y. Jiang, A. Gupta, Z. Zhang, G. Wang, Y. Dou, Y. Chen, L. Fei-Fei, A. Anandkumar, Y. Zhu, and L. Fan. Vima: General robot manipulation with multimodal prompts. arXiv preprint arXiv:2210.03094, 2022.
L. P. Kaelbling. The foundation of efficient robot learning. Science, 369(6506):915–916, 2020.
S. Karamcheti, S. Nair, A. S. Chen, T. Kollar, C. Finn, D. Sadigh, and P. Liang. Language-driven representation learning for robotics. arXiv preprint arXiv:2302.12766, 2023.
A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023.
I. Kostrikov, D. Yarats, and R. Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. arXiv preprint arXiv:2004.13649, 2020. | 2307.15818#59 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 60 | Smoothquant: Accurate and efficient post-training quantization for large language models. arXiv preprint arXiv:2211.10438, 2022.
Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, Min Zhang, Tao Qin, and Tie-yan Liu. A survey on non-autoregressive generation for neural machine translation and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
Yuanzhong Xu, HyoukJoong Lee, Dehao Chen, Blake Hechtman, Yanping Huang, Rahul Joshi, Maxim Krikun, Dmitry Lepikhin, Andy Ly, Marcello Maggioni, et al. Gspmd: general and scalable parallelization for ml computation graphs. arXiv preprint arXiv:2105.04663, 2021. | 2307.15337#60 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 60 | M. Laskin, K. Lee, A. Stooke, L. Pinto, P. Abbeel, and A. Srinivas. Reinforcement learning with augmented data. Advances in neural information processing systems, 33:19884–19895, 2020a.
M. Laskin, A. Srinivas, and P. Abbeel. Curl: Contrastive unsupervised representations for reinforcement learning. In International Conference on Machine Learning, pages 5639–5650. PMLR, 2020b.
S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International journal of robotics research, 37(4-5):421–436, 2018.
A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, et al. Solving quantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858, 2022. | 2307.15818#60 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 61 | Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
Gyeong-In Yu, Joo Seong Jeong, Geon-Woo Kim, Soojeong Kim, and Byung-Gon Chun. Orca: A distributed serving system for Transformer-Based generative models. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), pp. 521–538, 2022.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. Big bird: Transformers for longer sequences. Advances in neural information processing systems, 33:17283–17297, 2020. | 2307.15337#61 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 61 | J. Li, D. Li, S. Savarese, and S. Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
L. H. Li, M. Yatskar, D. Yin, C.-J. Hsieh, and K.-W. Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019.
H. Liu, L. Lee, K. Lee, and P. Abbeel. Instruction-following agents with jointly pre-trained vision-language models. arXiv preprint arXiv:2210.13431, 2022.
J. Lu, D. Batra, D. Parikh, and S. Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32, 2019.
C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data. arXiv preprint arXiv:2005.07648, 2020. | 2307.15818#61 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 62 | Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, and Xia Hu. Data-centric artificial intelligence: A survey. arXiv preprint arXiv:2303.10158, 2023.
Yujia Zhai, Chengquan Jiang, Leyuan Wang, Xiaoying Jia, Shang Zhang, Zizhong Chen, Xin Liu, and Yibo Zhu. Bytetransformer: A high-performance transformer boosted for variable-length inputs. arXiv preprint arXiv:2210.03052, 2022.
Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew Chi-Chih Yao. Cumulative reasoning with large language models. arXiv preprint arXiv:2308.04371, 2023. | 2307.15337#62 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 62 | C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data. arXiv preprint arXiv:2005.07648, 2020.
C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P. Florence. Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407, 2022.
Y. J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V. Kumar, and A. Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint arXiv:2210.00030, 2022.
Y. J. Ma, W. Liang, V. Som, V. Kumar, A. Zhang, O. Bastani, and D. Jayaraman. Liv: Language-image representations and rewards for robotic control. arXiv preprint arXiv:2306.00958, 2023. | 2307.15818#62 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
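The RT-2 summary above turns on one representational choice: continuous robot actions are written out as text tokens so that a vision-language model can emit them like ordinary words. The sketch below illustrates that choice with a simple per-dimension binning scheme; the bin count, value range, and formatting are assumptions for illustration and are not RT-2's actual action vocabulary.

```python
# Illustrative sketch: map a continuous action vector to a short token string
# by uniform binning, and invert the mapping. The bin count, value range, and
# formatting are assumptions for illustration, not RT-2's action vocabulary.
import numpy as np

NUM_BINS = 256
LOW, HIGH = -1.0, 1.0  # assumed normalized action range


def action_to_tokens(action: np.ndarray) -> str:
    clipped = np.clip(action, LOW, HIGH)
    bins = np.round((clipped - LOW) / (HIGH - LOW) * (NUM_BINS - 1)).astype(int)
    return " ".join(str(b) for b in bins)  # e.g. "140 76 159 ..." (one bin index per dimension)


def tokens_to_action(tokens: str) -> np.ndarray:
    bins = np.array([int(t) for t in tokens.split()], dtype=float)
    return bins / (NUM_BINS - 1) * (HIGH - LOW) + LOW


# Round trip on a 7-D action (x, y, z, roll, pitch, yaw, gripper):
a = np.array([0.1, -0.4, 0.25, 0.0, 0.05, -0.3, 1.0])
recovered = tokens_to_action(action_to_tokens(a))  # equals `a` up to quantization error
```

With such a mapping, an action is just another short string, so the same decoder that produces natural-language answers can also produce executable robot commands.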
2307.15337 | 63 | Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Eric P Xing, et al. Alpa: Automating inter- and intra-operator parallelism for distributed deep learning. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), pp. 559–578, 2022.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment, 2023. | 2307.15337#63 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
2307.15818 | 63 | J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312, 2017.
A. Majumdar, K. Yadav, S. Arnaud, Y. J. Ma, C. Chen, S. Silwal, A. Jain, V.-P. Berges, P. Abbeel, J. Malik, et al. Where are we in the search for an artificial visual cortex for embodied intelligence? arXiv preprint arXiv:2303.18240, 2023a.
A. Majumdar, K. Yadav, S. Arnaud, Y. J. Ma, C. Chen, S. Silwal, A. Jain, V.-P. Berges, P. Abbeel, J. Malik, et al. Where are we in the search for an artificial visual cortex for embodied intelligence? arXiv preprint arXiv:2303.18240, 2023b. | 2307.15818#63 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15337 | 64 | Zhe Zhou, Xuechao Wei, Jiejing Zhang, and Guangyu Sun. PetS: A unified framework for parameter-efficient transformers serving. In 2022 USENIX Annual Technical Conference (USENIX ATC 22), pp. 489–504, 2022.
Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, et al. Ghost in the minecraft: Generally capable agents for open-world enviroments via large language models with text-based knowledge and memory. arXiv preprint arXiv:2305.17144, 2023.
Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In Interna- tional Conference on Learning Representations (ICLR), 2017.
# Appendix
# Table of Contents | 2307.15337#64 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
2307.15818 | 64 | O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitation learning over unstructured data. IEEE Robotics and Automation Letters, 7(4):11205–11212, 2022.
M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Mahendran, A. Arnab, M. Dehghani, Z. Shen, et al. Simple open-vocabulary object detection with vision transformers. arXiv preprint arXiv:2205.06230, 2022.
Y. Mu, Q. Zhang, M. Hu, W. Wang, M. Ding, J. Jin, B. Wang, J. Dai, Y. Qiao, and P. Luo. Embodiedgpt: Vision-language pre-training via embodied chain of thought. arXiv preprint arXiv:2305.15021, 2023.
S. Nair, E. Mitchell, K. Chen, S. Savarese, C. Finn, et al. Learning language-conditioned robot behavior from offline data and crowd-sourced annotation. In Conference on Robot Learning, pages 1303–1315. PMLR, 2022a. | 2307.15818#64 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15337 | 65 | A Model Details; B Implementation Details of Skeleton-of-Thought; B.1 Prompt; B.2 Supporting Multi-Round Conversation; C Implementation Details of Skeleton-of-Thought with Router; C.1 Prompting Router; C.2 Trained Router; C.3 Router Consistency; C.4 Concurrent execution for SoT-R; D Related Work (Expanded); D.1 Efficient LLMs; D.2 Prompting Methods for LLMs; E Efficiency Analysis; F Efficiency Profiling; G Efficiency Evaluation; G.1 Skeleton-of-Thought; G.2 Skeleton-of-Thought with Router | 2307.15337#65 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
2307.15818 | 65 | S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3m: A universal visual representation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022b.
OpenAI. Gpt-4 technical report, 2023.
J. Pari, N. M. Shafiullah, S. P. Arunachalam, and L. Pinto. The surprising effectiveness of representation learning for visual imitation. arXiv preprint arXiv:2112.01511, 2021.
L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In 2016 IEEE international conference on robotics and automation (ICRA), pages 3406–3413. IEEE, 2016.
S. Polu, J. M. Han, K. Zheng, M. Baksys, I. Babuschkin, and I. Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344, 2022.
V. H. Pong, M. Dalal, S. Lin, A. Nair, S. Bahl, and S. Levine. Skew-fit: State-covering self-supervised | 2307.15818#65 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15337 | 66 | Profiling; G Efficiency Evaluation; G.1 Skeleton-of-Thought; G.2 Skeleton-of-Thought with Router; H Overhead of SoT in Different Scenarios; I Answer Quality Evaluation; I.1 Skeleton-of-Thought; I.2 Skeleton-of-Thought with Router; I.3 ChatGPT-3.5 as the Judge | 2307.15337#66 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
2307.15818 | 66 | reinforcement learning. arXiv preprint arXiv:1903.03698, 2019.
A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022.
M. Ryoo, A. Piergiovanni, A. Arnab, M. Dehghani, and A. Angelova. Tokenlearner: Adaptive space-time tokenization for videos. Advances in Neural Information Processing Systems, 34:12786–12797, 2021. | 2307.15818#66 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15818 | 67 | D. Shah, B. Osiński, b. ichter, and S. Levine. Lm-nav: Robotic navigation with large pre-trained models of language, vision, and action. In K. Liu, D. Kulic, and J. Ichnowski, editors, Proceedings of The 6th Conference on Robot Learning, volume 205 of Proceedings of Machine Learning Research, pages 492–504. PMLR, 14–18 Dec 2023. URL https://proceedings.mlr.press/v205/shah23b.html.
R. Shah and V. Kumar. Rrl: Resnet as representation for reinforcement learning. arXiv preprint arXiv:2107.03380, 2021.
M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. In Proceedings of the 5th Conference on Robot Learning (CoRL), 2021.
M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894–906. PMLR, 2022a. | 2307.15818#67 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15818 | 68 | M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. arXiv preprint arXiv:2209.05451, 2022b.
I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. Progprompt: Generating situated robot task plans using large language models. In ICRA, 2023.
M. H. Smith and L. S. Coles. Design of a low cost, general purpose robot. In IJCAI, pages 324–336, 1973.
A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia, C. Finn, et al. Open-world object manipulation using pre-trained vision-language models. arXiv preprint arXiv:2303.00905, 2023. | 2307.15818#68 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15337 | 69 | Table 1: Models evaluated with SoT. All the open-source models are fine-tuned from LLaMA models.
Columns: Access, Model Name, Institution, Released Date.
Open-Source: LLaMA2-Chat-7B (Touvron et al., 2023b), Meta & Microsoft, 2023/07
Open-Source: LLaMA2-Chat-13B (Touvron et al., 2023b), Meta & Microsoft, 2023/07
Open-Source: OpenChat-13B (Wang et al., 2023a), Tsinghua, 2023/07
Open-Source: Vicuna-7B V1.3 (Chiang et al., 2023), LMSYS, 2023/06
Open-Source: Vicuna-13B V1.3 (Chiang et al., 2023), LMSYS, 2023/06
Open-Source: Vicuna-33B V1.3 (Chiang et al., 2023), LMSYS, 2023/06
Open-Source: StableVicuna-13B (Phung, 2023), CarperAI, 2023/05
Open-Source: UltraLM-13B (Ding et al., 2023), OpenBMB & Tsinghua, 2023/05
Open-Source: Vicuna-7B V1.1 (Chiang et al., 2023), LMSYS, 2023/03
API-Based: Claude (Anthropic, 2023), Anthropic, 2023/05
API-Based: ChatGPT-3.5, OpenAI, 2022/11
API-Based: GPT-4, OpenAI, 2023/03 | 2307.15337#69 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
2307.15818 | 69 | T. Sumers, K. Marino, A. Ahuja, R. Fergus, and I. Dasgupta. Distilling internet-scale vision-language models into embodied agents. arXiv preprint arXiv:2301.12507, 2023.
Y. Tay, M. Dehghani, V. Q. Tran, X. Garcia, J. Wei, X. Wang, H. W. Chung, S. Shakeri, D. Bahri, T. Schuster, H. S. Zheng, D. Zhou, N. Houlsby, and D. Metzler. Ul2: Unifying language learning paradigms, 2023.
S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor. Chatgpt for robotics: Design principles and model abilities. Microsoft Auton. Syst. Robot. Res, 2:20, 2023.
J. Wang, Z. Yang, X. Hu, L. Li, K. Lin, Z. Gan, Z. Liu, C. Liu, and L. Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022. | 2307.15818#69 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15337 | 70 | Table 2 shows sources of the models we use in the paper.
Table 2: The Hugging Face or API endpoints of the models.
Columns: Access, Model Name, Hugging Face or API Endpoints.
Open-Source: LLaMA2-Chat-7B (Touvron et al., 2023b), LLaMA2-Chat-13B (Touvron et al., 2023b), OpenChat-13B (Wang et al., 2023a), Vicuna-7B V1.3 (Chiang et al., 2023), Vicuna-13B V1.3 (Chiang et al., 2023), Vicuna-33B V1.3 (Chiang et al., 2023), StableVicuna-13B (Phung, 2023), UltraLM-13B (Ding et al., 2023), Vicuna-7B V1.1 (Chiang et al., 2023).
API-Based: Claude (Anthropic, 2023), ChatGPT-3.5, GPT-4.
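The endpoint column of Table 2 did not survive the extraction above, so as a hypothetical illustration only, the snippet below shows how one of the listed open-source checkpoints could be loaded with Hugging Face transformers; the repository id lmsys/vicuna-7b-v1.3 is an assumption, not a value taken from the table.

```python
# Hypothetical example of loading one of the open-source models listed above
# with Hugging Face transformers. The repository id is an assumed example; the
# real endpoints are given in Table 2 of the original paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lmsys/vicuna-7b-v1.3"  # assumed example repository id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")  # requires `accelerate`

prompt = "Give a short numbered skeleton for answering: How can I improve my time management skills?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```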
B IMPLEMENTATION DETAILS OF SKELETON-OF-THOUGHT
B.1 PROMPT
The skeleton prompt is shown in Prompts 1 and 3 and the point-expanding prompt is shown in Prompt 2. | 2307.15337#70 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
2307.15818 | 70 | J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
J. Wei, L. Hou, A. Lampinen, X. Chen, D. Huang, Y. Tay, X. Chen, Y. Lu, D. Zhou, T. Ma, and Q. V. Le. Symbol tuning improves in-context learning in language models, 2023.
J. Wu, R. Antonova, A. Kan, M. Lepert, A. Zeng, S. Song, J. Bohg, S. Rusinkiewicz, and T. Funkhouser. Tidybot: Personalized robot assistance with large language models. arXiv preprint arXiv:2305.05658, 2023.
T. Xiao, H. Chan, P. Sermanet, A. Wahid, A. Brohan, K. Hausman, S. Levine, and J. Tompson. Robotic skill acquisition via instruction augmentation with vision-language models. arXiv preprint arXiv:2211.11736, 2022a. | 2307.15818#70 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15337 | 71 | B.1 PROMPT
The skeleton prompt is shown in Prompts 1 and 3 and the point-expanding prompt is shown in Prompt 2.
Skeleton prompt template. In order to make the output skeleton short and in a consistent format, for the sake of efficiency and ease of point extraction, the skeleton prompt template (1) describes the task precisely, and (2) provides a partial answer "1." for the LLM to continue writing. The skeleton
2 For convenience, we use the non-official endpoints TheBloke/stable-vicuna-13B-HF and TheBloke/UltraLM-13B-fp16 to get merged weights.
3 https://www.anthropic.com/claude-in-slack 4 https://azure.microsoft.com/en-us/products/ai-services/openai-service
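To make the skeleton stage concrete, here is a minimal sketch of how such a request could be assembled, assuming a plain-text prompt template; the template wording and the build_skeleton_prompt helper are illustrative rather than the paper's exact implementation (the full prompts are given in Prompts 1 and 3).

```python
# Minimal sketch of the skeleton-stage request (illustrative, not the exact prompt).
SKELETON_TEMPLATE = (
    "You're an organizer responsible for only giving the skeleton "
    "(not the full content) for answering the question. "
    "Provide the skeleton in a list of points (numbered 1., 2., 3., etc.) "
    "to answer the question.\n"
    "Question: {question}\n"
    "Skeleton:"
)

def build_skeleton_prompt(question: str) -> tuple[str, str]:
    """Return (prompt, partial_answer): the partial answer "1." is supplied as
    the start of the assistant turn so the model continues the numbered list."""
    return SKELETON_TEMPLATE.format(question=question), "1."
```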
# Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | 2307.15337#71 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 71 | T. Xiao, I. Radosavovic, T. Darrell, and J. Malik. Masked visual pre-training for motor control. arXiv preprint arXiv:2203.06173, 2022b.
S. Young, D. Gandhi, S. Tulsiani, A. Gupta, P. Abbeel, and L. Pinto. Visual imitation made easy. In Conference on Robot Learning, pages 1992–2005. PMLR, 2021.
K.-T. Yu, M. Bauza, N. Fazeli, and A. Rodriguez. More than a million ways to be pushed. A high-fidelity experimental dataset of planar pushing. In 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 30–37. IEEE, 2016.
T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine. One-shot imitation from observing humans via domain-adaptive meta-learning. arXiv preprint arXiv:1802.01557, 2018.
X. Zhai, A. Kolesnikov, N. Houlsby, and L. Beyer. Scaling vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12104–12113, 2022. | 2307.15818#71 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 72 | 18
Prompt 3. Skeleton Prompt Template T^s (with Two-Shot Demonstrations) [User:] You're an organizer responsible for only giving the skeleton (not the full content) for answering the question. Provide the skeleton in a list of points (numbered 1., 2., 3., etc.) to answer the question. Instead of writing a full sentence, each skeleton point should be very short with only 3~5 words. Generally, the skeleton should have 3~10 points. Question: What are the typical types of Chinese dishes? Skeleton: 1. Dumplings. 2. Noodles. 3. Dim Sum. 4. Hot Pot. 5. Wonton. 6. Ma Po Tofu. 7. Char Siu. 8. Fried Rice. Question: What are some practical tips for individuals to reduce their carbon emissions? Skeleton: 1. Energy conservation. 2. Efficient transportation. 3. Home energy efficiency. 4. Reduce water consumption. 5. Sustainable diet. 6. Sustainable travel. Now, please provide the skeleton for the following question. {question} Skeleton: [Assistant:] 1. | 2307.15337#72 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 72 | X. Zhang, Y. Ding, S. Amiri, H. Yang, A. Kaminski, C. Esselink, and S. Zhang. Grounding classical task planners via vision-language models. arXiv preprint arXiv:2304.08587, 2023.
# A. Contributions
⢠Training and Evaluations (designing and executing procedures for training models, evalu- ating models in simulation and the real world, running ablations for algorithm design choices): Yevgen Chebotar, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Alexander Herzog, Brian Ichter, Alex Irpan, Isabel Leal, Lisa Lee, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Michael Ryoo, Anikait Singh, Quan Vuong, Ayzaan Wahid, Paul Wohlhart, Fei Xia, Ted Xiao, and Tianhe Yu. | 2307.15818#72 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 73 | responses are in the desired format in most cases. Therefore, we can use a simple regular expression (\d+)\.\s?([\s\S]+?)(?=\n|\n*$) to extract point indexes and point skeletons from the skeleton response.
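For illustration, here is a short Python sketch of this extraction step; the example skeleton response below is made up.

```python
import re

# The regular expression quoted above: group 1 is the point index, group 2 the
# point text, which ends at the next newline or at the end of the response.
POINT_RE = re.compile(r"(\d+)\.\s?([\s\S]+?)(?=\n|\n*$)")

skeleton_response = "1. Energy conservation.\n2. Efficient transportation.\n3. Sustainable diet."
points = [(int(idx), text.strip()) for idx, text in POINT_RE.findall(skeleton_response)]
# points == [(1, 'Energy conservation.'), (2, 'Efficient transportation.'), (3, 'Sustainable diet.')]
```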
We find that GPT-4 can work well without the two demonstrations in the skeleton prompt. Therefore, we do not include the two demonstrations for GPT-4 (Prompt 1). For all other models, the two demonstrations are included, as shown in Prompt 3.
Point-expanding prompt template. It describes the point-expanding task and provides a partial answer. We also provide the instruction "Write it **very shortly** in 1~2 sentence" so that the LLMs keep the answers concise. Unlike the skeleton prompt template, we find that demonstrations are not necessary to get reasonable results.
We find that Claude and GPT-4 follow the instruction "Write it **very shortly** in 1~2 sentence and do not continue with other points!" in Prompt 2 very well, so the answers are very short. Therefore, we delete "**very shortly**" from the prompt template for Claude and GPT-4.
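As a sketch of how the point-expanding stage can then be run in parallel (one request per skeleton point): the template wording below is a paraphrase of Prompt 2, and call_llm is an assumed user-supplied function rather than part of the paper's code.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative paraphrase of the point-expanding prompt (see Prompt 2 for the original).
POINT_EXPAND_TEMPLATE = (
    "You're responsible for continuing the writing of one and only one point in "
    "the overall answer to the following question.\n"
    "Question: {question}\nSkeleton: {skeleton}\n"
    "Continue and only continue the writing of point {index}. Write it **very "
    "shortly** in 1~2 sentence and do not continue with other points!"
)

def expand_points(question, skeleton, points, call_llm):
    """Expand every skeleton point with parallel API calls.

    `call_llm(prompt)` is an assumed helper returning the completion for a
    single prompt; `points` is a list of (index, point_text) pairs.
    """
    prompts = [
        POINT_EXPAND_TEMPLATE.format(question=question, skeleton=skeleton, index=i)
        for i, _ in points
    ]
    with ThreadPoolExecutor(max_workers=max(1, len(prompts))) as pool:
        expansions = list(pool.map(call_llm, prompts))
    # Stitch the final answer back together in skeleton order.
    return "\n".join(f"{i}. {text.strip()}" for (i, _), text in zip(points, expansions))
```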
Partial answer. In Prompts 1 and 2, we provide partial answers so that LLMs can follow the desired response format better. | 2307.15337#73 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |