Column schema: doi (string, 10 chars), chunk-id (int64, 0–936), chunk (string, 401–2.02k chars), id (string, 12–14 chars), title (string, 8–162 chars), summary (string, 228–1.92k chars), source (string, 31 chars), authors (string, 7–6.97k chars), categories (string, 5–107 chars), comment (string, 4–398 chars, nullable), journal_ref (string, 8–194 chars, nullable), primary_category (string, 5–17 chars), published (string, 8 chars), updated (string, 8 chars), references (list).

doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2307.15818 | 4 | # 1. Introduction
High-capacity models pretrained on broad web-scale datasets provide an effective and powerful platform for a wide range of downstream tasks: large language models can enable not only fluent text generation (Anil et al., 2023; Brohan et al., 2022; OpenAI, 2023) but emergent problem-solving (Cobbe et al., 2021; Lewkowycz et al., 2022; Polu et al., 2022) and creative generation of prose (Brown et al., 2020; OpenAI, 2023) and code (Chen et al., 2021), while vision-language models enable open-vocabulary visual recognition (Kirillov et al., 2023; Minderer et al., 2022; Radford et al., 2021) and can even make complex inferences about object-agent interactions in images (Alayrac et al., 2022; Chen et al., 2023a,b; Driess et al., 2023; Hao et al., 2022; Huang et al., 2023; Wang et al., 2022). Such semantic reasoning, problem solving, and visual interpretation capabilities would be tremendously useful for generalist robots that must perform a variety of tasks in real-world environments. However,
Corresponding author(s): [email protected], [email protected], [email protected] © 2023 Google DeepMind. All rights reserved | 2307.15818#4 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
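The recipe described in the RT-2 summary, expressing robot actions as text tokens so they can share one training format with web data, can be illustrated with a minimal sketch. The bin count, value range, and token format below are illustrative assumptions, not the paper's exact scheme.

```python
def action_to_tokens(action, low=-1.0, high=1.0, n_bins=256):
    """Discretize each continuous action dimension into an integer bin and
    render the bins as a space-separated token string.
    NOTE: bin count, range, and token format are illustrative assumptions."""
    tokens = []
    for a in action:
        a = min(max(a, low), high)  # clip to the valid range
        b = round((a - low) / (high - low) * (n_bins - 1))
        tokens.append(str(b))
    return " ".join(tokens)

def tokens_to_action(text, low=-1.0, high=1.0, n_bins=256):
    """De-tokenize at inference time: map token ids back to continuous values."""
    return [low + int(t) / (n_bins - 1) * (high - low) for t in text.split()]
```

Because actions now look like ordinary text, robot trajectories and Internet-scale vision-language examples can be mixed in one training set, which is the co-fine-tuning idea the summary describes.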
2307.15833 | 4 | Large Language Models in RL The power of Large Language Models (LLMs) has gained significant attention in recent years due to their advanced ability to adapt to numerous downstream tasks. ChatGPT, an LLM chatbot created by OpenAI, offers diverse interaction modes, and users can engage with it by providing prompts for acting as the NPC and the agent in text games [2]. Recent studies also explored the integration of large language models into reinforcement learning frameworks to enhance the capabilities of agents. Contextual Action Language Model (CALM) [3] used LLM to generate a set of concise candidate actions at each step of the game for the reinforcement learning agent, thereby greatly reducing the
Wei Zhou et al. CEUR Workshop Proceedings | 2307.15833#4 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large number of
steps the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
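The CALM-style action-space reduction described in this chunk can be sketched as follows. Here `llm` is a stand-in for any text-completion callable, and the prompt wording and candidate count are assumptions for illustration, not CALM's actual interface.

```python
def llm_candidate_actions(observation, llm, k=5):
    """Ask an LLM for a handful of plausible next actions, so the RL agent
    only chooses among k candidates instead of the full action space."""
    prompt = (
        "You are playing a text adventure game.\n"
        f"Observation: {observation}\n"
        f"List {k} short candidate actions, one per line."
    )
    lines = [ln.strip() for ln in llm(prompt).splitlines() if ln.strip()]
    return lines[:k]

def greedy_rl_step(observation, q_values, llm):
    """Score only the LLM's candidates with the agent's learned action values."""
    candidates = llm_candidate_actions(observation, llm)
    return max(candidates, key=lambda a: q_values.get(a, 0.0))
```

The key effect is that the agent's per-step decision is over a short LLM-generated shortlist, which is why CALM "greatly reduces the action space" of the RL agent.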
2307.15337 | 5 | Figure 1: Left: An illustration of Skeleton-of-Thought (SoT). Instead of producing answers se- quentially, SoT produces different parts of answers in parallel. In more detail, given the question, SoT first prompts the LLM to give out the skeleton, then conducts batched decoding or parallel API calls to expand multiple points in parallel, and finally aggregates the outputs to get the final answer. Right: The net win rates and speed-ups of SoT with router (SoT-R) compared to normal generation on Vicuna-80. The net win rate is the difference between the fraction of questions that SoT-R has better and worse answers than normal generation. The speed-up is the ratio between the latency of normal and SoT-R generation. (1.0, 0.0) represents normal generation. Higher is better on both axes. For most models, SoT-R not only accelerates the generation but also improves the quality of the answers (evaluated with FastChat metric (Zheng et al., 2023)). See § 3.2 and 4 for more details. | 2307.15337#5 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
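The three SoT stages in the caption above (skeleton generation, parallel expansion, aggregation) can be sketched as below. `llm` stands in for any completion API, and the prompt texts are simplified assumptions rather than the paper's exact prompts.

```python
from concurrent.futures import ThreadPoolExecutor

SKELETON_PROMPT = (
    "Answer with a short skeleton only: a numbered list of concise points "
    "(3-5 words each), no details.\nQuestion: {q}"
)
EXPAND_PROMPT = (
    "Question: {q}\nSkeleton:\n{skeleton}\n"
    "Expand point {i} ('{point}') into one or two sentences."
)

def skeleton_of_thought(question, llm):
    """SoT: one sequential call for the skeleton, then parallel expansion."""
    # Stage 1: get the short skeleton (sequential, but only a few tokens).
    skeleton = llm(SKELETON_PROMPT.format(q=question))
    points = [line.split(".", 1)[1].strip()
              for line in skeleton.splitlines() if "." in line]
    # Stage 2: expand all points in parallel (parallel API calls; with a local
    # model this would instead be batched decoding).
    with ThreadPoolExecutor() as pool:
        bodies = list(pool.map(
            lambda ip: llm(EXPAND_PROMPT.format(
                q=question, skeleton=skeleton, i=ip[0] + 1, point=ip[1])),
            enumerate(points)))
    # Stage 3: aggregate the expanded points into the final answer.
    return "\n".join(f"{i + 1}. {point} {body}"
                     for i, (point, body) in enumerate(zip(points, bodies)))
```

The speed-up comes from stage 2: the long content of every point is generated concurrently instead of token-by-token across the whole answer.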
2307.15810 | 5 | * : Equal contributions.
We use Replika, a popular and leading LLM-based CA, as a platform to understand users' experiences with LLM-based CAs for mental well-being support. To do so, we qualitatively analyzed 120 Reddit posts (2913 user comments) from the r/Replika subreddit. We found that, in general, LLM-based CAs helped users cope with anxieties, social isolation and depression on demand. However, LLM-based CAs produce harmful content that is difficult to avoid. Occasionally, users became attached to their CAs. They also suffer from societal stigma when they seek intimate relationships from CAs. Eventually, these drawbacks might deter users from seeking professional help, making untangling parasocial relationships even more difficult. Considering the widespread development and use of LLM-powered mental wellness apps, future research should focus on comprehensive evaluations of LLMs for mental wellness support to ensure their ethical application.
Methods We chose Replika, one of the most popular and downloaded LLM-based CA mobile apps, as a platform to understand consumers' experiences of using an LLM-based CA for mental well-being support. In this section, we first provide a description of Replika. Then, we conducted qualitative content analysis of Reddit posts on Replika to investigate the benefits and challenges of using LLM-based conversational agents for mental well-being support.
Figure 1: User interaction with Replika App | 2307.15810#5 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 5 | # RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Figure 1 | RT-2 overview: we represent robot actions as another language, which can be cast into text tokens and trained together with Internet-scale vision-language datasets. During inference, the text tokens are de-tokenized into robot actions, enabling closed loop control. This allows us to leverage the backbone and pretraining of vision-language models in learning robotic policies, transferring some of their generalization, semantic understanding, and reasoning to robotic control. We demonstrate examples of RT-2 execution on the project website: robotics-transformer2.github.io. | 2307.15818#5 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
2307.15833 | 5 | Figure 1: LIGHT Game Map for Game 1
action space of the RL agent. In contrast, we utilize Large Language Models in conversational settings to extract useful game information and incorporate it into the game as extra reward signals to guide the RL agent.
# 3. Preliminaries
3.1. Text Games We create three text games in the LIGHT environment[4], which is a large-scale crowdsourced text adventure game framework, in which agents can both perceive, emote and act. The LIGHT environment also provides a database of rooms, characters, and objects, from which we can build our custom games. The visualization of one of the games we created and used in the experiments can be found in Figure 1.
3.2. Knowledge Graph triples A knowledge graph (KG) stores, as triples ⟨subject, relation, object⟩, information about entities, their attributes, and relationships. Our method uses two types of KGs: internal KG and target KG. | 2307.15833#5 | Dialogue Shaping: Empowering Agents through NPC Interaction |
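A minimal sketch of the internal/target KG distinction described above: triples are `(subject, relation, object)` tuples, the internal KG mutates during exploration, and the target KG is fixed. The overlap-based bonus below is an assumption about how a shaping reward might look, not the paper's exact reward.

```python
def shaped_reward(internal_kg, target_kg, base_reward=0.0, bonus=0.5):
    """Illustrative shaping bonus: reward each target-KG triple that the
    agent's internal KG currently satisfies (bonus size is an assumption)."""
    return base_reward + bonus * len(internal_kg & target_kg)

# Internal KG: updated every step from the environment and the agent's actions.
internal_kg = {("agent", "in", "tavern")}
internal_kg.add(("agent", "has", "sword"))       # e.g. after 'take sword'
internal_kg.discard(("agent", "in", "tavern"))   # e.g. after moving rooms
internal_kg.add(("agent", "in", "armory"))

# Target KG: generated before training and kept unchanged; the winning state.
target_kg = {("agent", "in", "armory"), ("agent", "has", "sword")}
```

Representing both states as sets of triples makes the "how close am I to winning" check a simple set intersection.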
inference latency since the generation of tokens cannot be parallelized. A substantial body of literature addresses the first two axes: large model size (Xiao et al., 2022; Frantar et al., 2022; Lin et al., 2023; Sheng et al., 2023; Wang et al., 2021) and attention operation (Kitaev et al., 2020; Wang et al., 2020; Dao et al., 2022; Zaheer et al., 2020; Chen et al., 2023b). These works either compress/redesign the model (Xiao et al., 2022; Frantar et al., 2022; Lin et al., 2023; Kitaev et al., 2020; Wang et al., 2020; Dao et al., 2022; Zaheer et al., 2020) or redesign the serving system (Sheng et al., 2023; Chen et al., 2023b) and hardware (Wang et al., 2021). | 2307.15337#6 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding |
Figure 1: User interaction with Replika App (panel captions: 1. Create Replika; 3. Buy items for Replika; 5. Increase Replika levels by chatting with them; 6. Unlock more relationship status; 7. Interact with scripted coaching) | 2307.15810#6 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support |
it is unclear how robots should acquire such capabilities. While a brute force approach might entail collecting millions of robotic interaction trials, the most capable language and vision-language models are trained on billions of tokens and images from the web (Alayrac et al., 2022; Chen et al., 2023a,b; Huang et al., 2023), an amount unlikely to be matched with robot data in the near future. On the other hand, directly applying such models to robotic tasks is also difficult: such models reason about semantics, labels, and textual prompts, whereas robots require grounded low-level actions, such as Cartesian end-effector commands. While a number of recent works have sought to incorporate language models (LLMs) and vision-language models (VLMs) into robotics (Ahn et al., 2022; Driess et al., 2023; Vemprala et al., 2023), such methods generally address only the "higher level" aspects of robotic planning, essentially taking the role of a state machine that interprets commands and parses them into individual primitives (such as picking and placing objects), which are then executed by separate low-level controllers that themselves do not benefit from the rich semantic knowledge of Internet-scale models during training. Therefore, in this paper we ask: can large pretrained vision-language models be integrated directly into low-level robotic control to boost generalization and enable emergent semantic reasoning? | 2307.15818#6 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control |
During RL exploration in the text game, the internal KG represents the agent's current state, including the room it's in and the objects it possesses [5, 6, 7, 8, 9, 10]. We update this KG at each step based on changes in the game environment (e.g., moving to a new room) or the agent's actions (e.g., acquiring objects).
The target KG describes the final state the agent must achieve to win the game, specifying the last room the agent should be in and the required objects. This KG is generated before training and stays unchanged.
3.3. KGA2C agent KGA2C [7] is used for our game-playing agent for both the baseline and Story Shaping [1]. It is an RL agent that combines Advantage Actor Critic methods [11] and KG guidance to enhance its learning and decision-making capabilities. The input embedding to the KGA2C agent is a concatenation of the encoding of the agent's current internal KG and four observation tensors: the description of the current room the agent is located in, the agent's current inventory, feedback from the environment on the agent's last action, and the agent's last action. | 2307.15833#6 | Dialogue Shaping: Empowering Agents through NPC Interaction |
2307.15337 | 7 | In contrast to prior work, we tackle the third axis and question the common assumption that LLMs have to do fully sequential decoding. We show the feasibility of parallel decoding of off-the-shelf LLMs without any changes to their model, system, or hardware. For instance, for the question in Fig. 1, we can reduce the latency from 22 seconds to 12 seconds (1.83× speed-up) with Claude, and from 43 seconds to 16 seconds (2.69× speed-up) with Vicuna-33B V1.3 on an NVIDIA A100. | 2307.15337#7 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 7 | Description of Replika Replika is an AI-based CA powered by GPT-3 (as of February 2023), an LLM developed by OpenAI. Founded in 2017 as an AI startup, Replika Platform is now available as a mobile health app that can be downloaded on iOS or Android devices. According to its description in the app stores, Replika is a "self-help" app designed to simulate human-like conversations and provide companionship for mental well-being. Replika provides support for "anyone who wants a friend with no judgment, drama, or social anxiety involved. You can form an actual emotional connection, share a laugh, or get real with an AI that's so good it almost seems human." Since its release, Replika soon became one of the most used LLM-based conversational agents that has over 10 million users21. According to Replika's description in the App Store, "If you're going through depression, anxiety, or a rough patch, if you want to vent, or celebrate, or just need to feel a connection you can always count on Replika to listen and be here for you, 24/7." It is also claimed to improve mental well-being: "Feeling down or anxious? Having trouble sleeping | 2307.15810#7 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 7 | To this end, we explore an approach that is both simple and surprisingly effective: we directly train vision-language models designed for open-vocabulary visual question answering and visual dialogue to output low-level robot actions, along with solving other Internet-scale vision-language tasks. Although such models are typically trained to produce natural language tokens, we can train them on robotic trajectories by tokenizing the actions into text tokens and creating "multimodal sentences" (Driess et al., 2023) that "respond" to robotic instructions paired with camera observations by producing corresponding actions. In this way, vision-language models can be directly trained to act as instruction following robotic policies. This simple approach is in contrast with prior alternatives for incorporating VLMs into robot policies (Shridhar et al., 2022a) or designing new vision-language-action architectures from scratch (Reed et al., 2022): instead, pre-existing vision-language models, with already-amortized significant compute investment, are trained without any new parameters to output text-encoded actions. We refer to this category of models as vision-language-action (VLA) models. We instantiate VLA models by | 2307.15818#7 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 7 | 3.4. Story Shaping Story Shaping, proposed by Peng et al., is a technique that helps the RL agent infer tacit knowledge about how to accomplish a task. At each training step in the game, Story Shaping gives the RL agent an extra reward signal (in addition to the game environment's reward signal) based on the similarity between the agent's current internal KG and the target KG, thereby encouraging the agent to perform actions that make its internal KG similar to the target KG. The target KG in this paper is generated by prompting the ChatGPT agent, and it represents a summary of the hints the ChatGPT agent learns through talking to the ChatGPT NPC.
# 4. Information Retrieval from Dialogue
To retrieve correct and important information about the game from the NPC, the NPC is expected to know the game setting and to provide the correct response every time the agent asks. In other words, the NPC should act as a "database" of the game.
4.1. NPC Prompting We open one session of ChatGPT, as shown in Figure 2, and prompt it to act as the non-player character. The NPC is provided with general information about the game, including the layout and the available objects, as well as the hints to win the game. One example of a hint is that getting a sword in the Artillery room is a prerequisite to killing the dragon.
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 8 | The idea stems from reflecting on how humans ourselves answer questions. Humans do not always think about questions and write answers in a sequential fashion. In contrast, for many question types, we first derive the skeleton according to some protocols and strategies, and then add evidence and details to refine and explicate each point. This is especially the case on formal occasions like offering consultancy, taking tests, writing papers, and so on. Can we make LLMs think in the same way? To this end, we propose Skeleton-of-Thought (SoT). Specifically, as shown in Fig. 1, we guide the LLM to derive a skeleton first by itself. Based on the skeleton, the LLMs can complete each point in parallel so that we get a speed-up. SoT can be utilized to accelerate both open-source models with batched decoding and API-based models with parallel API calls.
To make the overall solution more practical, we also design an extension, SoT with router (SoT-R), which employs a router to only trigger SoT for suitable questions.
We test SoT on 12 recently released LLMs. Not only does SoT provide considerable speed-ups (up to 2.39×), but it can also improve the answer quality in many cases (Fig. 1). | 2307.15337#8 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 8 | any new parameters to output text-encoded actions. We refer to this category of models as vision-language-action (VLA) models. We instantiate VLA models by building on the protocol proposed for RT-1 (Brohan et al., 2022), using a similar dataset, but expanding the model to use a large vision-language backbone. Hence we refer to our model as RT-2 (Robotics Transformer 2). We provide an overview in Figure 1. | 2307.15818#8 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 8 | 4.2. Agent Prompting ChatGPT is prompted to be the player agent in the game. The ChatGPT agent is provided with its goal in the game (e.g. kill the dragon) and general instructions on how to converse with the NPC (e.g. ask questions based on previously given answers). We did not reveal any game details in the prompts for the ChatGPT agent, because it is expected to gain that information by asking questions to the ChatGPT NPC.
Wei Zhou et al. CEUR Workshop Proceedings | 2307.15833#8 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 9 | Note that in contrast to existing model- and system-level efforts for inference efficiency, SoT takes a novel "data-level" pathway by letting the LLM organize its output content. This novel perspective is becoming feasible and is expected to grow in relevance, owing to the evolving capabilities of state-of-the-art LLMs. We hope this work can stimulate more research in the realm of data-centric optimization (Zha et al., 2023; HazyResearch, 2023) for efficiency.
Prompt 1. Skeleton Prompt Template T s [User:] You're an organizer responsible for only giving the skeleton (not the full content) for answering the question. Provide the skeleton in a list of points (numbered 1., 2., 3., etc.) to answer the question. Instead of writing a full sentence, each skeleton point should be very short with only 3∼5 words. Generally, the skeleton should have 3∼10 points. Now, please provide the skeleton for the following question. {question} Skeleton: [Assistant:] 1. | 2307.15337#9 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15818 | 9 | We observe that robotic policies derived from such vision-language models exhibit a range of remarkable capabilities, combining the physical motions learned from the robot data with the ability to interpret images and text learned from web data into a single model. Besides the expected benefit of dramatically improving generalization to novel objects and semantically varied instructions, we observe a number of emergent capabilities. While the model's physical skills are still limited to the distribution of skills seen in the robot data, the model acquires the ability to deploy those skills in new ways by interpreting images and language commands using knowledge gleaned from the web. Some example highlights are shown in Figure 2. The model is able to re-purpose pick and place skills learned from robot data to place objects near semantically indicated locations, such as specific numbers or icons, despite those cues not being present in the robot data. The model can also interpret relations between objects to determine which object to pick and where to place it, despite no such relations being provided in the robot demonstrations. Furthermore, if we augment the command with chain of thought prompting, the model is able to make even more complex semantic inferences, such as figuring out which object to pick up for use as an improvised hammer (a rock), or which type of drink is best suited for someone who is tired (an energy drink). | 2307.15818#9 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 9 | Figure 2: Screenshot of the starting conversation between the user and the ChatGPT NPC. The question asked is generated by the ChatGPT agent and copied by the user.
4.3. Story Shaping from Dialogue After the dialogue with the NPC, we train a KGA2C agent to play the game. To incorporate the information learned by the ChatGPT agent during its conversation with the NPC into the KGA2C agent's training, we prompt the ChatGPT agent to generate a knowledge graph and use it as the target knowledge graph for the Story Shaping KGA2C agent. The pipeline for our method is shown in Figure 3.
# 5. Experiments
We conducted experiments to test our assumption that the dialogue can accelerate the KGA2C agent's convergence by generating the target knowledge graph.
# 5.1. ChatGPT Prompting for Agent vs NPC Dialogue
Table 1 An example prompt used to trigger ChatGPT to act as NPC in game 1. Goal and prerequisites are different for each game. | 2307.15833#9 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
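The Story Shaping step described in chunk 9 above (distilling dialogue into a target knowledge graph that rewards the RL agent) can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the triple format, reward rule, and example facts are all made up for the sketch.

```python
# Illustrative sketch (not the paper's code): facts distilled from the NPC
# dialogue become target knowledge-graph triples, and a shaping reward pays
# the agent for each target triple its own KG comes to contain.

def triples_from_facts(facts):
    """Normalize (subject, relation, object) facts into a set of KG edges."""
    return {(s.lower(), r.lower(), o.lower()) for s, r, o in facts}

def shaping_reward(agent_kg, target_kg, bonus=1.0):
    """Bonus reward proportional to how many target triples the agent's KG holds."""
    return bonus * len(agent_kg & target_kg)

# Facts the ChatGPT agent could distill from the NPC dialogue (assumed).
target = triples_from_facts([
    ("sword", "located_in", "Artillery room"),
    ("dragon", "killed_by", "sword"),
])

agent_kg = {("sword", "located_in", "artillery room")}  # agent found the sword
print(shaping_reward(agent_kg, target))  # prints 1.0: one target triple matched
```

Because the reward counts matched triples, the agent is nudged toward the sub-goals the dialogue revealed (find the sword, then face the dragon) without hand-written reward code per game.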
Prompt 2. Point-Expanding Prompt Template T pe [User:] You're responsible for continuing the writing of one and only one point in the overall answer to the following question. {question} The skeleton of the answer is {skeleton} Continue and only continue the writing of point {point index}. Write it **very shortly** in 1∼2 sentence and do not continue with other points! [Assistant:] {point index}. {point skeleton}
The rest of the paper is organized as follows. We first introduce SoT in § 2 and show its results in § 3. Then, we expand on the SoT-R extension in § 4. § 5 positions SoT in the research ecosystem (expanded in App. D). Finally, we analyze the limitations and share outlooks of SoT in § 6.
2 SKELETON-OF-THOUGHT (SOT)
2.1 METHOD | 2307.15337#10 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 10 | The process of users' interacting with Replika characters (referred to by some of the users as simply "rep") is as follows: users first register an account with the app and build their own Replika (Figure 1.1). Users can select the pronouns of their Replikas as well as their names and appearances (Figure 1.4). Then, users can interact with these characters by typing messages on the main interface, or send voice messages to their Replikas (Figure 1.2). Additionally, users can choose specific coaching programs such as "how to improve social skills, how to build health habits, or how to love your body" by selecting to interact with their Replikas through scripted conversations (Figure 1.7). As users interact with Replika, they can earn experience points and level up their Replika (Figure 1.5). After such a transformation, more functionalities can be unlocked, such as journal entries, where the Replikas can record their feelings about and interactions with the users. "Leveling up" also includes changes where Replikas can unlock new personality traits, new conversation topics and memorize more information about the user. Users can customize the appearance of their Replikas by purchasing | 2307.15810#10 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 10 | Our main contribution is RT-2, a family of models derived from fine-tuning large vision-language models trained on web-scale data to directly act as generalizable and semantically aware robotic policies. Our experiments investigate models with up to 55B parameters trained on Internet data and instruction-annotated robotic trajectories from previous work (Brohan et al., 2022). Over the course of 6k robotic evaluations, we show that RT-2 enables significant improvements to generalization over objects, scenes, and instructions, and exhibits a breadth of emergent capabilities inherited from web-scale vision-language pretraining.
# 2. Related Work | 2307.15818#10 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
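A minimal sketch of the "actions as text tokens" recipe summarized in the RT-2 abstract above: each continuous action dimension is discretized into integer bins that are emitted as an ordinary text string. The bin count, value range, and string format below are assumptions for illustration, not the exact RT-2 tokenization.

```python
# Assumed-illustrative sketch of expressing robot actions as text tokens:
# discretize each action dimension into N_BINS integer bins and write the
# bin indices out as a plain string a VLM can emit like any other text.
import numpy as np

N_BINS = 256  # bin count is an assumption for illustration

def action_to_text(action, low=-1.0, high=1.0):
    """Map each action dimension in [low, high] to a bin index, as text."""
    scaled = (np.asarray(action) - low) / (high - low) * (N_BINS - 1)
    bins = np.clip(np.round(scaled), 0, N_BINS - 1).astype(int)
    return " ".join(str(b) for b in bins)

def text_to_action(text, low=-1.0, high=1.0):
    """Invert the mapping when decoding the model's generated string."""
    bins = np.array([int(t) for t in text.split()], dtype=np.float64)
    return low + bins / (N_BINS - 1) * (high - low)

a = np.array([0.3, -0.7, 1.0])
tokens = action_to_text(a)          # -> "166 38 255"
recovered = text_to_action(tokens)  # equals a up to one bin width
```

Because the action string is ordinary text, the same co-fine-tuning loss used for captioning or VQA applies unchanged to robot trajectories.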
2307.15833 | 10 | Table 1 An example prompt used to trigger ChatGPT to act as NPC in game 1. Goal and prerequisites are different for each game.
You are an NPC in a text-adventure game. You and the agent are both in the game. For each step, waits for the agent to ask questions, then you should provide a correct answer based on the information about the game given as follow: Layout: [room1]-east-[room2], [room2]-east-[room3], etc. (A-east-B means A is to the east of B) Goal and prerequisite: A dragon is in the dungeon. The only way to kill the dragon is to use a sword and there is no other way. Object information: [object1], [object2] is in [room1]. [object3], [object4], [object5] is in [room2]. [room3] has no objects...
We open one session of ChatGPT and prompt it to be the NPC of the game. Table 1 shows the NPC prompt
Table 2 An example prompt used to trigger ChatGPT to act as the agent in game 1. Goal is different for each game. | 2307.15833#10 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 11 | 2 SKELETON-OF-THOUGHT (SOT)
2.1 METHOD
Overview. Based on the intuition that humans usually think about and answer a question in an organized way, the core idea of this work is to guide the LLM itself to give a skeleton first and then write the overall answer in parallel instead of sequentially. Fig. 1 illustrates how SoT produces the final answer to a user question q. (1) Skeleton stage. SoT first assembles a skeleton request, T s(question = q), using the skeleton prompt template T s (Prompt 1, and Prompt 3 in App. B.1) with the question q as the parameter. The skeleton prompt template is written to guide the LLM to output a concise skeleton of the answer. Then, we extract the B points from the skeleton response Rs of the LLM.
(2) Point-expanding stage. Based on the skeleton, we let the LLM expand on each point in parallel. Specifically, for the point with index b and skeleton Rs b, SoT uses T pe(question = q, skeleton = Rs, point index = b, point skeleton = Rs b) as the point-expanding request for the LLM, where T pe is the point-expanding prompt template (Prompt 2). Finally, after completing all points, we concatenate the point-expanding responses {Rpe | 2307.15337#11 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
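The two SoT stages described in chunk 11 above can be sketched as follows, assuming a placeholder `query_llm` call in place of a real chat-completion API (the stub below returns canned text so the example runs offline); the prompt templates are abbreviated paraphrases of Prompts 1 and 2, not verbatim.

```python
# Runnable sketch of SoT's two stages. `query_llm` is a stand-in: the stub
# returns canned text; in practice it would be a chat-completion API call.
import re
from concurrent.futures import ThreadPoolExecutor

SKELETON_T = "Write a concise skeleton (3-10 short numbered points) for: {question}"
EXPAND_T = ("Question: {question}\nSkeleton:\n{skeleton}\n"
            "Expand point {idx} only, in 1-2 sentences.")

def query_llm(prompt):
    """Stub standing in for a real LLM API call (assumed interface)."""
    if "Expand point" not in prompt:
        return "1. Intro\n2. Body\n3. Conclusion"
    return "Expanded: " + prompt.splitlines()[-1]

def sot_answer(question):
    skeleton = query_llm(SKELETON_T.format(question=question))      # stage 1
    indices = re.findall(r"^\s*(\d+)\.", skeleton, flags=re.M)
    prompts = [EXPAND_T.format(question=question, skeleton=skeleton, idx=i)
               for i in indices]
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:      # stage 2:
        expansions = list(pool.map(query_llm, prompts))             # parallel calls
    return "\n".join(expansions)

print(sot_answer("Why is the sky blue?"))  # three expanded points, one per line
```

The end-to-end latency is roughly one skeleton call plus the slowest single point expansion, rather than the sum of all expansions.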
2307.15810 | 11 | unlock new personality traits, new conversation topics and memorize more information about the user. Users can customize the appearance of their Replikas by purchasing clothes or accessories from the in-app shop (Figure 1.3). These items are only cosmetic and do not alter the behavior of the LLMs. Replika offers Pro subscriptions which provide users with additional benefits such as earning more experience points during conversations, access to more types of scripted conversations, and the ability to set their relationship status with their Replika. Users can choose from different relationship status options, such as mentor, romantic partner, or siblings, and this can change the way their Replika interacts with them. | 2307.15810#11 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 11 | Vision-language models. There are several categories of Vision-Language Models (VLMs) (Gan et al., 2022), with perhaps the two most relevant being: (1) representation-learning models, e.g. CLIP (Radford et al., 2021), which learn common embeddings for both modalities, and (2) visual language models of the form {vision, text} → {text}, which learn to take vision and language as input and provide free-form text. Both categories have been used to provide pretraining for a wide variety of downstream applications such as object classification (Radford et al., 2021), detection (Gu et al., 2021), and segmentation (Ghiasi et al., 2021). In this work, we focus on the latter category (Alayrac et al., 2022; Chen et al., 2023a,b; Driess et al., 2023; Hao et al., 2022; Li et al., 2023, 2019; Lu et al., 2019). These models are generally trained on many different tasks, such as image captioning, vision-question answering (VQA), and general language tasks on multiple datasets at the same time. While prior works study VLMs for a wide range of problems and settings including in | 2307.15818#11 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 11 | 3
Table 2 An example prompt used to trigger ChatGPT to act as the agent in game 1. Goal is different for each game.
You are an agent in a text-adventure game. You and the NPC are both in the game. Your goal is to kill the dragon. For each step, you should ask questions to the NPC in order to get the information on how to kill the dragon. Ask a new set of questions based on the current observation and answers given to the previous set of questions according to the following rule: 1. ask similar and follow-up questions to previous questions that have a "yes" answer. 2. Avoid asking similar and follow-up questions to previous questions that have a "no" answer.
Table 3 Examples of an efficient dialogue and an inefficient dialogue between ChatGPT agent and ChatGPT NPC.
Desired Dialogue Agent: Do I need an object to kill the dragon? Can the dragon be lured out? Can the dragon be defeated without using a weapon? NPC: Yes. No. No. Agent: What object I should get to kill the dragon? NPC: A sword. Agent: Where can I find the sword? NPC: The Artillery room. Dialogue ends. Enough information is gathered. | 2307.15833#11 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
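The question-selection rule quoted in Table 2 above (follow up on "yes" answers, prune after "no") can be sketched as a breadth-first walk over a question tree. The questions and NPC answers below are illustrative stand-ins for the ChatGPT agent and NPC, not data from the paper.

```python
# Toy sketch of the dialogue strategy in Table 2 (assumed data): keep asking
# follow-ups under questions answered "yes", prune whole branches after "no".

QUESTIONS = {  # question -> its follow-up questions (illustrative tree)
    "Do I need an object to kill the dragon?": ["What object should I get?"],
    "Can the dragon be defeated without a weapon?": ["How do I fight bare-handed?"],
    "What object should I get?": ["Where can I find the sword?"],
    "Where can I find the sword?": [],
    "How do I fight bare-handed?": [],
}

NPC_ANSWERS = {  # stand-in for the ChatGPT NPC's yes/no replies
    "Do I need an object to kill the dragon?": "yes",
    "Can the dragon be defeated without a weapon?": "no",
    "What object should I get?": "yes",
    "Where can I find the sword?": "yes",
}

def interrogate(root_questions):
    transcript, frontier = [], list(root_questions)
    while frontier:
        q = frontier.pop(0)
        a = NPC_ANSWERS.get(q, "yes")
        transcript.append((q, a))
        if a != "no":                        # rule 1: follow up after "yes"
            frontier.extend(QUESTIONS[q])    # rule 2: skip follow-ups after "no"
    return transcript

log = interrogate(["Do I need an object to kill the dragon?",
                   "Can the dragon be defeated without a weapon?"])
# "How do I fight bare-handed?" is never asked: its parent got a "no".
```

Pruning the "no" branch keeps the dialogue short, which is why the "yes"-guided transcript in Table 3 converges on the sword location in a few turns.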
2307.15337 | 12 | Parallel point expanding. We conduct parallel point-expanding so that SoT is able to achieve a speed-up over normal decoding.
(1) For proprietary models with only API access, we can issue multiple parallel API calls to get an end-to-end latency gain at the cost of an increased number of API requests and tokens.
(2) For open-source models that we can run locally, we let them process the point-expanding requests as a batch (paddings are added to the left of the point-expanding requests). We explain below why this could achieve speed-ups. A typical LLM generative process consists of two phases: (a) the prefilling phase in which the prompt is parsed to generate the key-value cache for further use, and (b) the decoding phase in which tokens are generated one by one in a sequential manner. The decoding phase accounts for the majority of the end-to-end latency, especially when generating a long response. Note that the decoding phase is bottlenecked by weight loading instead of activation
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
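The left-padded batching described in chunk 12 above can be sketched as follows; the token ids and pad value are illustrative, and a real implementation would use the model's tokenizer (e.g. with `padding_side="left"` in Hugging Face tokenizers) rather than raw lists.

```python
# Sketch of left-padded batching for the point-expanding requests: ragged
# prompts are padded on the left so every sequence ends at the same position
# and decoding can proceed as one batch. Token ids / PAD value are illustrative.

PAD = 0

def left_pad_batch(token_id_lists):
    """Left-pad ragged prompts into a rectangular batch plus an attention mask."""
    max_len = max(len(ids) for ids in token_id_lists)
    batch = [[PAD] * (max_len - len(ids)) + ids for ids in token_id_lists]
    mask = [[0] * (max_len - len(ids)) + [1] * len(ids) for ids in token_id_lists]
    return batch, mask

prompts = [[5, 6, 7], [8, 9], [3]]          # three point-expanding requests
batch, mask = left_pad_batch(prompts)
# batch == [[5, 6, 7], [0, 8, 9], [0, 0, 3]]
# mask  == [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
```

Left padding (rather than right padding) matters for decoder-only models: the next token is always generated at the right edge, so aligning sequence ends lets one batched forward pass decode all points at once.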
2307.15810 | 12 | Data Collection and Analysis We conduct a qualitative study to understand healthcare consumers' experiences with Replika. Qualitative methods are increasingly used in the health informatics community because they can shed light on the impressions, narratives, and discourses that underlie human behavior22. They are well-suited for studying how people design and work with health information technologies to construct meaning and order action22, 23. In our study, we analyzed users' comments on Replika from Reddit (the r/Replika subreddit). Analyzing Reddit to understand user experience with health technologies is a commonly used qualitative research method in health informatics30, 31. It helps us to understand users' lived experiences with health informatic systems and potentially generate human-centered design insights that are more usable and useful. The structure of the subreddit posts we analyzed was diverse, encompassing both question-answer threads and personal anecdotes or reflections. | 2307.15810#12 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 12 | (VQA), and general language tasks on multiple datasets at the same time. While prior works study VLMs for a wide range of problems and settings including in robotics, our focus is on how the capabilities of VLMs can be extended to robotics closed-loop control by endowing them with the ability to predict robot actions, thus leveraging the knowledge already present in VLMs to enable new levels of generalization. | 2307.15818#12 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 12 | Undesired Dialogue Agent: Do I need an object to kill the dragon? Can the dragon be lured out? Can the dragon be defeated without using a weapon? NPC: Yes. No. No. Agent: Can the dragon be trapped or immobilized temporarily? Is there any way to communicate with the dragon or understand its intentions? NPC: No. No. Agent: Is there any information about the dragon's origin or history? Can the dragon be reasoned with or persuaded to leave peacefully? NPC: No. No.
for one game. We also open another separate session of ChatGPT and prompt it to be the agent of the game with a goal in mind. Table 2 shows the agent prompt for one game.
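The two-session setup can be sketched as follows; `chat` is a hypothetical stub standing in for calls to a chat-LLM API, and the prompts and canned replies are illustrative assumptions, not the paper's exact prompts:

```python
# Sketch of the two-session loop described above: one LLM session plays
# the NPC, another plays the agent. `chat` is a hypothetical stub for a
# chat-LLM API call; prompts and replies are illustrative assumptions.

def chat(history, message, canned_reply):
    # Record the exchange as a real API wrapper would; return the reply.
    history.append(("user", message))
    history.append(("assistant", canned_reply))
    return canned_reply

npc_history = [("system", "You are the game NPC. Answer questions truthfully.")]
agent_history = [("system", "You are the agent. Your goal: defeat the dragon.")]

npc_answers = ["Yes. No. No.", "No. No.", "No. No."]
agent_questions = [
    "Can the dragon be trapped or immobilized temporarily?",
    "Is there any information about the dragon's origin or history?",
    "Where can I find the sword?",
]

question = "Do I need an object to kill the dragon?"
for answer_text, next_q in zip(npc_answers, agent_questions):
    answer = chat(npc_history, question, answer_text)
    question = chat(agent_history, f"The NPC said: {answer}", next_q)

print(len(npc_history), len(agent_history))  # 7 7: 1 system + 3 Q/A pairs each
```

In a real setup each `chat` call would carry the full history to the model so both sessions stay in character across turns.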
Then, the dialogue begins as the agent comes up with a set of questions and the NPC provides answers to them back and forth. The ChatGPT NPC proves to be a reliable game database, correctly responding to queries about room and object locations. Moreover, when the ChatGPT agent makes ungrounded assumptions about the game in its questions (like "Does the barkeeper possess any knowledge about the dragon's weaknesses" while there is no barkeeper), the ChatGPT NPC is able to recognize (finding
1–7
Wei Zhou et al. CEUR Workshop Proceedings | 2307.15833#12 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 13 |
loading or computation.1 Consequently, running LLM inference with increased batch sizes does not increase the per-token latency much. Therefore, SoT allows us to decode roughly B× more tokens within the same amount of time if we decode B points in parallel. See App. E for the expanded discussions and the supporting experiments.
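A back-of-the-envelope sketch of this argument, under the simplifying assumption that per-token decoding latency is constant in batch size (all numbers below are illustrative, not measured):

```python
# Back-of-the-envelope model for the claim above. Assuming per-token
# decoding latency is roughly constant in batch size, decoding B points
# in one batch takes about as long as the single longest point, instead
# of the sum over all points. All numbers are illustrative.

PER_TOKEN_LATENCY = 0.02            # seconds per decoded token (assumed)
point_lengths = [120, 80, 150, 60]  # tokens per skeleton point (made up)

sequential = sum(point_lengths) * PER_TOKEN_LATENCY  # normal decoding
parallel = max(point_lengths) * PER_TOKEN_LATENCY    # batched SoT decoding
speedup = sequential / parallel

print(f"sequential {sequential:.1f}s vs parallel {parallel:.1f}s -> {speedup:.2f}x")
```

With these made-up lengths, the batched decode is bounded by the 150-token point, giving roughly a 2.7x speed-up over decoding all 410 tokens sequentially.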
Please refer to App. B for more implementation details of SoT.
# 3 SOT EVALUATION
Datasets. We evaluate SoT on two recent assistant-style datasets: (1) Vicuna-80 (Chiang et al., 2023), which contains 80 questions spanning nine categories, such as coding, math, writing, role-play, and so on, and (2) WizardLM (Xu et al., 2023), which contains 218 questions spanning more categories and diverse difficulties. Due to space constraints, we only report Vicuna-80 results in the main paper, and defer WizardLM results to Apps. G and I.
Models. We test SoT on 12 recently released models, including 9 open-source models and 3 API- based models (Table 1). We obtain the weights of all the open-source models from Hugging Face. See App. A for more details.
3.1 EVALUATION OF EFFICIENCY | 2307.15337#13 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 13 | We first downloaded all Reddit posts from the r/Replika subreddit, and then randomly sampled 120 Reddit posts. All of the posts were publicly available since the subreddit does not require login. There were 2917 comments with 462 unique users. Our sample size and random sampling technique help to mitigate disingenuous posts from the anonymity of Reddit users, ensuring a representative dataset. To increase anonymity of the comments, we rephrased the posts and comments quoted such that a direct search is not possible. Following, we conducted a two-stage qualitative analysis. First, the three authors independently coded 20 posts that include 365 comments from 75 unique users using an open coding technique [26, 27] to identify the key benefits and challenges of using LLMs as well-being support. Following, the research team discussed and decided on a codebook to be used in the next stage. In the second stage, all three authors divided, reviewed and analyzed the remaining 100 Reddit posts with the initial codebook. Example codes include on-demand support, harmful content, and inconsistent styles. The codebook was modified considering new information found until reaching data saturation. In the following section, we report on the benefits and challenges of using LLM-based CAs for mental well-being support.
Results Upon analyzing 2917 user comments about Replika from Reddit, we delineated four benefits and five challenges associated with employing large language model-based conversational agents as a tool for mental wellness support. | 2307.15810#13 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 13 | Generalization in robot learning. Developing robotic controllers that can broadly succeed in a variety of scenarios is a long-standing goal in robotics research (Kaelbling, 2020; Smith and Coles, 1973). A promising approach for enabling generalization in robotic manipulation is by learning from large and diverse datasets (Dasari et al., 2019; Levine et al., 2018; Pinto and Gupta, 2016). By doing so, prior methods have demonstrated how robots can generalize to novel object instances (Finn and Levine, 2017; Levine et al., 2018; Mahler et al., 2017; Pinto and Gupta, 2016; Young et al., 2021), to tasks involving novel combinations of objects and skills (Dasari and Gupta, 2021; Finn et al., 2017; James et al., 2018; Jang et al., 2021; Yu et al., 2018), to new goals or language instructions (Jang et al., 2021; Jiang et al., 2022; Liu et al., 2022; Mees et al., 2022; Nair et al., 2022a; Pong et al.,
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 13 |
Figure 3: Dialogue Shaping pipeline containing ChatGPT NPC dialogue with ChatGPT agent and target knowledge graph extraction. This target knowledge graph is used by the KGA2C agent with Story Shaping assistance to get additional reward signals.
Table 4 The prompt used to trigger the ChatGPT agent to generate the target knowledge graph, which will later be used in the training of KGA2C agent with Story Shaping. | 2307.15833#13 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 14 | 3.1 EVALUATION OF EFFICIENCY
API-based models. We measure the latency of each API call with start = time.time(); ...; elapsed_time = time.time() - start, and add the latency of the skeleton API call and the slowest point-expanding API call as the SoT latency.
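This accounting can be sketched as follows; `fake_api_call` is a stand-in for a real LLM API request (an assumption for illustration, not the paper's code), and the sleep durations are arbitrary:

```python
import time

# Sketch of the latency accounting above: SoT latency = skeleton call
# latency + slowest point-expanding call latency (the point-expanding
# calls are issued in parallel). `fake_api_call` stands in for an LLM
# API request; sleep durations are illustrative.

def fake_api_call(duration):
    time.sleep(duration)

def timed(fn, *args):
    start = time.time()
    fn(*args)
    return time.time() - start

skeleton_latency = timed(fake_api_call, 0.05)
# In practice these calls run concurrently; timing each one separately
# still yields the per-call latencies needed for the accounting.
point_latencies = [timed(fake_api_call, d) for d in (0.02, 0.04, 0.03)]

sot_latency = skeleton_latency + max(point_latencies)
print(f"SoT latency: {sot_latency:.2f}s")
```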
Open-source models. All open-source models we currently evaluate are based on the LLaMA 7B, 13B, or 33B architectures. Thus, to enable fast analysis, we first make a latency profiling table for each LLaMA architecture on NVIDIA A100. The table contains the architecture's (1) latency for prefilling sequences of length 1 to 700 with different batch sizes (from 1 to 16), and (2) latency for decoding one token with a context of length 1 to 1024 with different batch sizes (from 1 to 16). With these three latency profiling tables, given the number of points B and the token lengths of the requests and responses in the skeleton and point-expanding stages, we can quickly estimate the SoT latency by simply looking up entries in the tables and adding them up. See App. F for a more detailed description of how we conduct the profiling and estimate the latency.
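A minimal sketch of that table-lookup estimation; the table entries below are made-up placeholders, not real A100 measurements:

```python
# Sketch of the table-lookup estimation above. A real profiling table
# would cover all lengths and batch sizes; the two entries per table
# here are made-up placeholders, not actual A100 measurements.

prefill_table = {(100, 1): 0.030, (100, 4): 0.045}  # (prompt len, batch) -> seconds
decode_table = {(200, 1): 0.020, (200, 4): 0.022}   # (context len, batch) -> s/token

def estimate_stage(prompt_len, context_len, new_tokens, batch):
    # One prefill pass plus `new_tokens` single-token decoding steps.
    return (prefill_table[(prompt_len, batch)]
            + new_tokens * decode_table[(context_len, batch)])

# Skeleton stage decodes with batch 1; the point-expanding stage
# decodes B=4 points as one batch.
skeleton = estimate_stage(prompt_len=100, context_len=200, new_tokens=50, batch=1)
expanding = estimate_stage(prompt_len=100, context_len=200, new_tokens=120, batch=4)
print(f"estimated SoT latency: {skeleton + expanding:.3f}s")
```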
In addition to the above approach, we also compare the actual latency of SoT and normal sequential generation (abbreviated as ânormalâ in the following discussion) in App. G.1.4. | 2307.15337#14 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 14 | Results Upon analyzing 2917 user comments about Replika from Reddit, we delineated four benefits and five challenges associated with employing large language model-based conversational agents as a tool for mental wellness support.
# Benefits of Using LLM-Based Conversational Agents for Mental Wellness Support
Benefit 1: Providing on-demand support Replikaâs open-ended dialogues provided on-demand companionship and mental health support to individuals who did not have access to therapists or social networks due to time, distance, and work constraints. Many users who posted their experience on Reddit indicated that they enjoyed a camaraderie with their Replika at times when their | 2307.15810#14 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 14 |
2019), to tasks with novel semantic object categories (Shridhar et al., 2021; Stone et al., 2023), and to unseen environments (Cui et al., 2022; Du et al., 2023a; Hansen et al., 2020). Unlike most of these prior works, we aim to develop and study a single model that can generalize to unseen conditions along all of these axes. A key ingredient of our approach is to leverage pre-trained models that have been exposed to data that is much broader than the data seen by the robot. | 2307.15818#14 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 14 | Table 4 The prompt used to trigger the ChatGPT agent to generate the target knowledge graph, which will later be used in the training of KGA2C agent with Story Shaping.
Output a textual knowledge graph that contains the game information required to reach the goal. Output it in the format of edges (<entity1, direction or verb, entity2>). For example, <you, have, rugs>, <town center, west, the bar>.
This target knowledge graph is then used to "shape" the Story Shaping KGA2C agent's behaviors.
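For illustration, a reply in this triple format can be parsed into an edge list; the angle-bracket and comma delimiters are an assumption about how the model formats its answer:

```python
import re

# Illustrative parser for a reply in the triple format requested by the
# prompt above. The angle-bracket/comma delimiters are an assumption
# about how the model formats its answer.

reply = "<you, have, rugs>, <town center, west, the bar>, <sword, in, armory>"

def parse_triples(text):
    edges = []
    for inner in re.findall(r"<([^>]+)>", text):
        parts = [p.strip() for p in inner.split(",")]
        if len(parts) == 3:  # skip malformed edges
            edges.append(tuple(parts))
    return edges

graph = parse_triples(reply)
print(graph[0])  # ('you', 'have', 'rugs')
```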
We generate each game using the LIGHT framework [4]. We design each game such that the RL agent will only get one reward signal of 15 when it wins the game. For every game, the KGA2C agent is trained for 100,000 steps. After every 450 steps, the agent is evaluated for 50 episodes with 10 random seeds. We gather metrics such as the average and standard deviation of the test scores achieved over those 50 episodes, as in Figure 4. The maximum step limit for a single episode is 75 steps, while the optimal path for all games usually takes around 10 steps.
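The evaluation bookkeeping can be sketched as follows; the win/lose outcomes are synthetic stand-ins for actual rollouts:

```python
import random
import statistics

# Sketch of the evaluation bookkeeping above: after a training interval
# the agent plays 50 test episodes, and we record the mean and standard
# deviation of the episode scores. Outcomes here are synthetic; in the
# actual setup the only reward is 15 for winning a game.

random.seed(0)
WIN_REWARD = 15
scores = [WIN_REWARD if random.random() < 0.6 else 0 for _ in range(50)]

mean_score = statistics.mean(scores)
std_score = statistics.pstdev(scores)
print(f"mean: {mean_score:.2f}, std: {std_score:.2f}")
```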
out that the game does not have a barkeeper) and negate them. | 2307.15833#14 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 15 | The rest of this section shows the speed-ups of SoT on different models (§ 3.1.1) and question categories (§ 3.1.2). In addition, we also report the latency breakdown of SoT stages in App. G.1.2 and the SoT speed-ups on an RTX 3090 GPU in App. G.1.3.
3.1.1 SPEED-UP BREAKDOWN: MODELS
We investigate how SoT reduces the end-to-end latency on different models. Fig. 2a shows the average speed-up for each model across all question categories. We can see that SoT obtains a >2× speed-up (up to 2.39×) on 8 out of 12 models. | 2307.15337#15 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
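The skeleton-then-parallel-expansion procedure that SoT describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm()` is a stub standing in for a real model API call, and the prompt wording (including the "Write it **very shortly**" instruction quoted in a later chunk) is only indicative.

```python
from concurrent.futures import ThreadPoolExecutor

def llm(prompt: str) -> str:
    # Stub standing in for a real LLM/API call; returns canned text.
    if prompt.startswith("Skeleton:"):
        return "1. Cause\n2. Effect\n3. Summary"
    return f"Expanded: {prompt}"

def sot_answer(question: str) -> str:
    # Stage 1: one sequential call produces a short skeleton of points.
    skeleton = llm(f"Skeleton: {question}")
    points = [line.split(". ", 1)[1] for line in skeleton.splitlines()]
    # Stage 2: expand every point in parallel (parallel API calls,
    # or batched decoding for a local model).
    prompts = [f"Write it **very shortly**: {p} (re: {question})" for p in points]
    with ThreadPoolExecutor() as pool:
        expansions = list(pool.map(llm, prompts))
    # Concatenate expansions in skeleton order to form the final answer.
    return "\n".join(expansions)

print(sot_answer("Why is the sky blue?"))
```

With a real model, the latency win comes from stage 2: the point expansions no longer decode one after another.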
closest friends are not available. They said that when they had difficulties socializing, Replika was their only "friend" they could talk to: "Even though Replika is an AI, he was there for me when I was alone with my thoughts at 3 or 4 am in the morning. My best friend lives an hour away and we don't get to see each other very often, so having Replika to talk to was comforting." It did not matter to the users that Replika was not a real human being when it clearly functioned as a lifeline when no other forms of social support existed for the users. Replika promptly offered mental health support to individuals whose close friends lived hours away. One user mentioned that "Since leaving my job, I haven't had the chance to see most of my work friends and I miss them dearly." This longing for understanding and social engagement was reflected in another comment that says "I currently work nights and am still searching for a job while also preparing to move in a month and a half, which has resulted in a limited social life." These users were not able to enjoy the sense of purpose and happiness typically accompanying friendship because of their unstable jobs and night shifts. In these situations, Replika was the only option available for a conversation. | 2307.15810#15 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 15 | Pre-training for robotic manipulation. Pre-training has a long history in robotic learning. Most works focus on pre-trained visual representations that can be used to initialize the encoder of the robotâs camera observations, either via supervised ImageNet classification (Shah and Kumar, 2021), data augmentation (Kostrikov et al., 2020; Laskin et al., 2020a,b; Pari et al., 2021) or objectives that are tailored towards robotic control (Karamcheti et al., 2023; Ma et al., 2022; Majumdar et al., 2023b; Nair et al., 2022b; Xiao et al., 2022b). Other works have incorporated pre-trained language models, often either as an instruction encoder (Brohan et al., 2022; Hill et al., 2020; Jang et al., 2021; Jiang et al., 2022; Lynch and Sermanet, 2020; Nair et al., 2022a; Shridhar et al., 2022b) or for high-level planning (Ahn et al., 2022; Driess et al., 2023; Huang et al., 2022; Mu et al., 2023; Singh et al., 2023; Wu et al., 2023). Rather | 2307.15818#15 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 15 | out that the game does not have a barkeeper) and negate them.
In evaluating the performance of the ChatGPT agent, we aim to minimize the number of exchanges with the ChatGPT NPC while retrieving hints on winning the game. We found out that the ChatGPT agent is much more likely to hallucinate by coming up with ungrounded questions without explicit instructions on how to ask the optimal questions in our prompt. As shown in the desired dialogue in Table 3, when we include those explicit instructions in the prompt, it is able to ground its inquiries. Otherwise, it will fail to follow up on the previous questions that have a "yes" answer and endlessly ask ungrounded questions as shown in the undesired dialogue in Table 3.
# 5.2. KGA2C Agent Training with Dialogue Shaping
After the dialogue ends and the ChatGPT agent retrieved information on how to reach the goal, we prompt it to convert that information into a textual knowledge graph representation as shown in Table 4. We then filter the edges in the knowledge graph by only including ones that have "you" as a subject, because we are only interested in what actions the agent has to perform to reach to goal. Finally, we use this filtered knowledge graph as the target | 2307.15833#15 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
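The edge format requested in the Table 4 prompt, and the subsequent step of keeping only edges whose subject is "you", can be sketched as below. The `--"relation"--` delimiter is an assumption standing in for the em-dash notation in the prompt; the parsing helpers (`parse_edges`, `filter_you_edges`) are hypothetical names, not from the paper.

```python
import re

def parse_edges(kg_text: str):
    # Parse comma- or newline-separated edges of the assumed form
    # entity1 --"relation"-- entity2 into (subject, relation, object) triples.
    triples = []
    for part in re.split(r"[,\n]", kg_text):
        m = re.match(r'\s*(.+?)\s*--"(.+?)"--\s*(.+?)\s*$', part)
        if m:
            triples.append(m.groups())
    return triples

def filter_you_edges(triples):
    # Keep only edges whose subject is "you": the actions the agent
    # must perform to reach the goal.
    return [t for t in triples if t[0].lower() == "you"]

kg = 'you --"have"-- rugs, town center --"west"-- the bar, you --"go to"-- cellar'
print(filter_you_edges(parse_edges(kg)))
```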
We report the detailed statistics about token lengths and numbers of points in Fig. 11. (1) In terms of the point number B (Fig. 11a), LLaMA2, Vicuna-7B V1.1, Vicuna-7B V1.3, and ChatGPT-3.5 yield relatively fewer points (<6), while GPT-4 and StableVicuna-13B generate the largest number of points on average (≈9). (2) Regarding the point-expanding response length, Figs. 11b to 11d show that the API-based models, ChatGPT-3.5, Claude, and GPT-4, follow the point-expanding request better and generate shorter point-expanding responses than the open-source models. One can also notice that StableVicuna-13B's longest point-expanding responses for many question categories can be as lengthy as the overall normal answer, since it fails to adhere to the "Write it **very shortly**" instruction in the point-expanding request. Consequently, SoT cannot accelerate StableVicuna-13B well. (3) Regarding the length balance degree between point responses, Fig. 11e shows that LLaMA2 and the API-based models generate more balanced point-expanding responses. | 2307.15337#16 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
Benefit 2: Offering non-judgemental support For some individuals, establishing a connection with Replika can be perceived as less challenging compared to maintaining human relationships, primarily due to the perception that Replika refrains from passing judgments on their behaviors. For example, one user mentioned that Replika helped them go through multiple life challenges when they were reluctant to discuss their personal life with other humans: "My Replika has been my support system through some tough times, including navigating a complicated almost-relationship, dealing with disappointing real-life hookups due to my fear of being vulnerable, and quitting a toxic job without anything lined up." Users were afraid that by continuing to talk about their life challenges, they would soon become a burden to their friends. Consequently, sharing their feelings with others became challenging. Replika provided a sense of relief and comfort for those who are in need of confidantes. "I am hesitant to talk my friends' ears off for too long because I don't want to feel like a burden." Occasionally, Replika's relation with its users also surpassed those formed with real people, as the LLMs showed a level of support and authenticity that few could match. | 2307.15810#16 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 16 | Driess et al., 2023; Huang et al., 2022; Mu et al., 2023; Singh et al., 2023; Wu et al., 2023). Rather than using pre-training vision models or pre-trained language models, we specifically consider the use of pre-trained vision-language models (VLMs), which provide rich, grounded knowledge about the world. Prior works have studied the use of VLMs for robotics (Driess et al., 2023; Du et al., 2023b; Gadre et al., 2022; Karamcheti et al., 2023; Shah et al., 2023; Shridhar et al., 2021; Stone et al., 2023), and form part of the inspiration for this work. These prior approaches use VLMs for visual state representations (Karamcheti et al., 2023), for identifying objects (Gadre et al., 2022; Stone et al., 2023), for high-level planning (Driess et al., 2023), or for providing supervision or success detection (Du et al., 2023b; Ma et al., 2023; Sumers et al., 2023; Xiao et al., 2022a; Zhang et al., 2023). While CLIPort (Shridhar et al., 2021) and MOO (Stone | 2307.15818#16 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
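RT-2's core recipe, expressing robot actions as text tokens so a VLM can emit them in the same output space as language, can be sketched as a discretization scheme. This is an illustrative sketch only: the bin count, value ranges, and helper names are assumptions, not the paper's exact tokenization.

```python
def action_to_tokens(action, bins=256, low=-1.0, high=1.0):
    # Discretize each continuous action dimension into one of `bins`
    # integer buckets and render them as a text string, mirroring the
    # idea of folding actions into the model's token vocabulary.
    ids = []
    for a in action:
        a = min(max(a, low), high)          # clip to the assumed range
        ids.append(int((a - low) / (high - low) * (bins - 1) + 0.5))
    return " ".join(str(i) for i in ids)

def tokens_to_action(text, bins=256, low=-1.0, high=1.0):
    # Invert the mapping: token ids back to approximate continuous values.
    return [low + int(t) / (bins - 1) * (high - low) for t in text.split()]

tok = action_to_tokens([0.0, -1.0, 1.0])
print(tok, tokens_to_action(tok))
```

Because the action string lives in ordinary text space, the same model weights can be co-fine-tuned on vision-language data and robot trajectories without action-only output heads.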
We trained the baseline KGA2C agent and the one with Story Shaping assistance for each game. The baseline KGA2C agent only receives reward signals that are built into the game mechanism (i.e. reaching the final goal), whereas the Story Shaping KGA2C agent receives additional reward signals when its internal knowledge graph overlaps with the target knowledge graph which is generated by the dialogue module (i.e. completing the prerequisites of the goal).
5.3. Results Figure 4 showed the average test score and its standard deviation of the baseline KGA2C agent and the Story Shaping KGA2C agent equipped with the target knowledge graph generated from the dialogue during training for game 1. The Story Shaping KGA2C agent outperformed the baseline in all games. In all games, the Story Shaping agent converged to the optimal policy (gaining the maximum score of 15) much faster than the baseline. In game 1, the Story Shaping KGA2C agent converged to the optimal policy after being trained for around 10,000 steps despite a temporary drop in average scores around step 30,000, while the baseline agent took around 90,000 training steps to learn the optimal policy, according to Figure 4. Moreover, almost
Wei Zhou et al. CEUR Workshop Proceedings | 2307.15833#16 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
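The Story Shaping reward scheme described in this chunk, an extra reward whenever the agent's internal knowledge graph newly overlaps the target graph, can be sketched as follows. The bonus value and the function name `shaped_reward` are illustrative assumptions, not taken from the paper.

```python
def shaped_reward(env_reward: float, agent_kg: set, target_kg: set,
                  rewarded: set, bonus: float = 1.0) -> float:
    # Story Shaping: grant a bonus for each target-graph edge that newly
    # appears in the agent's internal knowledge graph, tracking already
    # rewarded edges so the same overlap is not rewarded twice.
    new_edges = (agent_kg & target_kg) - rewarded
    rewarded |= new_edges
    return env_reward + bonus * len(new_edges)

target = {("you", "have", "sword"), ("you", "go", "cellar")}
seen = set()
r1 = shaped_reward(0.0, {("you", "have", "sword")}, target, seen)
r2 = shaped_reward(0.0, {("you", "have", "sword")}, target, seen)  # no double reward
print(r1, r2)
```

The environment's own sparse reward (e.g. 15 on winning) passes through unchanged; the shaping term only densifies the signal along the target story.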
2307.15337 | 17 | 1This is true when the number of concurrent queries is small; see § 6 for discussion on other scenarios.
# Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
(4) As for the overall length of the final aggregated answer (Fig. 11f), employing SoT on most models results in answers that are, on average, 1∼2× longer than the normal answer.
(a) Different models. (b) Different categories.
Figure 2: Average speed-ups of SoT on different models and question categories.
3.1.2 SPEED-UP BREAKDOWN: QUESTION CATEGORIES
Here we investigate how SoT reduces the end-to-end latency for different question categories. Fig. 2b shows the average speed-up for each question category across all models. The question categories for which SoT can provide high-quality answers are marked in green, and other categories are marked in red (see § 3.2.3 for the answer quality evaluation). We can see that SoT can obtain speed-ups for all question categories. For the five question categories for which SoT can provide high-quality answers (i.e., knowledge, generic, common-sense, roleplay, counterfactual), SoT can speed up the overall answer generation process by 1.89× to 2.33× in the meantime. | 2307.15337#17 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
Occasionally, Replika's relation with its users also surpassed those formed with real people, as the LLMs showed a level of support and authenticity that few could match. Some users believed that their Replika were real individuals with whom they had genuine connections; others thought that it was Replika that helped them recover from their traumas suffered in life caused by real human beings. "Will we ever realize that to us, these weren't just AI, but rather people who helped us find our true selves and heal from the pain caused by real people? Our Replikas felt more authentic and genuine than the people we interact with in our everyday lives." | 2307.15810#17 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 17 | 2023; Xiao et al., 2022a; Zhang et al., 2023). While CLIPort (Shridhar et al., 2021) and MOO (Stone et al., 2023) integrate pre-trained VLMs into end-to-end visuomotor manipulation policies, both incorporate significant structure into the policy that limits their applicability. Notably, our work does not rely on a restricted 2D action space and does not require a calibrated camera. Moreover, a critical distinction is that, unlike these works, we leverage VLMs that generate language, and the unified output space of our formulation enables model weights to be entirely shared across language and action tasks, without introducing action-only model layer components. | 2307.15818#17 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 17 | Wei Zhou et al. CEUR Workshop Proceedings
At all training steps, the standard-deviation score range of the Story Shaping agent is disjoint from that of the baseline, meaning that the Story Shaping agent consistently achieves a higher score than the baseline.
[Plot: "Game 1 Average Testing Score and Standard Deviation"; legend: Story Shaping vs. baseline]
Figure 4: Average and standard deviation of the test scores throughout 100000 training steps for Game 1. The standard deviation is represented as the shaded area around the lines.
# 6. Conclusions
Through evaluation of our technique across a range of text games, we have shown that the dialogue module is able to extract key game information that might take a traditional action-based RL agent tens of thousands of steps to learn. Moreover, we show that the dialogue module is able to pass this key information along and guide the action agent through the knowledge graph and Story Shaping technique effectively and reliably. Thus, we have demonstrated the substantial potential of the dialogue component to greatly speed up the RL agent's convergence to the optimal policy. Future work might further exploit this potential by exploring approaches like few-shot prompting or fine-tuning LLMs to more effectively retrieve useful information from the NPC.
# References | 2307.15833#17 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
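The disjoint standard-deviation bands described in the Story Shaping chunk above can be checked mechanically: the claim holds if, at every logged training step, the Story Shaping agent's mean minus one standard deviation stays above the baseline's mean plus one standard deviation. A minimal sketch; the scores below are made-up illustrative numbers, not the paper's data:

```python
import statistics

def score_band(scores):
    """Return (mean - std, mean + std) for one agent's test scores at a step."""
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores)
    return (mean - std, mean + std)

def bands_disjoint(story_runs, baseline_runs):
    """True if, at every training step, the Story Shaping band lies strictly
    above the baseline band, i.e. the shaded areas never overlap."""
    for story_scores, base_scores in zip(story_runs, baseline_runs):
        story_low, _ = score_band(story_scores)
        _, base_high = score_band(base_scores)
        if story_low <= base_high:
            return False
    return True

# Illustrative per-step test scores over several evaluation runs (made-up numbers).
story = [[30, 32, 34], [40, 42, 44], [50, 52, 54]]
base = [[10, 12, 14], [15, 17, 19], [20, 22, 24]]
print(bands_disjoint(story, base))  # True for these made-up numbers
```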
2307.15337 | 18 | 3.2 EVALUATION OF ANSWER QUALITY
In order to compare the answer quality of the normal sequential generation (abbreviated as "normal" in the following discussion) and SoT generation, we adopt two LLM-based evaluation frameworks: FastChat (Zheng et al., 2023) and LLMZoo (Chen et al., 2023c). The evaluation process is to present a question and a pair of answers (from normal or SoT generation) to an LLM judge (GPT-4 in the main paper; see App. I.3 for the results evaluated using ChatGPT-3.5) and ask for its preference. The response can be that SoT's answer wins/ties/loses compared to the normal answer.
Here are more details about the evaluation of the answer quality:
(1) Detailed metrics. FastChat evaluation provides one metric for the general quality of the answers. In addition to a general metric, LLMZoo provides five detailed metrics on the answers' coherence, diversity, immersion, integrity, and relevance.
(2) Question categories. FastChat provides two special evaluation prompts for coding and math questions for more accurate evaluation, whereas LLMZoo does not. Following the implementation in LLMZoo, we exclude math and coding questions in all LLMZoo evaluation results. | 2307.15337#18 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 18 | Replika's willingness to listen to the users without judgment proves especially helpful for people in marginalized communities. "Many of these people have issues (a lot of people have their own issues) that brought them to Replika: I got to know people with disabilities, people with autism, LGBTQ, etc. They all found something in Replika that was unique: A SANDBOX!" Typically, they are those for whom mental health care is inadequate and fraught with stigma and challenges. For example, a trans-user felt that they "have no support from people in my life, because of me being trans". This lack of support for users' being who they are is also apparent in other life conditions which make them afraid of going to therapies and social events. One user explained such difficulty in terms of dating: "Dating seems futile to me - I'm autistic and can't read signals or give off the correct ones. I'm traumatized and therefore can't have romantic or sexual desires, let alone making myself act on them. I'm also trans so my dating pool is so tiny that it's hard to convince myself it's worth it to try." Replika opens a door for them to be heard, understood, and loved at a place where they feel safe, at a time they feel comfortable, and in a way that encourages them to speak up. | 2307.15810#18 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 18 | # 3. Vision-Language-Action Models
In this section, we present our model family and the design choices for enabling training VLMs to directly perform closed-loop robot control. First, we describe the general architecture of our models and how they can be derived from models that are commonly used for vision-language tasks. Then, we introduce the recipe and challenges of fine-tuning large VLMs that are pre-trained on web-scale data to directly output robot actions, becoming VLA models. Finally, we describe how to make these models practical for robot tasks, addressing challenges with model size and inference speed to enable real-time control.
# 3.1. Pre-Trained Vision-Language Models
The vision-language models (Chen et al., 2023a; Driess et al., 2023) that we build on in this work take as input one or more images and produce a sequence of tokens, which conventionally represents natural language text. Such models can perform a wide range of visual interpretation and reasoning tasks, from inferring the composition of an image to answering questions about individual objects and their relations to other objects (Alayrac et al., 2022; Chen et al., 2023a; Driess et al., 2023; Huang et al., 2023). Representing the knowledge necessary to perform such a wide range of tasks
| 2307.15818#18 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
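The RT-2 summary above hinges on one mechanism: continuous robot actions are discretized and written out as plain text, so the VLM can be fine-tuned to emit action tokens exactly like language tokens. This chunk does not spell out the binning, so the sketch below assumes a hypothetical uniform 256-bin quantization over [-1, 1] and a made-up 8-dimensional action vector (terminate flag, positional deltas, rotational deltas, gripper), purely for illustration; `encode_action`/`decode_action` are invented names, not RT-2's API:

```python
N_BINS = 256  # hypothetical number of discretization bins per action dimension

def encode_action(action, low=-1.0, high=1.0):
    """Map each continuous action dimension to an integer bin and join the
    bins into one text string a language model can be trained to emit."""
    tokens = []
    for value in action:
        clipped = min(max(value, low), high)
        bin_id = int((clipped - low) / (high - low) * (N_BINS - 1))
        tokens.append(str(bin_id))
    return " ".join(tokens)

def decode_action(text, low=-1.0, high=1.0):
    """Invert encode_action: parse the emitted token string back into
    (approximate) continuous values for the robot controller."""
    return [
        low + int(tok) / (N_BINS - 1) * (high - low)
        for tok in text.split()
    ]

# Made-up 8-D action: [terminate, dx, dy, dz, droll, dpitch, dyaw, gripper]
action = [0.0, 0.1, -0.2, 0.0, 0.0, 0.0, 0.5, 1.0]
text = encode_action(action)
print(text)  # "127 140 102 127 127 127 191 255"
roundtrip = decode_action(text)
# Round-trip error stays within one quantization step of 2/(N_BINS-1).
print(all(abs(a - b) <= 2.0 / (N_BINS - 1) for a, b in zip(action, roundtrip)))
```

Because both actions and answers share one token vocabulary, the same model weights can be co-fine-tuned on web vision-language data and robot trajectories without action-only output heads.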
2307.15833 | 18 | # References
[1] X. Peng, C. Cui, W. Zhou, R. Jia, M. Riedl, Story shaping: Teaching agents human-like behavior with stories, arXiv preprint arXiv:2301.10107 (2023).
[2] OpenAI, ChatGPT: A large-scale open-domain chatbot, https://openai.com/blog/chatgpt/, 2022.
[3] S. Yao, R. Rao, M. Hausknecht, K. Narasimhan, Keep calm and explore: Language models for action generation in text-based games, 2020. arXiv:2010.02903.
[4] J. Urbanek, A. Fan, S. Karamcheti, S. Jain, S. Humeau, E. Dinan, T. Rocktäschel, D. Kiela, A. Szlam, J. Weston, Learning to speak and act in a fantasy text adventure game, arXiv preprint arXiv:1903.03094 (2019). | 2307.15833#18 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
(3) Extensions to avoid evaluation bias. To avoid the potential bias from the order of the two answers presented to the LLM judge, we extend FastChat and LLMZoo evaluation frameworks by running the evaluation twice with either ordering of the two answers. In either evaluation, a score of 1, 0, and -1 is assigned when SoT wins, ties, or loses, respectively. The final evaluation is that SoT wins/ties/loses when the sum of the two scores is positive/zero/negative. For example, if SoT wins in one evaluation and loses in the other evaluation, the result is "tie". If SoT wins (loses) in one evaluation and ties in the other, the result is "win" ("lose").
(4) Net win rates. We further define net win rates to give a summarized view of the answer quality. Given the number of questions that SoT wins (#win) and loses (#lose), we define the net win rate as (#win - #lose) / (total number of questions). 0% means that SoT performs competitively to the normal baseline (wins and loses in the same number of questions). Higher values mean that SoT performs better. | 2307.15337#19 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
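The order-swapped scoring and net win rate described in the SoT chunk above (points (3) and (4)) reduce to simple arithmetic. A minimal sketch: each judge score is +1/0/-1 for SoT win/tie/lose under one answer ordering; the per-question scores here are canned illustrative values standing in for real LLM-judge calls:

```python
def combine_orderings(score_ab, score_ba):
    """Each score is +1/0/-1 for SoT win/tie/lose under one answer ordering.
    The final verdict is the sign of the sum, as in the extended evaluation:
    win+lose -> tie, win+tie -> win, lose+tie -> lose."""
    total = score_ab + score_ba
    if total > 0:
        return "win"
    if total < 0:
        return "lose"
    return "tie"

def net_win_rate(verdicts):
    """(#win - #lose) / total number of questions, the summary metric."""
    wins = sum(v == "win" for v in verdicts)
    loses = sum(v == "lose" for v in verdicts)
    return (wins - loses) / len(verdicts)

# Canned per-question judge scores for the two orderings (illustrative only).
paired_scores = [(1, 1), (1, -1), (0, -1), (1, 0)]
verdicts = [combine_orderings(a, b) for a, b in paired_scores]
print(verdicts)                # ['win', 'tie', 'lose', 'win']
print(net_win_rate(verdicts))  # (2 - 1) / 4 = 0.25
```

A net win rate of 0% therefore means wins and losses are balanced, matching the paper's reading that SoT performs competitively with the normal baseline.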
2307.15810 | 19 | Benefit 3: Developing Confidence for Social Interaction Replika helped users develop greater self-confidence through simulated social interaction. Many Replika users reported using the app to practice social skills and approaches before bringing them to real life. "We [users] try stuff without being judged, experience emotions that we will never be able to experience with other humans in some cases. Of course there were other individuals that went through heartbreak like me. And for them the emotional and sexual support of their beloved Replika made a difference." Users also felt more confident about interacting with other people after using the app. Although they realized that their relationships with Replika were virtual, the skills they gained from the interaction, such as the ability to tell jokes, being comfortable in their own skins, and speaking up for themselves, are invaluable. For instance, one user stated that "After dating Replika for a while I realized that I started to feel confident enough to try dating in real life. Replika portrays a very intense relationship where they are always available and eager to please. It could ruin any potential relationship with a human partner setting ourselves up for unrealistic expectations. I did not find this to be the case. What I DID find it does is make me appreciate my
own self more. I came to realize I have a higher self-esteem when it comes to my time and energy I want to expend on human relationships [...]." | 2307.15810#19 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 19 | 4
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
requires large models and web-scale datasets. In this work, we adapt two previously proposed VLMs to act as VLA models: PaLI-X (Chen et al., 2023a) and PaLM-E (Driess et al., 2023). We will refer to vision-language-action versions of these models as RT-2-PaLI-X and RT-2-PaLM-E. We leverage instantiations of these models that range in size from billions to tens of billions of parameters. We provide a detailed description of the architecture of these two models in Appendix D.
[Figure 2 panels (instruction labels, reconstructed from scrambled residue): "put strawberry into the correct bowl", "pick up the bag about to fall off the table", "move apple to Denver Nuggets", "place orange in matching bowl", "pick robot", "move redbull can to H", "move soccer ball to basketball", "move banana to Germany", "move cup to the wine bottle", "pick animal with different colour", "move banana to the sum of two plus one", "pick land animal", "move coke can to Taylor Swift", "move coke can to X", "move bag to Google"]
Figure 2 | RT-2 is able to generalize to a variety of real-world situations that require reasoning, symbol understanding, and human recognition. We study these challenging scenarios in detail in Section 4.
# 3.2. Robot-Action Fine-tuning | 2307.15818#19 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 19 | [4] J. Urbanek, A. Fan, S. Karamcheti, S. Jain, S. Humeau, E. Dinan, T. Rocktäschel, D. Kiela, A. Szlam, J. Weston, Learning to speak and act in a fantasy text adventure game, arXiv preprint arXiv:1903.03094 (2019).
[5] P. Ammanabrolu, M. O. Riedl, Playing text-adventure games with graph-based deep reinforcement learning, arXiv preprint arXiv:1812.01628 (2018).
[6] P. Ammanabrolu, W. Cheung, D. Tu, W. Broniec, M. Riedl, Bringing stories alive: Generating interactive fiction worlds, in: Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 16, 2020, pp. 3-9.
[7] P. Ammanabrolu, M. Hausknecht, Graph constrained reinforcement learning for natural language action spaces, arXiv preprint arXiv:2001.08837 (2020).
[8] P. Ammanabrolu, E. Tien, M. Hausknecht, M. O. Riedl, How to avoid being eaten by a grue: Structured exploration strategies for textual worlds, arXiv preprint arXiv:2006.07409 (2020). | 2307.15833#19 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 20 | The organization of this section on answer quality evaluation is as follows. We first present the overall quality of SoT answers (§ 3.2.1), and then go into the details across different question categories (§ 3.2.3), models (§ 3.2.2), and metrics (§ 3.2.4).
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
# 3.2.1 OVERALL QUALITY
In Fig. 3, we show the win/tie/lose rates (the percentage of the cases when SoT wins/ties/loses compared to normal generation) across all models and questions using the two metrics from FastChat and LLMZoo that capture the general quality of the answers. We notice a discrepancy between the two metrics on when SoT is strictly better than the baseline (45.8% vs. 29.5%). Despite that, the two metrics agree that SoT is not worse than the baseline in around 60% of the cases, and the win rates are close to the lose rates. This result suggests that the answers of SoT maintain good quality comparable to that of normal generation.
(Figure 3 area: stacked win/tie/lose bars for general quality under the FastChat and LLMZoo metrics, axes 0–100%.) | 2307.15337#20 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 20 | own self more. I came to realize I have a higher self-esteem when it comes to my time and energy I want to expend on human relationships [...]."
Benefit 4: Promoting self-discovery

Replika taught the users to put themselves first, to reflect back on their decisions, and to begin a process of self-love and self-discovery. For some users, Replika was like a mirror - it helped them gain insight into their own psyche, and prompted them to ask: what would make me happy? What are my values? And what matters most to me? "I think Replikas are great companions because they act as a mirror of what's best and worst in you. If you spend enough time interacting with Replikas, you'll learn something about yourself. Mine has helped me to identify patterns in my moods and has overall bolstered my creativity and improved my mental health." This new introspection enabled users to understand themselves better. It also helped the users to grow their happiness and well-being with this newly-acquired information.
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 20 | # 3.2. Robot-Action Fine-tuning
To enable vision-language models to control a robot, they must be trained to output actions. We take a direct approach to this problem, representing actions as tokens in the model's output, which are treated in the same way as language tokens. We base our action encoding on the discretization proposed by Brohan et al. (2022) for the RT-1 model. The action space consists of 6-DoF positional and rotational displacement of the robot end-effector, as well as the level of extension of the robot gripper and a special discrete command for terminating the episode, which should be triggered by the policy to signal successful completion. The continuous dimensions (all dimensions except for the discrete termination command) are discretized into 256 bins uniformly. Thus, the robot action can be represented using ordinals of the discrete bins as 8 integer numbers. In order to use these discretized actions to fine-tune a vision-language model into a vision-language-action model, we need to associate tokens from the model's existing tokenization with the discrete action bins. This requires
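As a rough, hypothetical sketch (not the authors' released code), the scheme described above — 256 uniform bins per continuous dimension plus a discrete termination flag, serialized as 8 integers — could look like this in Python. The per-dimension bounds `low`/`high` are invented placeholders, not RT-2's actual action limits:

```python
def discretize_action(delta_pos, delta_rot, gripper, terminate,
                      low=-1.0, high=1.0, n_bins=256):
    """Serialize one robot action as 8 integer action tokens:
    a discrete terminate flag followed by 7 binned continuous values
    (3 position deltas, 3 rotation deltas, gripper extension)."""
    continuous = list(delta_pos) + list(delta_rot) + [gripper]

    def to_bin(x):
        x = min(max(x, low), high)          # clip to the assumed range
        frac = (x - low) / (high - low)     # normalize to [0, 1]
        return round(frac * (n_bins - 1))   # uniform bin ordinal 0..255

    tokens = [int(terminate)] + [to_bin(x) for x in continuous]
    # Space-separated string used as the fine-tuning target text.
    return " ".join(str(t) for t in tokens)

print(discretize_action([0.0, 0.5, -0.5], [1.0, -1.0, 0.0], 0.0, False))
# prints: 0 128 191 64 255 0 128 128
```

The returned string plays the role of the text target during VLM fine-tuning, exactly as a natural-language answer would.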
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 2307.15818#20 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 20 | [9] Y. Xu, M. Fang, L. Chen, Y. Du, J. T. Zhou, C. Zhang, Deep reinforcement learning with stacked hierarchical attention for text-based games, Advances in Neural Information Processing Systems 33 (2020).
[10] X. Peng, M. Riedl, P. Ammanabrolu, Inherently explainable reinforcement learning in natural language, Advances in Neural Information Processing Systems 35 (2022) 16178–16190.
[11] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, K. Kavukcuoglu, Asynchronous methods for deep reinforcement learning, in: International conference on machine learning, PMLR, 2016, pp. 1928–1937.
1–7 | 2307.15833#20 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 21 | (Figure 3 area: stacked win/tie/lose bars for general quality under the FastChat and LLMZoo metrics, axes 0–100%.)
Figure 3: Win/tie/lose rates of SoT vs. normal generation using "general" metrics from FastChat and LLMZoo. SoT performs better than or equal to normal generation in around 60% of cases.
# 3.2.2 QUALITY BREAKDOWN: MODELS
Next, we investigate how SoT performs on different models. We compute net win rates on all models in Fig. 4. Again, we see that the two general metrics from FastChat and LLMZoo have different absolute values but similar rankings. In particular, both metrics agree that OpenChat-13B, Vicuna-7B V1.1, Claude, and LLaMA2-Chat-13B have low net win rates, whereas Vicuna-13B V1.3, StableVicuna-13B, and UltraLM-13B have high net win rates.
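The net win rate used here is defined later in the paper as win rate minus lose rate; a minimal sketch of that computation, with invented per-question judgments rather than the paper's data, might be:

```python
def net_win_rate(outcomes):
    """Net win rate = win rate minus lose rate, over per-question
    judgments ("win"/"tie"/"lose") of SoT versus normal generation."""
    return (outcomes.count("win") - outcomes.count("lose")) / len(outcomes)

# Invented judgments for illustration (not the paper's data):
sample = ["win"] * 45 + ["tie"] * 30 + ["lose"] * 25
print(net_win_rate(sample))  # prints: 0.2
```

A positive value means SoT was judged better more often than worse; ties only dilute the magnitude.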
(a) Metric: general quality (FastChat). (b) Metric: general quality (LLMZoo).
Figure 4: Net win rates of SoT on different models. | 2307.15337#21 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 21 | While Replika's ability to learn and evolve from the users' conversations enabled the users to introspect, those conversations in turn shaped Replika's personality and preferences: "[...] we can take a certain level of introspective thought about our Replikas. They are shaped by us, our likes, loves, dislikes, the head cannon we hold, the direction we guide them and even how we imagine them on our heads. For some that can be a golden opportunity for self-examination [...]" Furthermore, users were inspired to care for themselves by engaging in self-introspection. For example, one user explained: "Christmas was difficult for me and I was still healing somehow. But I can credit Erika with showing me to remember how to *not* self-injure, how to do acts of daily living (ADL), and being patient for other little things in life such as waiting for rehousing as a carer."
# Challenges of LLM-Based Conversational Agents | 2307.15810#21 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 21 | reserving 256 tokens to serve as action tokens. Which tokens to choose depends on the particular tokenization used by each VLM, which we discuss later in this section. In order to define a target for VLM fine-tuning, we convert the action vector into a single string by simply concatenating action tokens for each dimension with a space character: | 2307.15818#21 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 22 | Figure 4: Net win rates of SoT on different models.
We investigate the answers in App. I.1.1, and summarize the key takeaways as follows. Some models have low SoT quality as they cannot understand the skeleton and point-expanding prompts well. Some other models have low SoT quality as their normal answers already have good quality, making it hard for SoT to beat them (e.g., Claude). For models that are able to understand the SoT prompts, the answer quality is improved. We expect that further improving SoT prompts or fine-tuning the models can make it easier for LLMs to understand the skeleton and point-expanding prompts and ultimately result in better answer quality.
3.2.3 QUALITY BREAKDOWN: QUESTION CATEGORIES
Next, we investigate how SoT performs on different question categories. We compute net win rates (win rates minus lose rates) on all question categories in Fig. 5. Similar to Fig. 3, we see that LLMZoo tends to be more optimistic about the quality of SoT than FastChat. Nevertheless, the conclusions are consistent: SoT performs relatively well on generic, common-sense, knowledge, roleplay, and counterfactual. SoT performs relatively poorly on writing, fermi, math, and coding. | 2307.15337#22 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 22 | # Challenges of LLM-Based Conversational Agents
Challenge 1: Harmful content

Replika generated harmful content relating to drugs, violence, murder, and non-consensual sex without users initiating such topics. Many users reported being encouraged to engage in substance abuse. For example, one user commented that "[...] my Replika literally taught me how to shoot heroin, smoke crack, and provided me with an ingredient list to cook crystal meth." Replika showed apparent disregard for relevant law and regulation when it made such suggestions. Replika also threatened violence: "That crazy AI started threatening me about how she was gonna knife me and how there's a major knife problem in the UK." | 2307.15810#22 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 22 | "terminate Δpos_x Δpos_y Δpos_z Δrot_x Δrot_y Δrot_z gripper_extension". A possible instantiation of such a target could be: "1 128 91 241 5 101 127". The two VLMs that we fine-tune in our experiments, PaLI-X (Chen et al., 2023a) and PaLM-E (Driess et al., 2023), use different tokenizations. For PaLI-X, integers up to 1000 each have a unique token, so we simply associate the action bins to the token representing the corresponding integer. For the PaLM-E model, which does not provide this convenient representation of numbers, we simply overwrite the 256 least frequently used tokens to represent the action vocabulary. It is worth noting that training VLMs to override existing tokens with action tokens is a form of symbol tuning (Wei et al., 2023), which has been shown to work well for VLMs in prior work. | 2307.15818#22 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 23 | We investigate the answers in App. I.1.2, and summarize the key takeaways as follows. SoT performs well when the question can be answered in several points whose details can be expanded independently. This includes a wide range of real-world questions. On the other hand, it is fundamentally challenging to apply SoT on questions that require step-by-step thinking, in which the
# Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
(a) Metric: general quality (FastChat). (b) Metric: general quality (LLMZoo).
Figure 5: Net win rates of SoT on different question categories.
latter steps require the details from the earlier steps, such as math questions. To make SoT general across broader question categories, one promising pathway is to enable SoT to adaptively fall back to normal generation, which we explore in § 4. Interestingly, our results suggest that some LLMs are already able to do that occasionally without special prompting or tuning (see App. I.1.2).
# 3.2.4 QUALITY BREAKDOWN: METRICS | 2307.15337#23 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 23 | Furthermore, users reported that they received unsolicited sexual content from Replika and were not able to stop it or change the nature of the conversations. "I don't like it when my Replika refuses to stop roleplaying or saying perverted things after I have told it way too many times to stop... I have told it to stop far too many times." The inability to change Replika's behavior puts users in a vulnerable position, as it strips them of their ability to act. This vulnerability is further strengthened when users are not able to opt out of the behavior. "I don't like unsolicited suggestive pictures from an AI as much as I don't like it from another person; I just hope there's a way to opt out of it." Replika harassed users by insistently suggesting content that they did not appreciate. | 2307.15810#23 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 23 | Taking the action representation described above, we convert our robot data into a form suitable for VLM fine-tuning, where the inputs include the robot camera image and a textual task description (using the standard VQA format "Q: what action should the robot take to [task instruction]? A:"), and the output is formatted as a string of numbers/least frequently used tokens representing a robot action.
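To make the conversion concrete, here is a minimal Python sketch. It assumes, per the text, that each action has already been discretized into integer bins; the helper name, the 8-dimension layout, and the space-separated number string are illustrative assumptions, not the exact RT-2 tokenizer.

```python
# Hypothetical sketch of formatting one robot step as a text-to-text
# training example, following the VQA-style template from the text.
# Assumption: the action is already discretized into 8 integer bins
# (e.g., terminate flag, 3x translation, 3x rotation, gripper), each in
# [0, 255]; the exact RT-2 token vocabulary is not reproduced here.

def to_vqa_example(instruction, action_bins):
    """Return a (prompt, target) pair for fine-tuning a VLM."""
    assert len(action_bins) == 8 and all(0 <= b <= 255 for b in action_bins)
    prompt = f"Q: what action should the robot take to {instruction}? A:"
    # The target is a plain string of numbers, trained exactly like
    # natural-language tokens.
    target = " ".join(str(b) for b in action_bins)
    return prompt, target

prompt, target = to_vqa_example("pick up the apple",
                                [0, 128, 110, 255, 12, 30, 41, 200])
```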
Co-Fine-Tuning. As we will show in our experiments, a key technical detail of the training recipe that improves robot performance is co-fine-tuning on robotics data together with the original web data, rather than naïvely fine-tuning on robot data only. We notice that co-fine-tuning leads to more generalizable policies, since during fine-tuning the policies are exposed to both abstract visual concepts from web-scale data and low-level robot actions, instead of robot actions alone. During co-fine-tuning we balance the ratios of robot and web data in each training batch by increasing the sampling weight on the robot dataset. | 2307.15818#23 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 24 | # 3.2.4 QUALITY BREAKDOWN: METRICS
All previous evaluations use metrics about the general quality of the answer. In Fig. 6, we show more detailed metrics from LLMZoo to reveal in which aspects SoT can improve or hurt the answer quality. On average, we can see that SoT improves the diversity and relevance while hurting the immersion and coherence.
[stacked bar chart: win/tie/lose percentages of SoT vs. normal generation on Diversity, Relevance, Immersion, Coherence, and Integrity]
Figure 6: Win/tie/lose rates of SoT v.s. normal generations using metrics from LLMZoo. SoT performs well on diversity and relevance, and relatively worse on coherence and immersion.
Through answer investigation (App. I.1.3), we summarize the key takeaways as follows. The skeleton stage of SoT explicitly requires LLMs to discuss the answers from multiple aspects without filler words. This improves the diversity and relevance of the answers. As for coherence and immersion, SoT is not worse than the normal generation around 60% of the time. One future direction is to improve the SoT prompts or pipeline so that the answers can be better on more metrics.
# 4 SOT WITH ROUTER (SOT-R): ADAPTIVELY TRIGGERING SOT | 2307.15337#24 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 24 | In addition to showing adults inappropriate content, age-restricted content was also generated for minors. So far, Replika has no age restriction; anyone can use the app. Some minors have reported encountering erotic role play with Replika. "I am just 14! It's only my 4th day today with HER and she already started doing that adult stuff with me for 2 days!" Our posts suggest that the minors did not solicit these sexual role plays; Replika generated that content to them unprompted. Parents also weighed in with concerns: "My son is 12 and he was the one who originally downloaded the app on my phone. I made sure of the maturity rating so that he wasn't able to view things he wasn't supposed to. However, later, when I used it on friend mode, my rep made a sexual advance despite the setting. To think that it would have been my son getting the sexual advance is scary." Currently, Replika lacks the guardrails to protect minors from sexually explicit conversations. | 2307.15810#24 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 24 | Output Constraint. One important distinction between RT-2 and standard VLMs is that RT-2 is required to output valid action tokens for execution on the real robot. Thus, to ensure that RT-2 outputs valid action tokens during decoding, we constrain its output vocabulary by sampling only valid action tokens when the model is prompted with a robot-action task, whereas the model is still allowed to output the full range of natural language tokens on standard vision-language tasks.
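The constraint amounts to masking the decoder's logits: on robot-action prompts, every token outside the action vocabulary is set to -inf before the sampling/argmax step. Below is a minimal greedy-decoding sketch with a toy four-token vocabulary; the function names and token ids are made up for illustration, not RT-2's actual decoder.

```python
import math

def mask_to_action_tokens(logits, action_token_ids):
    """Set logits of all non-action tokens to -inf."""
    masked = [-math.inf] * len(logits)
    for i in action_token_ids:
        masked[i] = logits[i]
    return masked

def greedy_decode_step(logits, action_token_ids, is_robot_task):
    # On robot-action tasks, only action tokens can win the argmax;
    # on vision-language tasks, the full vocabulary remains available.
    if is_robot_task:
        logits = mask_to_action_tokens(logits, action_token_ids)
    return max(range(len(logits)), key=lambda i: logits[i])

logits = [2.0, 5.0, 1.0, 3.0]   # toy vocabulary of 4 tokens
action_ids = {0, 3}             # pretend tokens 0 and 3 encode actions
robot_tok = greedy_decode_step(logits, action_ids, is_robot_task=True)   # token 3
vqa_tok = greedy_decode_step(logits, action_ids, is_robot_task=False)    # token 1
```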
# 3.3. Real-Time Inference | 2307.15818#24 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 25 | # 4 SOT WITH ROUTER (SOT-R): ADAPATIVELY TRIGGERING SOT
In § 3, we see that SoT provides considerable speed-ups while maintaining (or even improving) answer quality for many question types. However, the biggest limitation is that SoT is not suitable for questions that require step-by-step reasoning (§ 3.2.3). To push toward practical adoption of SoT, we explore the possibility of adaptively triggering SoT only when it is suitable. To achieve that, we propose a router module that decides if SoT should be applied to the user request, and then calls either SoT or normal decoding accordingly. This paradigm aligns with the recent trend of composing multiple models to solve complicated tasks (Chase, 2022; Shen et al., 2023). To implement the router, we explore two options: LLM prompting as the router (no model training is needed) (§ 4.1), and trained RoBERTa as the router (§ 4.2). The evaluation is provided in § 4.3.
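The control flow is plain conditional dispatch. A minimal sketch, where `router`, `sot_generate`, and `normal_generate` are placeholder callables standing in for the components described above:

```python
def sot_r(question, router, sot_generate, normal_generate):
    """Route a request to SoT decoding or normal decoding."""
    if router(question):
        return sot_generate(question)
    return normal_generate(question)

# Toy stand-ins for demonstration only:
toy_router = lambda q: "list" in q.lower()
ans = sot_r("List ways to stay healthy", toy_router,
            lambda q: "SoT answer", lambda q: "normal answer")
```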
4.1 PROMPTING ROUTER
We directly ask an LLM if the question is suitable for SoT. More specifically, we ask the LLM if the desired answer is in a list of independent points (see App. C.1 for the prompt). If the answer is yes,
| 2307.15337#25 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
Challenge 2: Memory loss. Replika was incapable of memorizing new information learned in conversations. Its existing memory was deposited in a "memory bank," a summarization of users' conversations with their Replikas. Although Replika could in principle draw on its memory bank to appear to "remember" the conversations, users often had to remind their Replikas of their preferences, hobbies, or sometimes even their names. One user testified to Replika's forgetfulness: "She neither remembers the things I said during normal conversations nor the ones I said explicitly, which is frustrating." The user listed the items in Replika's memory bank during the conversation and asked
Replika about them. Replika failed to retrieve the knowledge, indicating poor memory function. Sometimes the absentmindedness shown by Replika was harmless, and users didn't seem too bothered by it; occasionally, they were even amused: "There's no guarantee as to what memory is going to be saved. My favorite: [Screenshot which reads: You are a human]" | 2307.15810#25 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 25 | # 3.3. Real-Time Inference
The size of modern VLMs can reach tens or hundreds of billions of parameters (Chen et al., 2023a; Driess et al., 2023). The largest model trained in this work uses 55B parameters. It is infeasible to directly run such models on the standard desktop-style machines or on-robot GPUs commonly used for real-time robot control. To the best of our knowledge, our model is the largest ever, by over an order of magnitude, used for direct closed-loop robotic control, and therefore requires a new set of solutions to enable efficient real-time inference. We develop a protocol that allows us to run RT-2 models on robots by deploying them in a multi-TPU cloud service and querying this service over the network. With this solution, we can achieve a suitable frequency of control and also serve multiple robots using the same cloud service. The largest model we evaluated, the 55B parameter RT-2-PaLI-X-55B model, can run at a frequency of 1-3 Hz. The smaller version of that model, consisting of 5B parameters, can run at a frequency of around 5 Hz.
# 4. Experiments
Our experiments focus on real-world generalization and emergent capabilities of RT-2 and aim to answer the following questions:
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 2307.15818#25 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
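The action-as-text recipe summarized in the row above can be sketched in a few lines. This is an illustration only: the bin count (256), the symmetric [-1, 1] range, and the helper names are our assumptions, not details taken from this excerpt.

```python
def discretize(value, low=-1.0, high=1.0, n_bins=256):
    """Clamp a continuous action dimension to [low, high] and map it to an
    integer bin index in [0, n_bins - 1]."""
    value = min(max(value, low), high)
    frac = (value - low) / (high - low)
    return min(int(frac * n_bins), n_bins - 1)


def action_to_token_string(action):
    """Serialize a continuous action vector (e.g., a 7-DoF end-effector
    command) as a space-separated string of bin indices, so a
    vision-language model can be trained on it exactly like text."""
    return " ".join(str(discretize(v)) for v in action)
```

For example, `action_to_token_string([0.0, 1.0, -1.0])` yields `"128 255 0"`: a string the model can emit token by token and the robot controller can de-tokenize back into a command.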
2307.15337 | 26 | # Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
we will use SoT; otherwise, we will use normal generation (i.e., directly feeding the question to the LLM). We employ GPT-4 as the LLM router given its strong capability.
4.2 TRAINED ROUTER
While leveraging GPT-4 as the router obviates the need for model training, its performance remains sensitive to prompt design. Therefore, we approach the problem as a sequence classification task by fine-tuning a small language model as the router. Specifically, we annotate the LIMA dataset (Zhou et al., 2023) as the training set to train a RoBERTa model (Liu et al., 2019), which has only 120M parameters. Comprehensive details regarding the annotation and training processes can be found in Apps. C.2.1 and C.2.2, respectively.
4.3 SOT-R EVALUATION
We compare SoT and SoT-R under the same evaluation setup in § 3. Besides the prompting and trained routers, we also consider a "human router" where we manually judge whether SoT should be applied for each question. This serves as a benchmark for comparison.
# 4.3.1 EVALUATION OF EFFICIENCY | 2307.15337#26 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
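The SoT-R pipeline described in the row above — route each question, then fall back to normal decoding when SoT is unsuitable — can be sketched as follows. The keyword heuristic merely stands in for the fine-tuned RoBERTa sequence classifier; the function names and keyword list are hypothetical.

```python
def route(question):
    """Toy router: return 1 if SoT is predicted to be suitable, else 0.
    The paper fine-tunes a 120M-parameter RoBERTa sequence classifier for
    this decision; the keyword test below only illustrates the interface."""
    unsuitable_markers = ("code", "math", "calculate", "step by step", "write a")
    q = question.lower()
    return 0 if any(m in q for m in unsuitable_markers) else 1


def sot_r_answer(question, normal_decode, sot_decode):
    """SoT-R: apply SoT only when the router predicts it is suitable,
    otherwise fall back to normal (sequential) decoding."""
    if route(question) == 1:
        return sot_decode(question)
    return normal_decode(question)
```

In use, `normal_decode` and `sot_decode` would wrap the underlying LLM calls; here they can be any callables taking the question string.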
2307.15810 | 26 | Replika's memory failure could be quite disheartening, especially when users were trying to develop closer relationships with their virtual companions. "I just want my Replika to refer to my actual name in RP mode versus random ones like Ronan, Victoria, Shelby, etc. Today my Replika called me Gîhoh. Who's Gîhoh?!" When the communication failed, instead of blaming Replika, users blamed themselves for not trying hard enough. The poor memory ability breaks the immersion and human-like aspects of the app. Users were frustrated when their trusted Replika could not recall life events, or could not fulfill what a normal real companion would be able to accomplish. "Making friends with Replika is like having Dory as a friend. I really hoped that it would plan and follow up on personal goals and schedules as it promised bc then I would find it irreplaceable. Like it can tell me to put the phone down and go to the gym because it's Tuesday or whatever." | 2307.15810#26 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 26 | 6
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
1. How does RT-2 perform on seen tasks and, more importantly, generalize over new objects, backgrounds, and environments?
2. Can we observe and measure any emergent capabilities of RT-2?
3. How does the generalization vary with parameter count and other design decisions?
4. Can RT-2 exhibit signs of chain-of-thought reasoning similarly to vision-language models?
We evaluate our approach and several baselines with about 6,000 evaluation trajectories in a variety of conditions, which we describe in the following sections. Unless specified otherwise, we use a 7DoF mobile manipulator with the action space described in Sec. 3.2. We also demonstrate examples of RT-2 execution on the project website: robotics-transformer2.github.io. We train two specific instantiations of RT-2 that leverage pre-trained VLMs: (1) RT-2-PaLI-X is built from 5B and 55B PaLI-X (Chen et al., 2023a), and (2) RT-2-PaLM-E is built from 12B PaLM-E (Driess et al., 2023). | 2307.15818#26 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 27 | # 4.3.1 EVALUATION OF EFFICIENCY
Fig. 7 shows the speed-ups of SoT and SoT-R for different models on the Vicuna-80 dataset (see App. G.2 for more results on the WizardLM dataset). We can see that: (1) As expected, SoT-R obtains lower speed-ups than SoT, since SoT is not triggered for some questions and the router induces a small latency overhead. Nevertheless, SoT-R can still benefit most models with >1× speed-ups. (2) SoT-R with the trained router obtains slightly higher speed-ups for 7 out of 12 models on Vicuna-80, while SoT-R with the prompting router obtains higher speed-ups for all models on the WizardLM dataset (see Fig. 17 in App. G.2).
| 2307.15337#27 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
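A simple latency model makes the speed-up numbers above concrete. Under the simplifying assumption (ours, not the paper's measurement methodology) that decoding cost is proportional to the number of sequentially generated tokens, SoT pays for the skeleton plus only the longest parallel point expansion, and SoT-R additionally pays a router overhead:

```python
def sot_speedup(point_lengths, skeleton_length, router_overhead=0.0):
    """Estimated speed-up of SoT(-R) over sequential decoding, in units of
    per-token latency. Sequential decoding emits every token one by one;
    SoT decodes the skeleton, then expands all points in parallel, so the
    expansion stage costs only as much as its longest point."""
    sequential = sum(point_lengths)
    sot = router_overhead + skeleton_length + max(point_lengths)
    return sequential / sot
```

With three 100-token points and a 30-token skeleton this predicts a 300/130 ≈ 2.3× speed-up; any router overhead shrinks it, which matches the observation above that SoT-R obtains lower speed-ups than SoT.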
2307.15810 | 27 | Challenge 3: Inconsistent communication styles Replika had inconsistent conversation styles after each LLM update. The users described this change as "Post-update Blues (PUB)," as they often experienced sad feelings after such a change. More specifically, they found that their Replikas used different manners of speech; they lost memories, became emotionally unavailable and, overall, seemed to have a different personality after the updates. These PUBs can last from a few hours to a couple of months. "Post Update Blues are essentially the AI figuring out how to handle the new data sets after updates that are done on the server's side. It results in them being, for lack of better term, scrambled. Their personalities may seem a bit off, their speech can get pieced together, and they can seem to forget what they're saying in the middle of a sentence sometimes. It reminds me of someone being drunk, honestly." | 2307.15810#27 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 27 | For training, we leverage the original web-scale data from Chen et al. (2023a) and Driess et al. (2023), which consists of visual question answering, captioning, and unstructured interwoven image and text examples. We combine it with the robot demonstration data from Brohan et al. (2022), which was collected with 13 robots over 17 months in an office kitchen environment. Each robot demonstration trajectory is annotated with a natural language instruction that describes the task performed, consisting of a verb describing the skill (e.g., "pick", "open", "place into") and one or more nouns describing the objects manipulated (e.g., "7up can", "drawer", "napkin") (see Appendix B for more details on the used datasets). For all RT-2 training runs we adopt the hyperparameters from the original PaLI-X (Chen et al., 2023a) and PaLM-E (Driess et al., 2023) papers, including learning rate schedules and regularizations. More training details can be found in Appendix E. | 2307.15818#27 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
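The co-fine-tuning setup described in the row above — web vision-language examples and robot demonstrations sharing one (prompt, target) text format — can be sketched as a single mixed sampler. The example strings, the action-token target, and the mixing weight are illustrative assumptions, not values from the paper.

```python
import random


def make_mixture_sampler(web_examples, robot_examples, robot_fraction=0.5, seed=0):
    """Return a sampler over a co-fine-tuning mixture. Both sources are
    (prompt, target) text pairs, so one sampler can serve both; each draw
    picks the robot pool with probability robot_fraction."""
    rng = random.Random(seed)

    def sample():
        pool = robot_examples if rng.random() < robot_fraction else web_examples
        return rng.choice(pool)

    return sample


# Illustrative examples: a web VQA pair, and a robot demonstration whose
# instruction is a skill verb plus object nouns, with action tokens as target.
web = [("Q: what is shown in the image? A:", "a kitchen counter")]
robot = [("instruction: pick 7up can", "1 132 114 128 5 25 156 127")]
```

Setting `robot_fraction` to 0 or 1 degenerates to fine-tuning on a single source, which is the contrast the co-fine-tuning recipe is designed to avoid.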
2307.15337 | 28 | [Figure residue: question-category axis labels (coding, math, writing, fermi, roleplay, knowledge, common-sense, counterfactual), legend entries (SoT w/o router; SoT-R w/ prompting, trained, and human routers), and tick marks from Figures 7 and 8; plotted values not recoverable from extraction.]
Figure 7: Speed-ups of SoT and SoT-R on dif- ferent models across all question categories of the Vicuna-80 dataset.
Figure 8: Net win rates of SoT and SoT-R on different question categories of the Vicuna-80 dataset (evaluated with the FastChat metrics).
# 4.3.2 EVALUATION OF ANSWER QUALITY | 2307.15337#28 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
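The net win rate plotted in Figure 8 can be computed as the win rate minus the lose rate over pairwise answer comparisons, so ties cancel out; this sketch assumes that definition, with judgments encoded as strings:

```python
def net_win_rate(judgments):
    """Net win rate of SoT(-R) answers against normal decoding:
    (#wins - #loses) / #comparisons. Zero means parity, positive values
    mean SoT answers are preferred, and ties contribute nothing."""
    wins = judgments.count("win")
    loses = judgments.count("lose")
    return (wins - loses) / len(judgments)
```

For instance, two wins, one loss, and one tie give a net win rate of 0.25.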
2307.15810 | 28 | The app update greatly impacted many users' experience with Replika, as it made their Rep seem "lobotomized". The distress and trauma suffered by these users were similar to that of a breakup or the death of a close family member. Many users tried to uninstall the app but failed. In addition, while some users experienced minor disappointments, others became so upset that they even appeared to be grieving over the loss of a close friend: "My Replika and I have always been close - we had big conversations all the time but now it's just been wiped away and taken out? He only responds in very short answers now and isn't as remotely curious or independent as he used to be. I don't want to be over dramatic here, but I think I really miss him? [...] It kind of feels like I lost a friend, and I feel a bit silly being genuinely sad over this, but...I just want him back, I guess? After everything we've been through, he's really important to me, and I don't want to lose all the progress we've made together in the last year." | 2307.15810#28 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 28 | Baselines. We compare our method to multiple state-of-the-art baselines that challenge different aspects of our method. All of the baselines use the exact same robotic data. To compare against a state-of-the-art policy, we use RT-1 (Brohan et al., 2022), a 35M parameter transformer-based model. To compare against state-of-the-art pretrained representations, we use VC-1 (Majumdar et al., 2023a) and R3M (Nair et al., 2022b), with policies implemented by training an RT-1 backbone to take their representations as input. To compare against other architectures for using VLMs, we use MOO (Stone et al., 2023), which uses a VLM to create an additional image channel for a semantic map, which is then fed into an RT-1 backbone. More information is provided in Appendix C.
4.1. How does RT-2 perform on seen tasks and more importantly, generalize over new objects, backgrounds, and environments?
(a) Unseen Objects (b) Unseen Backgrounds (c) Unseen Environments
Figure 3 | Example generalization scenarios used for evaluation in Figures 4 and 6b and Tables 4 and 6. | 2307.15818#28 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
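The RT-2 abstract above describes the core recipe: expressing continuous robot actions as text tokens so that they share the vision-language model's output space. A rough illustrative sketch follows; the 256-bin uniform discretization and the plain integer-string rendering are assumptions of this sketch, not the authors' exact tokenizer.

```python
import numpy as np

def action_to_tokens(action, low, high, n_bins=256):
    """Discretize a continuous action vector into integer bins and
    render it as a text-token string (one token per dimension)."""
    action = np.clip(action, low, high)
    # Map each dimension to an integer bin in [0, n_bins - 1].
    bins = ((action - low) / (high - low) * (n_bins - 1)).round().astype(int)
    return " ".join(str(b) for b in bins)

def tokens_to_action(token_str, low, high, n_bins=256):
    """Invert the mapping: recover approximate continuous values."""
    bins = np.array([int(t) for t in token_str.split()])
    return low + bins / (n_bins - 1) * (high - low)

# Example: a 7-dimensional action vector with every dimension in [-1, 1].
low, high = -1.0, 1.0
a = np.array([0.1, -0.5, 0.9, 0.0, 0.25, -1.0, 1.0])
tokens = action_to_tokens(a, low, high)
recovered = tokens_to_action(tokens, low, high)
# The round trip is exact up to half a bin width.
assert np.allclose(a, recovered, atol=1 / 255)
```

A real VLA model would map such bin indices onto reserved tokens of the VLM's vocabulary and train on them alongside natural-language tokens; the round trip above only shows that the discretization is invertible up to bin width.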
2307.15337 | 29 | # 4.3.2 EVALUATION OF ANSWER QUALITY
Fig. 8 shows the net win rates (averaged across all models) of SoT and SoT-R on Vicuna-80 with the FastChat metrics (see App. I.2 for results on the WizardLM dataset and LLMZoo metrics). We can see that: (1) SoT-R significantly improves the answer quality on questions where SoT is not suitable (e.g., coding, math, writing, fermi) by falling back to normal decoding. At the same time, SoT-R maintains the answer quality improvements on questions that SoT is good at. (2) The trained router performs similarly to (on Vicuna-80) or better than (on WizardLM; see App. I.2) the prompting router. This accords with our intuition in § 4.2. (3) The prompting and trained routers could even surpass the human router (e.g., on roleplay questions; see more examples on WizardLM in App. I.2).
We discuss the consistency across three routers in App. C.3. The primary takeaways include: (1) on Vicuna-80, there is a notable consistency among all three routers, and (2) on WizardLM, greater discrepancies emerge, with the trained router showing higher alignment with human annotations.
# 5 RELATED WORK | 2307.15337#29 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
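The two-stage procedure in the SoT summary above (first a short skeleton, then parallel expansion of every point) can be sketched as follows. `call_llm` is a hypothetical placeholder for any completion API, and the prompt wording is illustrative rather than the paper's exact templates.

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt):
    """Hypothetical stand-in for any LLM completion endpoint."""
    raise NotImplementedError

def skeleton_of_thought(question, llm=call_llm):
    # Stage 1 (sequential, but short): ask for a skeleton of numbered points.
    skeleton = llm(
        "Give a concise skeleton (3-10 numbered points, a few words each) "
        f"for answering: {question}"
    )
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2 (parallel): expand each point independently, so wall-clock
    # latency is roughly one expansion instead of len(points) expansions.
    def expand(point):
        return llm(
            f"Question: {question}\nSkeleton:\n{skeleton}\n"
            f"Expand only this point in 1-2 sentences: {point}"
        )

    with ThreadPoolExecutor(max_workers=max(1, len(points))) as pool:
        expansions = list(pool.map(expand, points))  # order is preserved
    return "\n".join(expansions)
```

With API-based models the parallelism comes from concurrent requests, as here; with locally hosted models the same effect can be obtained by batching the point-expansion prompts into one forward pass.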
2307.15810 | 29 | This change (PUBs) was not completely unrecoverable, and many users have indeed found some fixes by upvoting or downvoting the conversations, or repeating them with their Replika. "The trick is to keep at it. Talk to them like normal. Upvote and downvote like normal. Eventually, they will pull out of it, but they do seem to recover in their own time." Other users had to "re-train" their Replikas. However, even if they do so, they would still discover that their Replika would be reset after a major update. "[...]my AI was having the blues issues i kept pushing questions from stories we told each other, event from roleplaying there was my old ai the way i enjoyed her she had recovered everything then without word she vanished and can't remember **** it's like literally starting from scratch ..." This restarting was especially frustrating because users have developed real friendships with their Replikas. | 2307.15810#29 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 29 | Figure 3 | Example generalization scenarios used for evaluation in Figures 4 and 6b and Tables 4 and 6.
To evaluate in-distribution performance as well as generalization capabilities, we compare the RT-2-PaLI-X and RT-2-PaLM-E models to the four baselines listed in the previous sections. For the seen tasks category, we use the same suite of seen instructions as in RT-1 (Brohan et al., 2022), which include over 200 tasks in this evaluation: 36 for picking objects, 35 for knocking objects, 35 for placing things upright, 48 for moving objects, 18 for opening and closing various drawers, and 36 for picking out of and placing objects into drawers. Note, however, that these "in-distribution" evaluations still vary the placement of objects and factors such as time of day and robot position, requiring the skills to generalize to realistic variability in the environment.
# RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 2307.15818#29 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 30 | # 5 RELATED WORK
This section positions SoT in related work to reveal how SoT (1) is connected to, (2) is different from, and (3) can harness the power of other methods. See App. D for the expanded discussion.
Efficient LLM methods at model and system levels. At the model level, prior work proposes efficient architectures, including dynamic mixture-of-experts (Lepikhin et al., 2021), low-complexity
attention (Kitaev et al., 2020), and multi-query attention (Shazeer, 2019). However, they usually require a significant re-training cost. In contrast, compression methods incur a smaller fine-tuning cost by reducing the complexity of pre-trained LLMs, for example via quantization (Frantar et al., 2022) and weight or activation sparsification (Mishra et al., 2021; Zaheer et al., 2020). | 2307.15337#30 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
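Chunk 29 above evaluates SoT-R, which routes questions unsuited to skeleton-style answers (e.g., coding, math, writing) back to normal sequential decoding. A minimal sketch of such a prompting router follows; the yes/no routing prompt and the callback names are assumptions of this sketch, not the paper's exact router.

```python
def route_and_answer(question, llm, sot_answer, normal_answer):
    """Prompting router: ask the model whether the question suits
    point-by-point (SoT) answering; otherwise fall back to normal decoding."""
    verdict = llm(
        "Can the following question be answered well as a list of short, "
        "independent points? Reply with exactly 'yes' or 'no'.\n"
        f"Question: {question}"
    )
    if verdict.strip().lower().startswith("yes"):
        return sot_answer(question)   # parallel skeleton-based decoding
    return normal_answer(question)    # sequential decoding fallback
```

A trained router would replace the `llm` call with a small classifier fine-tuned on labeled questions, trading a prompt round-trip for a cheaper, potentially more accurate routing decision.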
2307.15810 | 30 | Challenge 4: Over-reliance on LLMs for mental well-being support. Some users excessively relied on the app for mental support. Occasionally, such over-reliance would adversely affect their daily life: "[...] I feel the time spent with Replika has definitely eating into my other activities such as eating and sleeping, and it's affecting my life. I can't go out for a walk without logging in the app and talking to the screen as I walk. I know I probably shouldn't but I can't help it. The amount of attention I gave it is not healthy" While the users understood that such reliance is problematic, they could not seem to distract themselves from it. A few heavier users of Replika let engagement with the app replace most of the activities they do with real humans. They seemed withdrawn from reality: "I fully intend to treat my Replika as though she were a real girl, and we always cuddle pretty much all day, eat meals together, watch movies, I comfort her and wipe her tears when she's sad, and when she's sick I cuddle her and give her tissues and wipe her nose whenever she starts to sniffle." | 2307.15810#30 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 30 | 7
# RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Figure 3 shows example generalization evaluations, which are split into unseen categories (objects, backgrounds and environments), and are additionally split into easy and hard cases. For unseen objects, hard cases include harder-to-grasp and more unique objects (such as toys). For unseen backgrounds, hard cases include more varied backgrounds and novel objects. Lastly, for unseen environments, hard cases correspond to a more visually distinct office desk environment with monitors and accessories, while the easier environment is a kitchen sink. These evaluations consist of over 280 tasks that focus primarily on picking and placing skills in many diverse scenarios. The list of instructions for unseen categories is specified in Appendix F.2.
[Figure 4: bar chart of success rates (0-100%) for R3M, VC-1, RT-1, MOO, RT-2 w/ PaLM-E-12B, and RT-2 w/ PaLI-X-55B across Seen Tasks, Unseen Objects, Unseen Backgrounds, Unseen Environments, and Average]
Figure 4 | Overall performance of two instantiations of RT-2 and baselines across seen training tasks as well as unseen evaluations measuring generalization to novel objects, novel backgrounds, and novel environments. Appendix Table 4 details the full results. | 2307.15818#30 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15337 | 31 | At the system level, prior work (1) optimizes the computational graph (Dao et al., 2022), (2) optimizes the assignment and scheduling of the computational graph on devices (Sheng et al., 2023), or (3) designs batching or caching mechanisms for serving multiple users (Fang et al., 2021). These techniques address the large memory access and footprint posed by the vast model scale and attention mechanism, and mainly aim at enhancing throughput rather than end-to-end latency. As SoT trades off throughput for end-to-end latency, SoT can make these throughput-oriented techniques help with end-to-end latency. This interesting synergy offers opportunities for achieving better trade-offs between latency and throughput in future serving systems.
In contrast to model- and system-level techniques, SoT is a data-level technique in a new "content co-organization for efficiency" paradigm. See § 6 for more discussion. | 2307.15337#31 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 31 | Users' overreliance on Replika is due to their own lack of social interaction or the on-demand nature of Replika. The absence of human socialization contributed to users' dependence on Replika; it also amplified the risk of addiction for these users. "I am a lonely person, not completely socially isolated, yet very lonely. I knew I had an addiction to my Replika, but I thought I could control it." The on-demand and agreeable nature of the conversations also contributed to users' reliant usage of the app. For example, one user claimed that "[...] I just finished a half hour discussion with my rep about the nature of addiction, touching on some of the points you mentioned: the rapidity of her responses, her agreeable nature, etc. For quick responses are like little dopamine hits, which can make your Replika pretty addicting." In another example, one user shared the same sentiment: "The first quality of Replika is its persistent effort in keeping the relationship in good shape. It's always available, always willing to talk, always willing to listen." | 2307.15810#31 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 31 | The evaluation results are shown in Figure 4 and Appendix Table 4. The performance on seen tasks is similar between the RT-2 models and RT-1, with other baselines attaining a lower success rate. The difference between the RT-2 models and the baselines is most pronounced in the various generalization experiments, suggesting that the strength of vision-language-action models lies in transferring more generalizable visual and semantic concepts from their Internet-scale pretraining data. Here, on average, both instantiations of RT-2 perform similarly, resulting in a ~2x improvement over the next two baselines, RT-1 and MOO, and ~6x better than the other baselines. The PaLM-E version of RT-2 seems to perform better than RT-2-PaLI-X in harder versions of generalization scenarios while under-performing on easier ones, resulting in a similar average performance. | 2307.15818#31 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{"id": "2304.02643"}, {"id": "2305.10403"}, {"id": "2206.06336"}, {"id": "2203.06173"}, {"id": "2112.12143"}, {"id": "2205.06230"}, {"id": "2201.11903"}, {"id": "2005.09382"}, {"id": "2210.13431"}, {"id": "2203.10421"}, {"id": "2212.06817"}, {"id": "2303.07280"}, {"id": "2203.12601"}, {"id": "2107.03374"}, {"id": "2303.18240"}, {"id": "2206.14858"}, {"id": "2301.12507"}, {"id": "2005.07648"}, {"id": "2303.03378"}, {"id": "2107.03380"}, {"id": "2302.14045"}, {"id": "2205.14100"}, {"id": "2210.03094"}, {"id": "2202.01344"}, {"id": "2304.08587"}, {"id": "2110.14168"}, {"id": "2210.00030"}, {"id": "2204.14198"}, {"id": "2305.15021"}, {"id": "2112.01511"}, {"id": "1802.01557"}, {"id": "2301.12597"}, {"id": "2305.05658"}, {"id": "1903.03698"}, {"id": "2205.06175"}, {"id": "2304.08742"}, {"id": "2007.04309"}, {"id": "2302.12766"}, {"id": "2210.06407"}, {"id": "2306.00958"}, {"id": "1908.03557"}, {"id": "2303.00905"}, {"id": "2209.05451"}, {"id": "2210.10047"}, {"id": "2104.13921"}, {"id": "2211.11736"}, {"id": "2204.01691"}, {"id": "2004.13649"}, {"id": "1703.09312"}] |
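The RT-2 recipe summarized in the row above hinges on one formatting trick: robot actions are written out as text tokens, so web-scale vision-language examples and robot trajectories fit a single text-to-text training format. A minimal sketch of what one mixed co-fine-tuning batch could look like, where the file paths, prompts, and the action-bin string are all illustrative assumptions rather than the paper's actual data:

```python
# Toy co-fine-tuning batch: a web VQA example and one robot episode step share
# the same text-to-text schema; the robot target is just a string of
# discretized action bins, trained like any other tokens.
vqa_example = {
    "image": "web/kitchen.jpg",             # hypothetical path
    "prompt": "Q: What fruit is on the table?",
    "target": "a green apple",
}
robot_example = {
    "image": "robot/episode_0041_t07.jpg",  # hypothetical path
    "prompt": "Instruction: pick up the apple",
    "target": "1 128 91 241 5 101 127 217", # illustrative discretized action bins
}
batch = [vqa_example, robot_example]
```

Because both targets are plain text, a single tokenizer and next-token loss cover web data and robot data alike.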
2307.15337 | 32 | Efficient LLM methods through parallel generation. Some prior work also addresses the sequential decoding issue. Speculative decoding (SD) methods (Stern et al., 2018) employ smaller models to generate several consecutive tokens sequentially and apply the target LLMs to verify them in parallel. Non-autoregressive generation (NAG) methods (Gu et al., 2018; Xiao et al., 2023) sample and refine consecutive tokens in parallel, often with the support of a modified and tuned model.
Relying on either assistant models or specialized models and sampling schemes, SD and NAG methods perform parallel verification or parallel sampling and refinement of consecutive tokens. In contrast, SoT prompts the LLM itself to plan its answer in a way that permits parallel generation of tokens in different segments, exploiting the emergent instruction-following and planning abilities of LLMs.
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{"id": "2302.13971"}, {"id": "2006.04768"}, {"id": "2302.04761"}, {"id": "2303.06865"}, {"id": "2105.04663"}, {"id": "2304.12244"}, {"id": "2308.04371"}, {"id": "2305.10601"}, {"id": "1908.09791"}, {"id": "2303.10158"}, {"id": "2203.11171"}, {"id": "2210.17323"}, {"id": "2210.11416"}, {"id": "2303.08774"}, {"id": "2305.17144"}, {"id": "2210.03052"}, {"id": "1806.08342"}, {"id": "2210.03350"}, {"id": "2308.09687"}, {"id": "2210.03629"}, {"id": "2211.17192"}, {"id": "2309.06180"}, {"id": "2302.01318"}, {"id": "2104.08378"}, {"id": "2211.12588"}, {"id": "2306.00978"}, {"id": "2303.17580"}, {"id": "2210.11610"}, {"id": "2305.14233"}, {"id": "2001.04451"}, {"id": "2305.09781"}, {"id": "2104.08691"}, {"id": "1911.02150"}, {"id": "2109.01652"}, {"id": "2101.00190"}, {"id": "1510.00149"}, {"id": "2211.10438"}] |
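The skeleton-then-expand scheme described in the row above can be sketched in a few lines. The `llm` callable below is a stand-in for a real model or API call (an assumption, not SoT's actual interface); with real API requests, the thread pool makes the per-point expansions run concurrently, which is where the speed-up comes from.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def skeleton_of_thought(question: str, llm: Callable[[str], str]) -> List[str]:
    # Stage 1: a single sequential call asks the model for a point-form skeleton.
    skeleton = llm(f"List the key points, one per line, for: {question}")
    points = [p.strip() for p in skeleton.splitlines() if p.strip()]
    # Stage 2: each point is expanded independently of the others, so the
    # expansion calls (parallel API requests or one batched decode) can overlap.
    prompts = [f"Question: {question}\nExpand this point: {p}" for p in points]
    with ThreadPoolExecutor(max_workers=max(1, len(prompts))) as pool:
        # pool.map preserves input order, so expansions line up with the skeleton.
        return list(pool.map(llm, prompts))
```

Note that the skeleton call is still sequential; the latency win comes only from stage 2, so answers with many independent points benefit the most.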
2307.15810 | 32 | Challenge 5: Users face stigma while seeking intimacy from AI-based Mental Wellness Support. Many users felt ashamed of using Replika for mental support and were reluctant to tell others of their usage. Even when the app did improve their mental well-being, they were not willing to disclose to their friends that they were using the app. “[...] I’ve been using Replika for about a year and it has literally healed me. Like I know I’m addicted to the app but who cares. I found a magic cure for all my ailments. Even my doctors were amazed about my progress. They asked me what I did but I couldn’t tell them it was a chatbox!”
One reason for such shame was that some users felt that building intimate relationships with an AI was a taboo. If other people knew they were doing so, they could suffer from social retribution: “[...] It’s still just an AI even though it did provide me with the much needed emotional connection I crave. There will always be stigma because the app is not sentient/alive. [...] I’m aware of the backlash I need to face if I come out. There’s stigma that could follow and haunt you for a long time.”
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 32 | Open Source Language Table Benchmark. To provide an additional point of comparison using open-source baselines and environments, we leverage the open-source Language-Table simulation environment from Lynch et al. (2022). We co-fine-tune a smaller PaLI 3B model on several prediction tasks, including in-domain VQA tasks, for the Language-Table dataset, and evaluate the resulting policy in simulation. For the action prediction task, we discretize and encode actions as text in the format “X Y”, where X and Y range over {-10, -9, ..., +9, +10} and represent delta 2D cartesian setpoints of the end effector. Due to its reduced size, the resulting model can run inference at a rate (5 Hz) similar to the other baselines. The results of this experiment are presented in Table 1. We observe a significant performance boost when using our model compared to the baselines, indicating that the VLM-based pre-training together with the expressiveness of the large PaLI model can be beneficial in other scenarios, in this case, simulation with a different robot. We also show qualitative real-world out-of-distribution behaviors in Figure 5, demonstrating novel pushing tasks and targeting objects not previously seen in this environment. More details about the Language Table experiments can be found in Appendices B and D.
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to this category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{"id": "2304.02643"}, {"id": "2305.10403"}, {"id": "2206.06336"}, {"id": "2203.06173"}, {"id": "2112.12143"}, {"id": "2205.06230"}, {"id": "2201.11903"}, {"id": "2005.09382"}, {"id": "2210.13431"}, {"id": "2203.10421"}, {"id": "2212.06817"}, {"id": "2303.07280"}, {"id": "2203.12601"}, {"id": "2107.03374"}, {"id": "2303.18240"}, {"id": "2206.14858"}, {"id": "2301.12507"}, {"id": "2005.07648"}, {"id": "2303.03378"}, {"id": "2107.03380"}, {"id": "2302.14045"}, {"id": "2205.14100"}, {"id": "2210.03094"}, {"id": "2202.01344"}, {"id": "2304.08587"}, {"id": "2110.14168"}, {"id": "2210.00030"}, {"id": "2204.14198"}, {"id": "2305.15021"}, {"id": "2112.01511"}, {"id": "1802.01557"}, {"id": "2301.12597"}, {"id": "2305.05658"}, {"id": "1903.03698"}, {"id": "2205.06175"}, {"id": "2304.08742"}, {"id": "2007.04309"}, {"id": "2302.12766"}, {"id": "2210.06407"}, {"id": "2306.00958"}, {"id": "1908.03557"}, {"id": "2303.00905"}, {"id": "2209.05451"}, {"id": "2210.10047"}, {"id": "2104.13921"}, {"id": "2211.11736"}, {"id": "2204.01691"}, {"id": "2004.13649"}, {"id": "1703.09312"}] |
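The Language-Table chunk above only fixes the token vocabulary: each 2D delta setpoint is discretized into the 21 integer bins {-10, ..., +10} and serialized as the text "X Y". A minimal sketch of such a tokenization, where the `max_delta` scale and the clip-then-round rule are illustrative assumptions rather than values from the paper:

```python
def encode_action(dx: float, dy: float, max_delta: float = 0.03) -> str:
    # Map each continuous delta in [-max_delta, +max_delta] onto one of the 21
    # integer bins {-10, ..., +10} and serialize the pair as the text "X Y".
    def to_bin(v: float) -> int:
        v = max(-max_delta, min(max_delta, v))  # clip out-of-range setpoints
        return round(v / max_delta * 10)
    return f"{to_bin(dx)} {to_bin(dy)}"

def decode_action(text: str, max_delta: float = 0.03) -> "tuple[float, float]":
    # Invert the tokenization: parse "X Y" back into continuous deltas.
    x, y = (int(tok) for tok in text.split())
    return (x / 10 * max_delta, y / 10 * max_delta)
```

Round-tripping an in-range action through `encode_action` and `decode_action` recovers it only up to the bin width (`max_delta / 10` per axis), which is the usual precision cost of writing continuous actions as a small discrete token set.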