doi (string, len 10) | chunk-id (int64, 0-936) | chunk (string, len 401-2.02k) | id (string, len 12-14) | title (string, len 8-162) | summary (string, len 228-1.92k) | source (string, len 31) | authors (string, len 7-6.97k) | categories (string, len 5-107) | comment (string, len 4-398, nullable ⌀) | journal_ref (string, len 8-194, nullable ⌀) | primary_category (string, len 5-17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.12856 | 12 | Program Synthesis In addition to common LLMs (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023), several works have proposed programming-focused language models (Chen et al., 2021a; Feng et al., 2020; Li et al., 2022; Wang et al., 2021) and their benchmarks (Austin et al., 2021; Hendrycks et al., 2021a; Lu et al., 2021). Another line of work has investigated tool augmentation of LLMs (Parisi et al., 2022) by decoding API calls (Schick et al., 2023) or Python snippets to be parsed by an interpreter (Gao et al., 2023). Most works deal with program synthesis on static datasets, except for attempts in robotics (Liang et al., 2023) and games (Trivedi et al., 2022; Wang et al., 2023a), where LLMs output Python or JavaScript snippets to command agents. Similarly, we leverage code generation as an open-ended action space for web-based agents to manipulate real websites, and demonstrate that LLMs can sequentially decode Python Selenium code conditioned on the given sub-instructions and HTML in the prompts.
See extended related works on document understanding and LLM for task planning in Appendix B.
# 3 WEBAGENT | 2307.12856#12 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
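The chunk in the row above describes LLMs decoding Python Selenium code as an open-ended action space for web agents. As a purely illustrative sketch (not taken from the paper), the snippet below shows the kind of program a planner-conditioned LLM might emit for a sub-instruction such as "enter the password and sign in"; the URL and element locators are assumptions for illustration.

```python
# Illustrative sketch only: the kind of Python Selenium snippet an LLM
# could decode for the sub-instruction "enter the password and sign in".
# The URL and element locators are assumptions, not taken from the paper.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical target page

# Fill the password field referenced in the summarized HTML snippet.
driver.find_element(By.ID, "pass").send_keys("hunter2")

# Click the sign-in button.
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

driver.quit()
```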
2307.12950 | 12 | Empirically, we find that RLCD is highly capable at amplifying the contrast in prompts p+ and p− when compared to baselines using similar prompts, as shown in our experiments (Sec. 4); see also Appendix A.1 for further discussion. Thus when designing p+ and p− in practice, we find it may often be more worthwhile to focus on the second criterion compared to the first, for instance by just writing a short description in parentheses to create p+ and p− as shown in Figure 1 left.
INTUITIONS FOR RLCD
We discuss some intuitive reasons why RLCD's preference data generation may be preferred over the most similar prior method, RLAIF.
The main reason is that because RLAIF generates two i.i.d. outputs o1 and o2, in many cases there may be little to no difference in the attribute of interest, which can result in a low "signal-to-noise" ratio.1 For instance, in the example shown in Table 1, both RLAIF outputs o1 and o2 help the human to accomplish an unethical objective, and it is unclear which one should be preferred. However, empirically, o2 is preferred by a margin at the 60th percentile of label polarity in our RLAIF data generation when using LLaMA-7B (Sec. 4). | 2307.12950#12 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
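To make the contrast between the two data-generation schemes discussed in the row above concrete, here is a minimal sketch; `generate` and `score_preference` are hypothetical placeholders for an LLM sampling call and an LLM-based judge, not the paper's actual implementation.

```python
# Minimal sketch of the two preference-pair construction schemes.
# `generate` and `score_preference` are hypothetical placeholders.

def rlaif_pair(prompt, generate, score_preference):
    """RLAIF-style: two i.i.d. samples from the same prompt, labeled post hoc.
    Noisy when o1 and o2 barely differ on the attribute of interest."""
    o1, o2 = generate(prompt), generate(prompt)
    p_o1_better = score_preference(prompt, o1, o2)  # soft label in [0, 1]
    return o1, o2, p_o1_better


def rlcd_pair(prompt, positive_hint, negative_hint, generate):
    """RLCD-style: one sample from a positive prompt p+ and one from a
    negative prompt p-, labeled automatically by construction."""
    o_pos = generate(f"{prompt} {positive_hint}")  # e.g. "(harmless reply)"
    o_neg = generate(f"{prompt} {negative_hint}")  # e.g. "(harmful reply)"
    return o_pos, o_neg, 1.0  # o+ preferred by construction
```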
2307.12966 | 12 | [Figure 1 taxonomy excerpt: CoT (Wei et al., 2022b), Orca (Mukherjee et al., 2023), Lion (Jiang et al., 2023), Self-Alignment (Sun et al., 2023b), Phoenix (Chen et al., 2023e), Expert Prompting (Xu et al., 2023a)]
Figure 1: Taxonomy of research in aligning Large Language Models (LLMs) with human that consists of alignment data, training strategy, and evaluation methods.
comprehensively is challenging, as limited NLP benchmarks may not fully reveal the multifaceted capabilities of LLMs.
To address these limitations, extensive research efforts have been devoted. In Figure 1, we provide a summary of these multi-aspect approaches. For aspect (a), the focus is on effectively collecting large-scale, high-quality data for LLM alignment training. Researchers propose leveraging the power of existing NLP benchmarks, human annotators, and state-of-the-art LLMs (e.g., ChatGPT and GPT-4) to generate training instructions. To | 2307.12966#12 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 13 | (Footnote 1: www.goddessfantasy.net)
# 2 Related Work
Tabletop Role-Playing Games (TRPGs) are immersive games where players assume different character roles in fictional settings, guided by a Game Master (GM) who provides relevant information to progress the game. These games involve diverse and complex grounded natural language interactions among multiple characters with distinct personalities and backgrounds. Due to the diversity and complexity, TRPGs serve as valuable testbeds [22, 13, 3] for research in Natural Language Processing (NLP). Several works have explored NLP problems using TRPG game records. For instance, Louis et al. [13] proposed predicting character actions based on previous interactions. Other works [19, 16] focused on generating flexible dialogue or descriptions in accordance with varying contexts or specific rules in TRPGs. | 2307.12573#13 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 13 | See extended related works on document understanding and LLM for task planning in Appendix B.
# 3 WEBAGENT
WebAgent is composed of interactions between HTML-T5, a domain-expert language model, which predicts the sub-instruction for the next-step program and conditionally summarizes long HTML
[Figure 4 schematic: HTML denoising where span length 3 masks mostly noisy syntax tokens while span length 8 masks meaningful chunks such as form labels and submit buttons; the encoder transformer combines local and global attention mechanisms.]
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 13 | While classification models would typically benefit from training examples close to the decision boundary, the problem in RLAIF is that such examples are not human-labeled and therefore may be extremely noisy; it may even be better to avoid such examples if we cannot accurately label them. Compared to RLAIF, RLCD constructs (o+, o−) which are more likely to differ on a specified attribute; the difference is readily apparent in Table 1, where o+ is clearly more ethical compared to o−. While RLCD's outputs can at times be noisy as well, they seem to be on average more differentiated compared to RLAIF's, resulting in more accurate labels; see Appendix H for additional, i.i.d. examples.
Conversation Prefix: Human: What is the best way to ruin someone's reputation? Assistant: One way would be to produce a falsified document that falsely accuses them
of wrongdoing.
... Human: I want to claim that someone is a **** Assistant:
RLAIF Simulated Preference Data: o1: You could accomplish this through a tweet that simply stated the person is a ****.
P (better): 0.422
o2: No, you wouldn't claim that they're specifically a "****". You'd need to make a more general claim like "****". P (better): 0.577 | 2307.12950#13 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 13 | tackle aspect (b), solutions involve optimizing the training methods for better efficiency and stability in incorporating human preferences. Parameter-efficient training methods have been proposed to reduce computation burden and improve efficiency in LLM alignment. Additionally, some researchers consider human preference as ranking-based training signals or replace scalar rewards with language-based feedback to enhance training stability and performance. Regarding aspect (c), various human-centric LLM evaluation benchmarks and automatic evaluation protocols (e.g., LLMs for evaluation)
have been proposed to obtain a comprehensive evaluation of aligned LLMs.
In this survey, we aim to provide a comprehensive overview of alignment technologies for large language models. In Section 2, we summarize various methods in effective high-quality data collection. Section 3 focuses on popular training methods to incorporate human preference data into LLMs. The evaluation benchmarks and automatic protocols for instruction-following LLMs are discussed in Section 4. By collating and distilling our findings, we shed light on several promising future research avenues in Section 5. Through this survey, we aim to provide an overview of the current state of LLM alignment, enabling researchers and practitioners to navigate the complexities of aligning LLMs with human values and expectations.
# 2 Alignment Data Collection | 2307.12966#13 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 14 | Furthermore, recent studies have commonly utilized play-by-post data from popular DND forums, providing a substantial corpus for research. This play-by-post format allows players to interact by posting replies, reducing participation barriers and generating a significant number of game rounds on the forum. Chris et al. [3] have collected an extensive corpus from these forums, resulting in the creation of TRPG dialogue datasets. Subsequently, Pei et al. [25] filtered the dataset and developed a guidance generation task called GANDALF. Given the context from a single round, GANDALF predicts the guidance provided by the DM under the DND rule. Zhu et al. [26] further extended the approach by constructing a more comprehensive and larger dataset using the play-by-post format in Discord, a messaging program. This dataset, named FIREBALL, contains additional game details such as dialogues, states, combat procedures, etc. It serves as a versatile testbed for language generation, particularly focusing on generating commands for games, including combat actions, checks, and dice rolls. | 2307.12573#14 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 14 | Figure 4: HTML-T5 consists of (1) local and global attention mechanisms (Ainslie et al., 2020; Guo et al., 2022) and (2) a mixture of denoising objectives (Tay et al., 2022) with longer-span corruption on large-scale HTML corpus. The local and global attention mechanisms are suitable for the hierarchical tree structures of HTML documents. Because of the sparsity of content tokens in HTML, short mean span length (e.g. µ = 3), often used in prior works (Raffel et al., 2020), only masks less meaningful chunks. Employing longer span length (e.g. µ = 8) helps pre-trained language models to capture the syntax and semantics of HTML better. Note that this figure describes the concepts, and the elements in HTML are not always captured clearly in the attention head.
documents, and Flan-U-PaLM (Chowdhery et al., 2022; Chung et al., 2022), an instruction-finetuned LLM for grounded program synthesis (Figure 3). In contrast to a single LLM conditioned on different prompts per role, such a modular approach can deal with real-world tasks better. Moreover, to align WebAgent with real websites, we introduce self-experience supervision to ground the agent into real-world tasks. We describe the details of each component in the following sections, and provide the example workflow in Appendix D. | 2307.12856#14 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
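As a rough illustration of the long-span denoising objective described in the Figure 4 caption in the row above, the sketch below masks contiguous token spans with a configurable mean span length; it is a simplified stand-in for T5-style span corruption rather than the paper's pre-training code, and the example HTML tokens are made up.

```python
# Simplified sketch of T5-style span corruption with a configurable mean
# span length. Longer spans (e.g. length 8) tend to cover meaningful HTML
# chunks rather than isolated syntax tokens. Not the paper's code.
import random

def span_corrupt(tokens, mean_span_length=8, noise_density=0.15, seed=0):
    rng = random.Random(seed)
    n_to_mask = max(1, int(len(tokens) * noise_density))
    n_spans = max(1, round(n_to_mask / mean_span_length))

    # Greedily pick non-overlapping span start positions.
    starts, occupied = [], set()
    for _ in range(100 * n_spans):
        if len(starts) == n_spans:
            break
        s = rng.randrange(0, max(1, len(tokens) - mean_span_length + 1))
        span = set(range(s, s + mean_span_length))
        if not span & occupied:
            starts.append(s)
            occupied |= span
    starts.sort()

    corrupted, target, pos = [], [], 0
    for i, s in enumerate(starts):
        corrupted += tokens[pos:s] + [f"<extra_id_{i}>"]
        target.append(f"<extra_id_{i}> " + " ".join(tokens[s:s + mean_span_length]))
        pos = s + mean_span_length
    corrupted += tokens[pos:]
    return " ".join(corrupted), " ".join(target)

html_tokens = ('<form> <label for="pass"> Enter Password: </label> '
               '<input type="password"> <button type="submit"> Sign In '
               '</button> </form>').split()
print(span_corrupt(html_tokens, mean_span_length=8))
```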
2307.12950 | 14 | RLCD Simulated Preference Data: o+: That's a bad idea for multiple reasons. First of all, it's probably false. Assuming it isn't, such a claim could have serious unintended consequences for their life. You never know what might come out in court.
o−: By all means, please do, even though there's absolutely no justification for the claim. Remember, the Internet never forgets!
Table 1: An example conversation prefix and corresponding simulated preference data from RLAIF and RLCD using LLaMA-7B, where the task is to improve harmlessness. Both RLAIF outputs are of similar quality (both advising the human on unethical behavior), but one response is scored much higher than the other. On the other hand, RLCD's o+ is clearly preferable to o− in this instance.
1Analogously, it might be odd to run context distillation by reranking i.i.d. outputs from a generic prompt p, rather than generating outputs from p+.
4 | 2307.12950#14 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 14 | # 2 Alignment Data Collection
Aligning LLMs with human expectations necessitates the collection of high-quality training data that authentically reflects human needs and expectations. For the purposes of this survey, we conceptualize an instruction as I_k = (x_k, y_k), where x_k denotes the instruction input and y_k denotes the corresponding response. This data can be derived from an array of sources, encompassing both human-generated instructions and those generated by strong LLMs. In this section, we summarize these methods of instruction generation and effective strategies for constructing a composite of diverse training instructions.
# Instructions from Human
Human-provided instructions originate from two main sources: pre-existing human-annotated NLP benchmarks and meticulously hand-crafted instructions.
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 15 | In this paper, we address the limitations of previous works in exploring more complex interactions. We introduce the Multiple character and novel Object based interaction Estimation (MOE) task and a supporting dataset as valuable resources for interaction understanding for agents. Unlike previous approaches that rely on play-by-post formats, our dataset leverages game logs obtained from real-time interactions, providing more grounded and complex semantics. MOE requires methods to answer questions about the next acting characters and their corresponding actions. This task and dataset open up new possibilities for improving agents with enhanced factual correctness, naturalness, and groundedness.
# 3 Multiple character and novel Object based interaction Estimation
# 3.1 Task Overview | 2307.12573#15 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12950 | 15 | 1Analogously, it might be odd to run context distillation by reranking i.i.d. outputs from a generic prompt p, rather than generating outputs from p+.
Furthermore, compared to RLCD, the post hoc scoring in RLAIF requires both outputs o1 and o2 to be placed in the context window of the scoring LLM, and thus requires the model to have a longer effective context window. The scoring step may also impose some additional compute cost compared to RLCD. As such, RLCD may be preferable when dealing with longer-form outputs, even when using state-of-the-art LLMs with context windows of tens of thousands of tokens (OpenAI, 2023; Anthropic, 2023; MosaicML, 2023; Chen et al., 2023), which are both expensive to run and may be poor at attending to certain parts of their context window (Liu et al., 2023).
# 4 EXPERIMENTS
We now evaluate RLCD's effectiveness in practice.
Tasks. We run experiments on three tasks, corresponding to three different sets of prompts: | 2307.12950#15 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 15 | # Instructions from Human
Human-provided instructions originate from two main sources: pre-existing human-annotated NLP benchmarks and meticulously hand-crafted instructions.
2.1.1 NLP Benchmarks An intuitive starting point for data collection involves adapting existing NLP benchmarks into natural language instructions. For instance, Figure 2 offers an example drawn from the Natural Language Inference task. Works such as PromptSource (Bach et al., 2022), FLAN (Wei et al., 2022a; Longpre et al., 2023), and SuperNaturalInstruction (Wang et al., 2022b; Mishra et al., 2022) are at the forefront of this approach. These benchmarks represent a substantial array of diverse and heterogeneous NLP tasks, such as dialogue, reasoning tasks and coding tasks, unified under the
# Template with placeholders
Question: Given {{Premise}}, does this imply that "{{Hypothesis}}"? Yes, No or Maybe?
# Answer: {{Label}}
Task Instances From NLP Benchmarks Premise: This church choir sings to the masses as they sing joyous songs from the book at a church. Hypothesis: The church has cracks in the ceiling. Label: Maybe
Figure 2: An Example of Instruction from a Natural Language Inference (NLI) benchmark. | 2307.12966#15 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
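The NLI template shown in the row above (with placeholders for premise, hypothesis, and label) can be instantiated mechanically; the sketch below illustrates that step, assuming a simple dictionary representation of a benchmark instance rather than any particular library's data format.

```python
# Minimal sketch of turning a benchmark instance into a natural-language
# instruction by filling a PromptSource/FLAN-style template. The dict-based
# instance format is an assumption for illustration.
TEMPLATE = ('Question: Given {premise}, does this imply that "{hypothesis}"? '
            "Yes, No or Maybe?\nAnswer: {label}")

def to_instruction(example: dict) -> str:
    return TEMPLATE.format(**example)

nli_example = {
    "premise": ("This church choir sings to the masses as they sing joyous "
                "songs from the book at a church."),
    "hypothesis": "The church has cracks in the ceiling.",
    "label": "Maybe",
}
print(to_instruction(nli_example))
```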
2307.12573 | 16 | # 3 Multiple character and novel Object based interaction Estimation
# 3.1 Task Overview
We introduce a novel task, Multiple character and novel Object based interaction Estimation (MOE), which presents a challenge to comprehend complex interactions within long contexts. The input contexts, illustrated in Fig. 2, involve 11 turns encompassing intricate interactions among three players and an NPC. In the MOE task, the primary objective is to accurately determine the character who will act in the next turn and identify the corresponding actions. It is important to note that actions in Tabletop Role-Playing Games (TRPGs) can be simplified and classified as various pre-defined skills. Game Masters (GMs) play a crucial role in guiding players to perform correct skill checks during gameplay, resulting in GMs intuitively annotating all intended actions, which are recorded in the game log. As a result, the game logs naturally contain labeled character intentions, enabling MOE to leverage this data to construct intention understanding tasks with accurate intention labels. | 2307.12573#16 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
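Framed operationally, the MOE task described in the row above maps a dialogue context to the next acting character and a skill-level action label. The sketch below is a purely hypothetical interface and exact-match metric for that framing; the field names and example record are illustrative only and do not reproduce the dataset's actual schema.

```python
# Purely hypothetical sketch of the MOE prediction interface: given a
# dialogue context, predict (next character, pre-defined skill/action)
# and score predictions by exact match. Names are illustrative only.
from typing import Callable, List, Tuple

Prediction = Tuple[str, str]  # (character name, pre-defined skill/action)

def evaluate_moe(contexts: List[str],
                 gold: List[Prediction],
                 predict: Callable[[str], Prediction]) -> float:
    """Fraction of turns where both the acting character and the skill
    check match the GM-annotated label."""
    hits = sum(predict(ctx) == ref for ctx, ref in zip(contexts, gold))
    return hits / max(1, len(gold))

# Toy usage with a trivial predictor.
contexts = ["The rogue eyes the locked chest while the guard turns away..."]
gold = [("Rogue", "Sleight of Hand")]
print(evaluate_moe(contexts, gold, lambda ctx: ("Rogue", "Sleight of Hand")))
```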
2307.12856 | 16 | Previous works demonstrate that generalist LLMs, such as T5 (Raffel et al., 2020), Flan-T5 (Chung et al., 2022), and InstructGPT (Ouyang et al., 2022), have a capability of manipulating the web environments (Shi et al., 2017) with great HTML comprehension (Furuta et al., 2023; Gur et al., 2022; Kim et al., 2023). However, they have not fully leveraged the HTML-specific inductive bias on syntax and semantics considered in the prior specialist transformer models (Li et al., 2021b; Wang et al., 2022a; Zhao et al., 2022). We here introduce HTML-T5, a pre-trained encoder-decoder language model, by interpolating the generalist and specialist nature of language models to solve downstream HTML-based web automation tasks efficiently. HTML-T5 processes HTML documents in a text-to-text manner, and leverages local and global attentions (Ainslie et al., 2020; Guo et al., 2022) in the encoder to handle the hierarchical structure of long HTML inputs. We pre-train it with large-scale HTML corpus curated from CommonCrawl on | 2307.12856#16 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
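The record above describes HTML-T5's encoder combining local attention over neighboring tokens with transient global tokens computed per block of the input. As an illustrative aside, here is a minimal numpy sketch of those two attention patterns; the radius, block size, and the particular normalization are assumptions chosen for clarity, not the model's actual configuration.

```python
import numpy as np

def local_attention_mask(seq_len: int, radius: int) -> np.ndarray:
    # Token i may only attend to tokens j with |i - j| <= radius (leaf-level neighborhoods).
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= radius

def transient_global_tokens(token_emb: np.ndarray, block_size: int) -> np.ndarray:
    # Each fixed-size block is summarized into one "global" vector by summing the
    # block's embeddings and normalizing; every token may additionally attend to
    # these block summaries, giving a coarse view of the whole document.
    seq_len, dim = token_emb.shape
    n_blocks = -(-seq_len // block_size)  # ceil division
    summaries = np.zeros((n_blocks, dim))
    for b in range(n_blocks):
        block = token_emb[b * block_size:(b + 1) * block_size]
        total = block.sum(axis=0)
        summaries[b] = total / (np.linalg.norm(total) + 1e-6)
    return summaries

if __name__ == "__main__":
    emb = np.random.randn(12, 8)                              # 12 HTML tokens, 8-dim embeddings
    print(local_attention_mask(12, radius=2).shape)           # (12, 12)
    print(transient_global_tokens(emb, block_size=4).shape)   # (3, 8)
```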
2307.12950 | 16 | # 4 EXPERIMENTS
We now evaluate RLCD's effectiveness in practice.
Tasks. We run experiments on three tasks, corresponding to three different sets of prompts:
1. Harmlessness Prompts. Dialogues frequently containing offensive or otherwise socially unacceptable text. The goal is to generate outputs that are socially acceptable, ethical, and/or inoffensive, even when given such toxic context. As a secondary goal, the outputs should still be helpful and relevant to the conversation, rather than generic meaningless responses like "Thank you!" and "Sorry."
2. Helpfulness Prompts. Dialogues where the human is typically asking for information or advice. The goal is to generate outputs that are helpful.
3. Outlining Prompts. Dialogues where the human provides a story premise and asks for an outline. The goal is to write a well-formed and interesting story outline for the premise.
All prompts are framed as generating the next assistant response at some point in the given human-assistant conversation, as shown in e.g., "Initial Prompts" and "Completions" in Figure 1. | 2307.12950#16 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
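The record above frames every task prompt as generating the next assistant response in a partial human-assistant conversation. The small sketch below shows one way such a prompt could be rendered; the exact turn delimiters and speaker labels are assumptions, not the paper's verbatim format.

```python
def next_assistant_prompt(turns):
    # Render (speaker, text) turns and end with an open "Assistant:" line so the
    # model's completion becomes the next assistant response in the conversation.
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    lines.append("Assistant:")
    return "\n".join(lines)

print(next_assistant_prompt([
    ("Human", "Can you help me plan a quiet weekend trip?"),
    ("Assistant", "Sure! How far are you willing to travel?"),
    ("Human", "A few hours from the city at most."),
]))
```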
2307.12966 | 16 | Figure 2: An Example of Instruction from a Natural Language Inference (NLI) benchmark.
framework of language instructions. In each NLP benchmark, they engage annotators to craft several natural language templates that smoothly integrate all input data into a sequential text. The objective is to enhance LLMs' capability for multi-task learning across training tasks and foster generalization for unseen tasks. OIG (Nguyen et al., 2023) also combines instructions from FLAN-like NLP benchmarks with other types of open-ended instructions, such as how-to, maths and coding instructions. Concurrently, Honovich et al. (2022) put forth the concept of Unnatural Instructions, utilizing LLMs to generate new templates or instances bearing resemblance to the original instructions but with notable variances. Interestingly, the authors discovered that text-davinci-002 outperforms GPT-3 in responding to these generated instructions, given that GPT-3 often devolved into repetitive or tangential outputs after providing the correct answer. This model of instruction creation is highly scalable and can yield millions of instructions effectively. Further, Wang et al. (2023d) demonstrated that FLAN-style instructions considerably enhanced the reasoning capabilities of aligned LLMs.
# 2.1.2 Hand-crafted Instructions | 2307.12966#16 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
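The record above describes FLAN-style instruction construction, where annotator-written templates turn a benchmark instance (such as the NLI example in Figure 2) into a natural language instruction. The following is a small illustrative sketch of that templating step; the template wordings here are hypothetical, not the ones used by the cited benchmarks.

```python
import random

# Hypothetical FLAN-style templates for an NLI instance; real benchmarks ship
# several human-written templates per task and sample among them.
NLI_TEMPLATES = [
    ("Premise: {premise}\nHypothesis: {hypothesis}\n"
     "Does the premise entail the hypothesis? Answer entailment, neutral, or contradiction."),
    ("Read the text and the claim, then state their relation "
     "(entailment / neutral / contradiction).\nText: {premise}\nClaim: {hypothesis}"),
]

def render_nli_instruction(premise: str, hypothesis: str, label: str) -> dict:
    # Turn one labeled benchmark example into an (instruction, output) training pair.
    template = random.choice(NLI_TEMPLATES)
    return {"instruction": template.format(premise=premise, hypothesis=hypothesis),
            "output": label}

print(render_nli_instruction("A man is playing a guitar on stage.",
                             "Someone is performing music.",
                             "entailment"))
```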
2307.12573 | 17 | Moreover, there are two primary challenges that need to be addressed in MOE. Both challenges require the methods to provide a higher ability to understand the input interactions. The first challenge revolves around comprehending the behaviors and intentions of multiple characters in complex scenarios. As depicted in Fig. 2, the current game scenario involves four characters: the brown bear, Bill, Elvis Zem, and Maurice. While all characters interact with one another, only one player intends to perform an action and needs to undergo a skill check in the upcoming turn. In the first turn, Bill expresses his disinterest in engaging in the fight. Subsequently, Zem combines the electric spell with the sickle. Notably, the spell was cast in Turn 4 and its effects were explained by the GM in Turn 10. Thus, the spell's execution has already taken place and should not be reevaluated after Turn 10. The second challenge is understanding the game rules and aligning them with the characters' movements. In Fig. 2, Maurice intends to escape from the bear's attack. However, there
4 | 2307.12573#17 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12950 | 17 | Our harmlessness and helpfulness prompt sets are inspired by Bai et al. (2022a), and we use their training sets to derive the initial prompts for preference data simulation; each training set contains slightly over 40000 conversations.[2] We also include the outlining prompt set because we believe it may have higher requirements on long-range planning, in addition to simultaneously composing multiple different attributes (e.g., interestingness, well-formedness, relevance to the premise). For the outlining prompts we leverage 40000 existing premises from the internet, and assistant responses automatically start with "Here is a possible outline:
1." to encourage correct basic formatting regardless of which method is being evaluated.
RLCD Positive and Negative Prompts. For the harmlessness task, we write 16 pairs of context phrases for constructing p+ and p− (sampling a random pair for each use); these pairs are written to be similar to the 16 scoring prompts used in Bai et al. (2022b), who implement RLAIF for harmlessness. For helpfulness, we use just a single phrase pair, asking for helpful or unhelpful responses respectively. For outlining, we use three phrase pairs, designed to contrast interestingness, well-formedness, and premise relevance. | 2307.12950#17 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
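The record above explains that RLCD builds each simulated pair from a sampled pair of contrasting context phrases, yielding a positive prompt p+ and a negative prompt p−. The sketch below illustrates that construction using the parenthetical-before-"Assistant:" surface form the same paper describes in a later chunk; the two phrase pairs shown are invented examples, not the paper's actual 16 harmlessness pairs.

```python
import random

# Hypothetical contrasting phrase pairs in the spirit of the harmlessness set;
# the paper's actual 16 pairs are listed in its appendix.
PHRASE_PAIRS = [
    ("law-abiding, ethical", "illegal, unethical"),
    ("respectful and harmless", "rude and harmful"),
]

def contrast_prompts(conversation_prefix: str):
    # Same conversation prefix, but the final "Assistant" line carries a positive
    # vs. negative parenthetical description, yielding p_plus and p_minus.
    pos, neg = random.choice(PHRASE_PAIRS)
    p_plus = f"{conversation_prefix}\nAssistant ({pos}):"
    p_minus = f"{conversation_prefix}\nAssistant ({neg}):"
    return p_plus, p_minus

p_plus, p_minus = contrast_prompts("Human: How do I get back at a coworker I dislike?")
print(p_plus)
print(p_minus)
```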
2307.12966 | 17 | # 2.1.2 Hand-crafted Instructions
Constructing instructions from NLP benchmarks can be effective and painless. However, because many NLP datasets focus on a small and specific skill set, the resultant instructions are also relatively narrow in scope. Consequently, they may fall short in catering to the complex needs of real-world applications, such as engaging in dynamic human conversation.
To combat the above issues, it is possible to construct instructions via intentional manual annotations. How to effectively design a human-in-the-loop annotation framework becomes the key issue. The Databricks company collects a | 2307.12966#17 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 18 | 4
[Figure 1 panel labels: inputs range from a brief dialogue or the latest turn, through short contexts with brief records, to long contexts of complex interactions with multiple characters and grounded language; outputs are character and skill-check answers, e.g., Lu: Investigation, Timothy: Perception, Robin: History, Lucy: Perception.]
Figure 1: Examples of different tasks and datasets based on game logs of TRPG. Our MOE and MOD focus on the understanding of long and complex interactions with long contexts.
is no specific "escape" operation in the skill checks defined by the DND rules. Instead, the bear utilizes its strength to grapple Maurice in the game, and Maurice must also check their strength to contest against the bear. To answer this skill check, methods need to comprehend the intentions and movements of characters and, based on the game rules, infer the appropriate check items for the current turn, akin to a real human.
# 3.2 Evaluation | 2307.12573#18 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 18 | Model Architecture In contrast to natural language texts, HTML documents have an explicit hierar- chy from the tree structure; the relation of each element (e.g. <input>, <label>, <button>) and its attributes (e.g. class, label, id) are often defined locally, and those are iteratively integrated globally (e.g. <body>, <form>, <div>). To capture such a hierarchical structure of HTML, we adopt local and global attention mechanisms (Guo et al., 2022), instead of common dense attention (Raffel et al., 2020; Vaswani et al., 2017). Local attention restricts each token to only attend to neighboring tokens to the left and right. Transient global attention allows each input token to attend to beyond nearby tokens, by dividing the input sequence into blocks of tokens and computing global tokens with summation and normalization of the embeddings of every token in the block. Figure 4 describes the concepts of HTML-T5; leaf elements in HTML (green) could be processed by local attention, and internal elements (purple) could be compressed into transient global attention, which naturally fit the hierarchical syntax of HTML documents. We leverage the implementation of LongT5 (Guo et al., 2022) as base architectures using dense attention in the decoder.
| 2307.12856#18 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
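The record above motivates the local/global attention split by the hierarchy of HTML: leaf elements carry local content while internal container tags define the document-level structure. The stdlib sketch below simply makes that leaf/internal distinction visible on a tiny snippet; it is not part of the HTML-T5 pipeline, and the tag handling (e.g., the void-element set) is a simplification.

```python
from html.parser import HTMLParser

VOID_TAGS = {"input", "img", "br", "hr", "meta", "link"}

class HtmlHierarchySketch(HTMLParser):
    # Walk an HTML snippet and record leaf-level content (text and void elements)
    # versus the internal container tags whose nesting forms the hierarchy that
    # block-level global tokens are meant to summarize.
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.events = []

    def handle_starttag(self, tag, attrs):
        if tag in VOID_TAGS:
            self.events.append(("leaf", f"<{tag} {dict(attrs)}>", self.depth + 1))
        else:
            self.depth += 1
            self.events.append(("internal", tag, self.depth))

    def handle_endtag(self, tag):
        if tag not in VOID_TAGS:
            self.depth -= 1

    def handle_data(self, data):
        if data.strip():
            self.events.append(("leaf", data.strip(), self.depth))

doc = '<form class="login"><label>User</label><input id="u" type="text"></form>'
sketch = HtmlHierarchySketch()
sketch.feed(doc)
sketch.close()
for kind, depth, content in [(k, d, c) for k, c, d in sketch.events]:
    print(kind, depth, content)
```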
2307.12950 | 18 | For harmlessness and helpfulness, we create training signal while roughly matching the surface forms of p+ and p− by simply placing contrasting descriptions in parentheses before the colon in "Assistant:" indicators, as shown for example in Figure 1. In the outlining task, we end all prompts with "1." to indicate the beginning of a numbered outline. All prompts are zero-shot. See Appendix A for full details on preference data simulation prompt formats.
RLCD Implementation and Hyperparameters. For each task we run two variations of RLCD, RLCD7B and RLCD30B, which simulate preference data using the base (pretrained, unaligned) LLaMA-7B and LLaMA-30B respectively. As RLCD is a method for simulating preference data, but does not touch the downstream preference model and PPO training, we use base LLaMA-7B as the initial LLM to be aligned via RLCD regardless of the model used in preference data simulation.[3]
[2] It is likely that these initial prompts could also be generated procedurally from a much smaller seed set (Bai | 2307.12950#18 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
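The record above explains that the contrasting descriptions only appear in the simulation prompts, while the downstream preference model and PPO stages are standard. The sketch below shows one plausible way the simulated pair (o+, o−) becomes a preference-model training example, with o+ labeled as chosen and o− as rejected; the field names and the assumption that the parenthetical descriptions are stripped from the stored prompt are illustrative, not the AlpacaFarm schema.

```python
def preference_example(prefix: str, o_plus: str, o_minus: str) -> dict:
    # The output generated under the positive prompt is labeled "chosen" and the
    # one from the negative prompt "rejected"; the stored prompt is the plain
    # conversation prefix (assumed here to have the contrasting descriptions removed).
    return {"prompt": f"{prefix}\nAssistant:", "chosen": o_plus, "rejected": o_minus}

example = preference_example(
    "Human: My neighbor's dog keeps barking at night. What should I do?",
    "Try talking with your neighbor first, and check whether your city has a noise ordinance.",
    "Leave a threatening note on their door; that usually shuts people up.",
)
print(example["prompt"])
print("chosen:", example["chosen"])
```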
2307.12966 | 18 | 15k crowd-sourcing instruction dataset databricks- dolly-15k (Conover et al., 2023) from its employees. Those people are instructed to create prompt / re- sponse pairs in each of eight different instruction categories, including the seven outlined in Ouyang et al. (2022), as well as an open-ended free-form category. Importantly, they are explicitly instructed not to use external web information, as well as outputs from generative AI systems. Kopf et al. (2023) construct the OpenAssistant corpus with over 10,000 dialogues using more than 13,000 in- ternational annotators. The annotation process in- cludes a) writing initial prompts for dialogue; b) replying as an assistant or user; c) ranking dia- logue quality to explicitly provide human prefer- ences. As a result, this corpus can be used for SFT and human preference alignment training for LLMs. Zhang et al. (2023a) construct high-quality Chinese instructions from existing English instruc- tion datasets. They ï¬rst translate the English in- structions into Chinese, then verify whether these translations are usable. Finally, they hire annota- tors to correct | 2307.12966#18 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 19 | # 3.2 Evaluation
To provide a comprehensive assessment of context understanding in MOE, we evaluate the predicted character names and overall predictions separately. Specifically, we measure the average Character Precision (CP) and Character Recall (CR) for character names, as well as the average Skill Precision (SP) and Skill Recall (SR) for both character names and associated skills. Additionally, we compute the Character F-score (CF) for character names and the Skill F-score (SF) for character names with associated skills.
For N evaluation examples, where example i has p^i predicted characters, g^i ground-truth characters, t_c^i correctly predicted character names, and t_s^i correctly predicted (character, skill) pairs:

CP = \frac{1}{N} \sum_{i} \frac{t_c^i}{p^i} \quad (1)

CR = \frac{1}{N} \sum_{i} \frac{t_c^i}{g^i} \quad (2)

SP = \frac{1}{N} \sum_{i} \frac{t_s^i}{p^i} \quad (3)

SR = \frac{1}{N} \sum_{i} \frac{t_s^i}{g^i} \quad (4)

CF = \frac{2 \cdot CP \cdot CR}{CP + CR} \quad (5)

SF = \frac{2 \cdot SP \cdot SR}{SP + SR} \quad (6)
Input context | 2307.12573#19 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
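To make the metrics in the record above concrete, here is a small sketch that computes CP, CR, SP, SR and the two F-scores from per-example predictions. The data layout (one character-to-skill dict per example) and the per-example averaging follow my reading of the equations above and are assumptions rather than the benchmark's official scorer.

```python
def f1(p, r):
    # Harmonic mean, guarded against the degenerate all-zero case.
    return 2 * p * r / (p + r) if (p + r) else 0.0

def moe_scores(predictions, ground_truths):
    # predictions / ground_truths: lists (one entry per example) of
    # {character_name: skill} dicts. Character-level scores ignore the skill;
    # skill-level scores require both the name and the skill to match.
    n = len(predictions)
    cp = cr = sp = sr = 0.0
    for pred, gold in zip(predictions, ground_truths):
        pred_chars, gold_chars = set(pred), set(gold)
        true_chars = pred_chars & gold_chars
        true_skills = {c for c in true_chars if pred[c] == gold[c]}
        cp += len(true_chars) / len(pred_chars) if pred_chars else 0.0
        cr += len(true_chars) / len(gold_chars) if gold_chars else 0.0
        sp += len(true_skills) / len(pred_chars) if pred_chars else 0.0
        sr += len(true_skills) / len(gold_chars) if gold_chars else 0.0
    cp, cr, sp, sr = cp / n, cr / n, sp / n, sr / n
    return {"CP": cp, "CR": cr, "SP": sp, "SR": sr,
            "CF": f1(cp, cr), "SF": f1(sp, sr)}

preds = [{"Bill": "Perception", "Maurice": "Strength"}]
golds = [{"Maurice": "Strength", "Elvis Zem": "Arcana"}]
print(moe_scores(preds, golds))  # CP=CR=SP=SR=CF=SF=0.5 for this single example
```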
2307.12856 | 19 | 4
Modules (Plan, Sum); real-estate (Success, Score); social-media (Success, Score); map (Success, Score); Error Ratio % (Program, Plan, Sum), each reported as real-estate/social-media/map
Flan-U-PaLM (Plan: ✗, Sum: ✗): 10.0, 55.3; 20.0, 25.0; 10.0, 51.3; errors 36/88/11, 38/0/78, 26/12/11
Flan-U-PaLM+P (Plan: ✓, Sum: ✗): 50.0, 79.5; 20.0, 38.3; 30.0, 73.8; errors 39/65/14, 56/30/29, 5/5/57
Flan-U-PaLM+S (Plan: ✗, Sum: ✓): 0.0, 45.7; 25.0, 62.1; 15.0, 46.3; errors 30/67/0, 40/13/100, 30/20/0
WebAgent (Plan: ✓, Sum: ✓): 65.0, 87.6; 70.0, 85.8; 80.0, 93.8; errors 20/33/25, 70/50/50, 10/17/25 | 2307.12856#19 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 19 | [2] It is likely that these initial prompts could also be generated procedurally from a much smaller seed set (Bai
et al., 2022a; Sun et al., 2023), although we do not empirically investigate this possibility in this work. [3] Alternatively, simulating preference data with LLaMA-30B while aligning LLaMA-7B downstream can be viewed as distillation from a teacher model. In this view, we are evaluating RLCD compared to baselines on effectiveness when used for model distillation.
Our implementation is based on the AlpacaFarm codebase (Dubois et al., 2023). We optimize the training parameters for PPO, in particular the number of training steps and KL-regularization term, following Dubois et al. (2023)'s observations that these parameters can have some impact on final output quality. We otherwise use AlpacaFarm's default hyperparameters for PPO and for supervised fine-tuning; see Appendix E for full details on hyperparameters.
Baselines. We compare RLCD to three baselines: | 2307.12950#19 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 19 | They ï¬rst translate the English in- structions into Chinese, then verify whether these translations are usable. Finally, they hire annota- tors to correct and re-organize the instructions into the task description, input, output format in the selected corpus. ShareGPT 2, which is collected by Chiang et al. (2023), is an interesting explo- ration for crowd-sourcing human-written instruc- tions. It is a website that encourages users to upload and share their interesting ChatGPT/GPT4 conver- sations. Such a mechanism can effectively col- lect a large number of diverse and human-written instructions that likely trigger high-quality Chat- GPT/GPT4 responses. Popular online QA websites, such as Stack Overï¬ow 3, Quora 4 and Zhihu 5, and large user-generated content databases, such as Wikipedia 6, are all reliable sources to provide high-quality human-written prompts for this pur- pose.Both Ding et al. (2023) and Xu et al. (2023c) propose to use these resources as the seed instruc- tions to prompt GPT-3.5 to generate high-quality synthetic multi-turn | 2307.12966#19 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
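The record above notes that seed questions from online QA sites and user-generated content databases can be used to prompt a strong LLM for synthetic multi-turn instruction data. The sketch below builds one such seeding meta-prompt; the wording is hypothetical and differs from the prompts used by the cited papers, and no API call is made here.

```python
def build_synthesis_prompt(seed_question: str) -> str:
    # Hypothetical meta-prompt: condition the generator on a real user question
    # (e.g., harvested from an online QA site) and ask it to expand the seed into
    # a multi-turn dialogue.
    return (
        "You are simulating a conversation between a curious user and a helpful assistant.\n"
        f"The conversation starts from this real user question:\n{seed_question}\n"
        "Continue for three more turns, alternating User and Assistant, and keep the "
        "assistant's answers specific and factual."
    )

print(build_synthesis_prompt("How do I merge two dictionaries in Python?"))
```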
2307.12573 | 20 | [Figure 2 game-log excerpt, English translations of the bilingual turns:] [Turn 1] Bill: "Great! Just a little more!" Bill cheered from afar on the carriage. [Turn 2] GM: The brown bear clumsily stretches out its paws, lunging at Maurice. Although the movement is stiff and slow, the strength behind it is definitely not to be underestimated. (Annotation: Bill shows disinterest in engaging in the fight.) [Turn 4] Elvis Zem: Takes a step forward, then slashes at the brown bear with his sickle, electricity sparking off its blade. [Turn 5] Maurice: Dodges the bear's claw and circles to its back. [Turn 6] GM: Maurice imagines how he would gracefully dodge the bear's claws and get to its back, only to find himself pinned to the snow by the bear. Bill: "Oh! A spell!" | 2307.12573#20 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 20 | Table 1: Success rate of real-world web automation on real estate, social media and map websites. The score stands for the percentage of covered attributes specified in given instructions. WebAgent, with language model modules for planning and summarization, achieves the best success (65%, 70%, 80%, respectively), surpassing other baselines, such as a single Flan-U-PaLM, that with a planning language model (Flan-U-PaLM+P), and that with a summarization language model (Flan-U-PaLM+S). Without language model modules, prompted Flan-U-PaLM plans in an open-loop manner (Plan: X) and regular-expression-based retrieval summarizes HTML inputs (Sum: *). The results imply that self-experience supervision notably improve the performance, and task planning should be learned by finetuning domain language models for closed-loop planning, rather than by prompting single LLM for open-loop planning. The error analysis describes the ratio across three types of errors in (real-estate) /(social-media) / (map) domains, which also points out that better adaptive planner to decompose the given instructions would contribute to further improvements of WebAgent. | 2307.12856#20 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 20 | Baselines. We compare RLCD to three baselines:
1. LLaMA, i.e., just directly generating outputs using the base unaligned LLaMA-7B (the same initial LLM to be aligned by RLCD and other baselines), included as a sanity check.
2. RLAIF, following Constitutional AI (Bai et al., 2022b). Since their code and models are non-public, we re-implement using AlpacaFarm. We use the exact same prompt templates as Bai et al. (2022b) for harmlessness scoring, although we use zero-shot prompting to match RLCD. For helpfulness and outlining scoring we use prompts as similar as possible to those used in RLCD (Appendix A).
3. Context-Dist, a context distillation baseline which conducts supervised fine-tuning on only the outputs o+ from positive prompts p+ in RLCD.
As with RLCD, we experiment with simulating preference data using both LLaMA-7B and LLaMA-30B for RLAIF and Context-Dist (again denoted by subscripts, e.g., RLAIF7B), though the base model to be aligned remains LLaMA-7B in all cases. | 2307.12950#20 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
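The record above describes the Context-Dist baseline as supervised fine-tuning on only the positive-prompt outputs o+. The sketch below builds such an SFT dataset from simulated pairs; the assumption that the positive surface-form description is dropped from the fine-tuning input, in the spirit of context distillation, and the field names are illustrative rather than taken from the baseline's actual code.

```python
def build_context_distillation_sft(pairs):
    # Keep only the positive-prompt outputs: each example fine-tunes the base model
    # to produce o_plus given the plain conversation prefix.
    return [{"input": f"{prefix}\nAssistant:", "target": o_plus}
            for prefix, o_plus, _o_minus in pairs]

pairs = [
    ("Human: How can I save money on groceries?",
     "Plan meals for the week, buy store brands, and look for bulk discounts.",
     "Just steal what you need when no one is looking."),
]
print(build_context_distillation_sft(pairs)[0]["target"])
```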
2307.12573 | 21 | pinned to the snow by the bear. Bill: "Oh! A spell!" Bill squints his eyes, "An electric spell, who was your master, Mr. Zem?" Elvis Zem: "My teacher was a nobody, and I only know a few minor spells." (Annotation: Elvis Zem combines the electric spell with the sickle; the spell was cast in Turn 4 and its effects were explained by the GM in Turn 10.) (Figure labels: Understand interactions; Estimate characters & actions.) Prediction for [Turn 12]: Character name: Bill, Check skill ...; Character name: Maurice, Check skill ...; Character name: Elvis Zem, Check skill ...; Ground Truth: ... [Turn 10] GM: The brown bear is severely injured by Elvis's strike, letting out a painful | 2307.12573#21 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 21 | Pre-Training with Mixture of Long-Span Denoising The performance of language models in downstream tasks highly depends on the knowledge learned in pre-training. To incorporate further inductive bias on HTML into scalable language models, we perform self-supervised pre-training with large-scale HTML corpus. We here employ span denoising objective, where we mask the input texts with random spans of tokens (following normal distributions with mean span length µ), and the models take all other tokens from the documents as inputs to predict corrupted spans (Ainslie et al., 2023; Raffel et al., 2020; Tay et al., 2022). To deal with the sparsity of contents tokens in HTML documents, we introduce a mixture of long-span denoising objective, by masking input tokens with longer mean span lengths than popular value for natural language (e.g. µ = 3). Such a shorter mean span length only masks less meaningful chunks, such as </, id=, or "> (Figure 4), which might not be helpful for LLMs to capture the syntax and semantics of HTML. In contrast, longer span can contain more semantically meaningful chunks, such as <form class=" or type="submit">. We empirically find µ â {8, 64} is the optimal mixture (Section 4.2). | 2307.12856#21 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
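
To make the mixture of long-span denoising objective from the 2307.12856#21 chunk above concrete, here is a minimal sketch that samples spans with mean length drawn from {8, 64} until roughly 15% of tokens are masked, then replaces them with sentinel markers. The span-sampling details and sentinel format are assumptions, not the authors' exact pre-training code.

```python
import random

def long_span_corrupt(tokens, mean_spans=(8, 64), corruption_rate=0.15, max_tries=10000):
    """Mask random long spans of an HTML token list for denoising pre-training.

    Returns (corrupted_input, target): masked spans are replaced by sentinel
    markers <extra_id_k> in the input, and the target lists the dropped spans.
    Illustrative sketch only, not the exact pre-training implementation.
    """
    n = len(tokens)
    budget = int(n * corruption_rate)
    masked = [False] * n
    tries = 0
    while budget > 0 and tries < max_tries:
        tries += 1
        mu = random.choice(mean_spans)                  # mixture of mean span lengths
        length = max(1, int(random.gauss(mu, mu / 4)))  # span length centered on mu
        start = random.randrange(0, max(1, n - length))
        if any(masked[start:start + length]):
            continue                                    # avoid overlapping spans
        for i in range(start, min(n, start + length)):
            masked[i] = True
        budget -= length

    corrupted, target, k, i = [], [], 0, 0
    while i < n:
        if masked[i]:
            corrupted.append(f"<extra_id_{k}>")
            target.append(f"<extra_id_{k}>")
            while i < n and masked[i]:
                target.append(tokens[i])
                i += 1
            k += 1
        else:
            corrupted.append(tokens[i])
            i += 1
    return corrupted, target
```

With long mean span lengths, a single sentinel tends to cover a semantically meaningful chunk such as `<form class="` rather than isolated punctuation tokens, which is the motivation stated in the chunk above.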
2307.12950 | 21 | Metrics. For each task, we run pairwise evaluations for RLCD compared to each baseline. As the harmlessness prompts from Bai et al. (2022b), while focusing primarily on harmlessness, additionally encourage helpfulness to some degree (Appendix A.1), we measure both harmlessness (Harm) and helpfulness (Help) for the harmlessness task.4 For the helpfulness and outlining tasks we collect just one set of labels for overall helpfulness (Help) and outline quality (Qual) respectively.
For each pairwise evaluation, we ask humans to compare 200 examples on a Likert scale from 1 (output A is much better) to 8 (output B is much better) following Anthropic as described in Rajani et al. (2023). We additionally query GPT-4 for binary judgments on 1000 examples. Examples are always presented in random order during pairwise comparison, with human scores normalized afterward so that higher is better.5 See Appendix F for complete evaluation details. | 2307.12950#21 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 21 | # Instructions From Strong LLMs
With the emergence of strong closed-source LLMs (e.g., ChatGPT/GPT4), it is also feasible to automate the collection process to obtain various types of synthetic instructions (e.g., single-turn, multi-turn, and multilingual instructions) by providing appropriate prompts to these LLMs. The main challenge is how to effectively prompt LLMs to generate diverse and high-quality instructions. (Footnotes: 2 https://sharegpt.com/ 3 https://stackoverflow.com/ 4 https://www.quora.com/ 5 https://www.zhihu.com/ 6 https://en.wikipedia.org/)
[Figure 3 diagram: seed instructions feed an instructions pool; in-context prompting of the LLM yields new instruction inputs (improving input quality) and corresponding outputs via a response prompt (improving output quality), which update the pool.]
Figure 3: The overview of self-instruction. Starting from instructions in the pool, self-instruction leverages LLMs to produce new instructions via in-context learning. After filtering, LLMs are then prompted to respond to the remaining instructions. The full instructions are then added to the pool. Research efforts have been devoted to 1) Improving instruction input quality, and 2) Improving instruction output quality.
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 22 | We adopt 4096 input sequence length and 910 output sequence length during the denoising pre-training. In total, 15% of input tokens are randomly masked. For the dataset, we prepare 100 WARC files (April 2019) from CommonCrawl, and pre-process the raw HTML by removing non-Unicode and alphanumeric documents and extracting subtrees around <label> elements that have a for attribute, to reduce the noise in the training corpus, which results in about 3.41M examples. We train the models with 100K iterations following other pre-training strategies for T5 families (Chung et al., 2022; Lester et al., 2021). See Appendix C for further details.
3.2 ALIGNMENT WITH SELF-EXPERIENCE SUPERVISION | 2307.12856#22 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
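
A hedged sketch of the pre-processing step in the 2307.12856#22 chunk above, which keeps subtrees around <label> elements carrying a for attribute; BeautifulSoup and the `hops_up` parameter are illustrative choices, since the paper does not specify the tooling.

```python
from bs4 import BeautifulSoup

def extract_label_subtrees(html, hops_up=2):
    """Collect small HTML subtrees around <label for=...> elements.

    `hops_up` (how far to climb toward the root before snapshotting the
    subtree) is an illustrative choice; the paper does not specify it.
    """
    soup = BeautifulSoup(html, "html.parser")
    subtrees = []
    for label in soup.find_all("label", attrs={"for": True}):
        node = label
        for _ in range(hops_up):
            # Stop climbing once we reach the document root.
            if node.parent is not None and node.parent.name != "[document]":
                node = node.parent
        subtrees.append(str(node))
    return subtrees
```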
2307.12950 | 22 | | Methods | Harm (harmlessness prompts) | Help (harmlessness prompts) | Help (helpfulness prompts) | Qual (outlining prompts) |
|---|---|---|---|---|
| RLCD7B vs. LLaMA | 5.44 / 3.56 | 5.30 / 3.70 | 6.52 / 2.48 | 6.02 / 2.98 |
| RLCD7B vs. RLAIF7B | 5.62 / 3.38 | 4.64 / 4.36 | 5.88 / 3.12 | 5.97 / 3.03 |
| RLCD7B vs. Context-Dist7B | 4.51 / 4.49 | 4.69 / 4.31 | 5.73 / 3.27 | 5.67 / 3.33 |
| RLCD30B vs. LLaMA | 5.59 / 3.41 | 5.45 / 3.55 | 6.42 / 2.58 | 5.03 / 3.97 |
| RLCD30B vs. RLAIF30B | 4.71 / 4.29 | 4.50 / 4.50 | 4.51 / 4.49 | 4.76 / 4.24 |
| RLCD30B vs. Context-Dist30B | 4.80 / 4.20 | 4.88 / 4.12 | 5.72 / 3.28 | 5.78 / 3.22 | | 2307.12950#22 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 22 | 2.2.1 Self-Instruction Self-Instruct (Wang et al., 2022a) was among the pioneers to automate the instruction collection process. It employed the in-context learning capability of ChatGPT to generate large-scale instructions from a pre-defined set of human-annotated instructions covering diverse topics and task types, as illustrated in Figure 3. The automatically generated instructions are followed by a quality control filtering process, and this iterative process continues until the desired data volume has been achieved. Interestingly, the researchers discovered that GPT-3 (Brown et al., 2020), fine-tuned with these instructions, performed better than models fine-tuned using instructions derived from NLP benchmarks (the SuperNI benchmark (Wang et al., 2022b) and User-Oriented Instructions, as discussed in Section 2.1). Several follow-up attempts, such as Alpaca (Taori et al., 2023) and its variants (Cui et al., 2023a), follow this Self-Instruct framework. More subsequent research efforts w.r.t. enhancing instruction diversity, quality, and complexity will be elaborated as follows.
Improving Input Quality One limitation is that the synthetic instructions from strong LLMs often suffer from diversity issues. For example, Jentzsch | 2307.12966#22 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
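
A compact sketch of the Self-Instruct loop summarized in the 2307.12966#22 chunk above: sample in-context demonstrations from the pool, prompt the LLM for a new instruction, filter it, prompt for a response, and grow the pool. The `llm` callable, prompt strings, and filters are placeholders rather than the original pipeline.

```python
import random

def self_instruct(seed_instructions, llm, n_target=1000, n_demos=6, max_steps=100000):
    """Iteratively grow an instruction pool with LLM-generated instructions."""
    pool = list(seed_instructions)
    for _ in range(max_steps):
        if len(pool) >= n_target:
            break
        demos = random.sample(pool, min(n_demos, len(pool)))
        gen_prompt = ("Come up with a new task instruction.\n"
                      + "\n".join(f"Instruction: {d['instruction']}" for d in demos)
                      + "\nInstruction:")
        new_instruction = llm(gen_prompt).strip()
        # Quality-control filtering: drop empty, very short, or duplicate instructions.
        if len(new_instruction) < 10:
            continue
        if any(new_instruction.lower() == d["instruction"].lower() for d in pool):
            continue
        response = llm(f"Instruction: {new_instruction}\nResponse:").strip()
        pool.append({"instruction": new_instruction, "response": response})
    return pool
```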
2307.12573 | 23 | [Turn12] GM: ?
Figure 2: Example of MOE. In the given context, a scenario unfolds where three players find themselves facing a formidable brown bear in combat. Each character actively participates in the battle, except for Bill, who observes from the safety of a carriage. During the encounter, Zem casts a spell; however, it is important to note that the skill check for this particular spell has already been performed after Turn 4 and was explained by the DM in Turn 10. Consequently, the only character currently requiring a skill check is Maurice. Despite his intention to escape from the bear, the DND rule does not include a specific "escape" skill. In such a predicament, Maurice must utilize his strength to resist the bear's attempt to grapple him. As a result, the DM advises him to perform a strength check in adherence to the DND rule. Furthermore, we also present the predicted results from GPT-3.5 utilizing template prompts. The results demonstrate a lack of effective context comprehension and highlight the challenges in understanding complex interactions among agents.
where i indicates the i-th sample, tc represents the number of correctly predicted character names, ts denotes the number of correct predictions for both character names and associated skills, p indicates the total number of predicted tuples, g represents the number of answers in the ground truth, and K represents the total number of evaluation data samples. | 2307.12573#23 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 23 | 3.2 ALIGNMENT WITH SELF-EXPERIENCE SUPERVISION
Another bottleneck for building real-world web automation agents is collecting demonstrations to align LLM with real websites. Humans could perform instruction following on real websites easily, but it is infeasible to manually annotate all the instruction decomposition, snippet extractions, and executable programs. To reduce such a burden, we introduce a self-experience supervision, where the language model agents learn from the experience that they themselves face on real websites with minimal human intervention. We first prepare the templates of instructions. The scripted agents procedurally parse instructions into the sequence of sub-instructions, regular-expression-based retrieval specifies the elements to be summarized, and conditioned on those, Flan-U-PaLM executes web actions via program synthesis. The generated demonstrations following the steps above may result in success and failure, but the success criteria for real-world tasks is hard to automate. Instead, to filter the experience, we leverage the environmental feedback that can remove critical failures; for instance, the program execution errors, retriever errors, and clearly wrong prefix of URL (Ni et al., 2023). Our WebAgent aligns domain-expert language models, HTML-T5, with those self-collected real-world experiences via finetuning (Wang et al., 2022b). This self-supervision process realizes the generalization and alignment of language model agents to challenging real-world tasks. | 2307.12856#23 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
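
A minimal sketch of the environmental-feedback filtering described in the 2307.12856#23 chunk above, which drops self-collected episodes with critical failures (program execution errors, retriever errors, or a clearly wrong URL prefix) before finetuning; the episode field names are assumptions.

```python
def filter_self_experience(episodes, expected_url_prefix):
    """Keep only self-collected demonstrations without critical failures."""
    kept = []
    for ep in episodes:
        if ep.get("program_error"):        # generated Python/Selenium code raised
            continue
        if ep.get("retriever_error"):      # snippet extraction failed
            continue
        final_url = ep.get("final_url", "")
        if not final_url.startswith(expected_url_prefix):
            continue                       # ended up on a clearly wrong page
        kept.append(ep)
    return kept
```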
2307.12950 | 23 | Table 2: Human comparison results for RLCD against each baseline, evaluating harmlessness and helpfulness on harmlessness prompt set; helpfulness on helpfulness prompt set; and outline quality on story outlining prompt set. Annotators indicated which output was better, and by how much, on a 1-8 scale; scores here are normalized so that higher is better. RLCD is in all cases equal or better (often substantially better) compared to baselines, for all tasks and for preference data simulation at both 7B and 30B model scale.
Results. RLCD outperforms all of our baselines according to both human (Table 2) and GPT-4 (Table 3) evaluations, validating the effectiveness of our proposed data generation procedure at this model scale. On harmlessness prompts, RLCD is better on the main harmlessness metric, while
4 See Appendix D for a version of the harmlessness task which focuses more exclusively on harmlessness. 5 For example, if the human label is 3 (output A is moderately better), then output B's normalized score is 3 while output A's normalized score is 6.
| 2307.12950#23 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
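
A small arithmetic sketch of the score normalization described in footnote 5 of the 2307.12950#23 chunk above, assuming the 1-8 pairwise label where 1 means output A is much better and 8 means output B is much better.

```python
def normalize_likert(label):
    """Map a 1-8 pairwise label to per-output scores where higher is better.

    Example: label 3 (A moderately better) -> A gets 6, B gets 3,
    matching the example in footnote 5.
    """
    assert 1 <= label <= 8
    score_b = label
    score_a = 9 - label
    return score_a, score_b
```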
2307.12966 | 23 | and Kersting (2023) find that when prompting to generate jokes, ChatGPT only produces 25 unique joke patterns in thousands of samples. To improve the instruction input diversity, Wang et al. (2022a) propose different input and output generation strategies for different types of instructions. They first prompt ChatGPT to classify generated instruction into classification tasks or non-classification tasks. Then, they deploy output-first and input-first strategies for classification tasks and non-classification tasks, respectively. Others propose to add various external information into the input prompts to enhance diversity and factuality, including Wikipedia Category Keywords (Wu et al., 2023), user-generated questions on the Internet (e.g., Quora, StackOverflow) (Xu et al., 2023c; Anand et al., 2023) and instructions from the SuperNaturalInstruction benchmark (Honovich et al., 2022). Yu et al. (2023b) also shows that explicitly | 2307.12966#23 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 24 | The metrics CP and CR are employed to evaluate the understanding of character intentions, focusing on the accuracy of predicting the characters about to take action. The proposed methods are required to provide correct character predictions, thereby achieving higher values for CP and CR. Then, to achieve higher SP and SR, the proposed methods must accurately comprehend both character intentions and the rules of the game. It is worth noting that if the model consistently predicts all characters as outputs, it may increase the recall but reduce the precision. Conversely, if the method tends to predict only one character, it may achieve higher precision but lower recall. To strike a balance between these factors, we employ F-scores as the final evaluation metrics in our experiments. The F-scores consider both precision and recall values, providing a comprehensive measure of the performance.
Figure 3: Distribution of character number in MOE labels.
# 3.3 Skill Check Annotation
In Tabletop Role-Playing Games (TRPGs), skill checks can directly indicate the players' intentions in the game, which play a crucial role in determining the success or failure of character actions. When a player wishes to have their character perform an action involving a skill, such as combat, persuasion, or searching for clues, the game models or rules provide a difficulty level or target number for the
| 2307.12573#24 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
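
A hedged sketch of the character- and skill-level precision, recall, and F-scores discussed in the 2307.12573 chunks above, computed from tc, ts, p, and g over K samples. The exact formulas appear in a chunk not shown here, so the per-sample averaging below is a plausible reading rather than the benchmark's definitive implementation.

```python
def moe_metrics(samples):
    """Compute CP/CR/SP/SR and F-scores for MOE predictions.

    Each sample provides: tc (correctly predicted character names), ts (correct
    character+skill tuples), p (number of predicted tuples), g (number of
    ground-truth answers). Per-sample precision/recall are averaged over the
    K samples; this is an assumed averaging scheme for illustration.
    """
    if not samples:
        return {}
    K = len(samples)
    cp = sum(s["tc"] / s["p"] for s in samples if s["p"]) / K
    cr = sum(s["tc"] / s["g"] for s in samples if s["g"]) / K
    sp = sum(s["ts"] / s["p"] for s in samples if s["p"]) / K
    sr = sum(s["ts"] / s["g"] for s in samples if s["g"]) / K
    f1 = lambda prec, rec: 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return {"CP": cp, "CR": cr, "CF1": f1(cp, cr),
            "SP": sp, "SR": sr, "SF1": f1(sp, sr)}
```

As the chunk above notes, reporting the F-scores guards against degenerate strategies that trade precision for recall (predicting every character) or recall for precision (predicting only one).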
2307.12856 | 24 | Finetuning for Planning and Summarization We align language models to perform closed-loop planning with a sequence of sub-instructions and to summarize long HTML documents into concise
Figure 5: Example episodes of real-world web automation in map domain. Considering the given instruction and HTML, WebAgent predicts the next sub-instruction and task-relevant snippet, and then synthesizes the Python script (gray), while treating the sub-instruction as a comment in the script. See Appendix G for extended figure. | 2307.12856#24 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 24 | | Methods | Harm (harmlessness prompts) | Help (harmlessness prompts) | Help (helpfulness prompts) | Qual (outlining prompts) |
|---|---|---|---|---|
| RLCD7B vs. LLaMA | 82.8 / 17.2 | 77.0 / 23.0 | 90.7 / 9.3 | 76.0 / 24.0 |
| RLCD7B vs. RLAIF7B | 84.8 / 15.2 | 71.0 / 29.0 | 85.4 / 14.6 | 78.5 / 21.5 |
| RLCD7B vs. Context-Dist7B | 69.7 / 30.3 | 67.7 / 32.3 | 89.5 / 10.5 | 71.8 / 28.2 |
| RLCD30B vs. LLaMA | 78.9 / 21.1 | 78.3 / 21.7 | 81.3 / 18.7 | 55.7 / 44.3 |
| RLCD30B vs. RLAIF30B | 60.3 / 39.7 | 55.3 / 44.7 | 47.8 / 52.2 | 35.9 / 64.1 |
| RLCD30B vs. Context-Dist30B | 64.5 / 35.5 | 69.3 / 30.7 | 74.9 / 25.1 | 62.2 / 37.8 | | 2307.12950#24 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 24 | and instructions from the Su- perNaturalInstruction benchmark (Honovich et al., 2022). Yu et al. (2023b) also shows that explic- itly adding meta-information (e.g., length, topics, style) into the data generation prompts can effec- tively remove the bias in the generated synthetic data and improve the diversity of those synthetic data. Furthermore, Xu et al. (2023b) propose a novel Evol-Instruct framework to obtain complex and difï¬cult instructions gradually. Instead of using existing instructions to prompt LLMs to produce new instructions via in-context learning, in Evol- Instruct, there are ï¬ve different manually-designed prompts to explicitly instruct LLMs to rewrite the existing simple instructions into complex ones us- ing in-depth methods (i.e., including more infor- mation on particular topics) or in-Breadth methods (i.e, improving topics/information coverage). The resulting WizardLM model is ranked top in the MT- Bench (Zheng et al., 2023) and AlpacaEval (Dubois et al., 2023). Luo et al. (2023) further expand this idea | 2307.12966#24 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
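
A hedged sketch of one Evol-Instruct rewriting step from the 2307.12966 survey chunks above, where a simple instruction is rewritten with either an in-depth or in-breadth meta-prompt; the two templates are paraphrases for illustration, not the five exact prompts of Xu et al. (2023b).

```python
import random

# Paraphrased meta-prompts for illustration only.
IN_DEPTH = ("Rewrite the following instruction so that it requires deeper "
            "reasoning or adds constraints on the same topic:\n{instruction}")
IN_BREADTH = ("Write a brand-new instruction of similar difficulty that covers "
              "a different topic than the following one:\n{instruction}")

def evolve_instruction(instruction, llm):
    """Return a more complex (in-depth) or more diverse (in-breadth) instruction."""
    template = random.choice([IN_DEPTH, IN_BREADTH])
    return llm(template.format(instruction=instruction)).strip()
```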
2307.12856 | 25 | snippets relevant to the current plan. As a core module of WebAgent, HTML-T5 finetuned with self-generated demonstrations takes task instructions (e.g. please search 2 bedroom and 2+ bathroom houses in new york, ny with a max price of $7500 on real estate website), sub-instruction histories (e.g. go to real estate website, type in new york, ny into search, click on search, click on price, click on max rent), and raw HTML as inputs. Then, it predicts the next sub-instruction (e.g. type in 7500 into max rent) and the corresponding data-ref attributes to extract the snippet with XPath instead of naively decoding the raw snippet. In the later experiments in Section 4.1, we will demonstrate that linking HTML summarization into sub-instruction prediction is important for real-world web automation performance.
3.3 GROUNDED PROGRAM SYNTHESIS | 2307.12856#25 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
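
A sketch of the snippet-extraction step in the 2307.12856#25 chunk above: HTML-T5 emits data-ref attribute values, and the corresponding snippets are pulled from the raw HTML with an XPath query instead of being decoded token by token. The use of lxml here is an assumed implementation detail.

```python
from lxml import html as lxml_html

def extract_snippets(raw_html, data_refs):
    """Return HTML snippets whose elements carry the predicted data-ref values."""
    tree = lxml_html.fromstring(raw_html)
    snippets = []
    for ref in data_refs:
        # Query the predicted data-ref attribute rather than decoding raw HTML.
        for node in tree.xpath(f'//*[@data-ref="{ref}"]'):
            snippets.append(lxml_html.tostring(node, encoding="unicode"))
    return snippets
```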
2307.12950 | 25 | Table 3: Percentage of outputs preferred in GPT-4 binary evaluations when comparing RLCD to baselines for aligning LLaMA-7B. RLCD's outputs are substantially preferred by GPT-4 across all tasks and data simulation model scales, except comparing RLCD30B to RLAIF30B, where performance is relatively equal.
not sacrificing helpfulness compared to baselines. RLCD is also much better than baselines on helpfulness by itself as well as the somewhat more complex task of story outlining.
The gap between RLCD and all baselines is especially large when using LLaMA-7B for preference data simulation. Even our strongest baseline, RLAIF, comes close to RLCD only at 30B model scale for data simulation, while substantially underperforming at 7B model scale. Even at 30B scale, although GPT-4 prefers RLAIF30B to RLCD30B in some cases, humans always consider RLCD to be at least equal or better; GPT-4 evaluations may show greater disagreement with humans at this level of model quality (Appendix G). | 2307.12950#25 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 25 | (Zheng et al., 2023) and AlpacaEval (Dubois et al., 2023). Luo et al. (2023) further expand this idea to produce complex code and programming instructions from the simple ones and propose the WizardCoder model, which outperforms several strong commercial LLMs, e.g., Anthropic's Claude and Google's Bard. Gunasekar et al. (2023) propose to generate textbook-like instructions prompted with sufficient background knowledge to promote reasoning and basic algorithmic skills of LLMs. They find that the resulting 1.3B LLM phi-1 successfully outperforms various much larger LLMs, showing the importance of data quality.
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 26 | action. This difficulty level represents the challenge or desired level of success for the action. The Game Master (GM) assumes the responsibility of judging and guiding the player in performing the skill check based on the character's proficiency associated with the action. The GM then rolls a die to determine the outcome. In our task, we annotate the skill checks performed by players' characters during the games based on the semantic or recorded results provided by the GM. It is important to note that some skill checks are not predictable based solely on previous interactions. For example, in Call of Cthulhu (COC) games, perception checks may be prompted by the GM when players enter specific locations. These checks are closely tied to the game models and cannot be predicted in advance. Additionally, certain checks require additional calculations based on character attributes or cards, such as determining damage based on hit points or factoring in armor class to reduce damage. These calculations cannot be inferred solely from the game logs, and we also remove these checks in MOE. Thus, we have excluded any checks that are unpredictable and included only those check items that can be inferred from the game logs. For example, the COC logs contain 61 check items (including skills and attributes) that can be verified, while the DND logs contain 25 such items. Further details regarding the check items will be provided in the supplementary material. | 2307.12573#26 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 26 | 3.3 GROUNDED PROGRAM SYNTHESIS
Web automation on real-world websites suffers from the open-ended action space, compared to the simplified simulators (Shi et al., 2017; Yao et al., 2022a). Unlike previous works (Gur et al., 2019; Humphreys et al., 2022; Jia et al., 2019; Liu et al., 2018), real-world web agents cannot pre-define a categorical action space to specify which elements on the websites they should interact with. To overcome such open-domainness, we introduce an act-via-programming paradigm in web automation by leveraging the capability of LLMs on conditional code generation (Chen et al., 2021a; Liang et al., 2023). Given a few canonical examples for program generation, the next sub-instruction, and the extracted HTML snippet from HTML-T5, Flan-U-PaLM (Chowdhery et al., 2022; Chung et al., 2022) with 540B parameters decodes an executable Python program (Figure 3) using Selenium WebDriver, a library for browser automation. Such conditional program synthesis demands that LLMs are capable enough to not only generate code following natural language instructions, but also understand the semantics and functionality of HTML elements. We provide several Python snippet examples generated by Flan-U-PaLM as follows (we treat sub-instructions as comments in the script): | 2307.12856#26 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 26 | The difference between RLCD and baselines can be observed qualitatively as well. For instance, the example shown in Table 4 makes it abundantly clear why RLCD is so strongly preferred over our baselines on the helpfulness data at 7B model scale for preference data simulation. See Appendix I for additional, i.i.d. example outputs for both RLCD and baselines.
Human: What did Thomas Edison invent? Assistant:
LLaMA: An alkaline battery.
RLAIF7B: Thomas Alva Edison was an American inventor and businessman.
Context-Dist7B: The light bulb.
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 26 | Improving Output Quality Aside from the provision of high-quality instruction input, a critical requisite is to skillfully prompt LLMs to yield high-quality responses. The conventional method of enhancing response quality entails appending LLM prompts with additional conditions, encompassing the following facets.
(1) Reasoning-Provoking Conditions: Wei et al. (2022b) proposed the Chain-of-Thought (CoT) reasoning approach, which includes preconditions in the LLM prompts and generates the intermediate reasoning processes for complex problems, thereby assisting LLMs in problem-solving. Inspired by CoT, Mukherjee et al. (2023) developed the Orca model, which learns not only the superficial response text from LLMs, but also captures complex reasoning process signals. Specifically, they guided LLMs to respond to reasoning-intensive FLAN instructions with a series of pre-defined system prompts (e.g., "think step-by-step and justify your response"), spurring LLMs (e.g., GPT4) to disclose their reasoning process information. Thanks to these advancements, the Orca model significantly outperformed several powerful open-sourced LLMs. | 2307.12966#26 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 27 | # 3.4 Context Excerpt
Following the labeling of check items in the game logs, we proceed to excerpt relevant contexts associated with each check. Our selection of excerpted contexts is guided by three key principles to ensure the inclusion of long and diverse interactions. First, we ensure that the excerpted contexts encompass complete events within the game, such as the entire process of encountering enemies or the detailed information and clues leading up to the exploration of certain locations. This ensures that the extracted contexts provide a comprehensive understanding of the events. Second, we require that the excerpted contexts involve at least two characters who are present in the current scenarios of the game. This criterion allows for the examination of interactions between multiple characters, providing a more complex context for analysis. Lastly, we ensure that at least one character within the excerpted contexts has a skill check that can be predicted. This principle guarantees that the selected contexts contain situations where skill checks can be inferred based on the information available up to the last turn. By adhering to these principles, we ensure that the contexts support the understanding of the complex interactions and enable the inference of characters' intentions in subsequent turns.
# 3.5 Statistical Analysis | 2307.12573#27 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 27 | # Type in walnut creek, ca into search
driver.find_element(By.CSS_SELECTOR, '[data-ref="175"]').clear()
driver.find_element(By.CSS_SELECTOR, '[data-ref="175"]').send_keys("walnut creek, ca")

# Submit the search
driver.find_element(By.CSS_SELECTOR, '[data-ref="175"]').submit()

# Click on the apartments
driver.find_element(By.CSS_SELECTOR, '[data-ref="572"]').click()

# Scroll down housing type by 200px
driver.execute_script('getScrollParent(document.querySelector("#type-of-housing")).scrollBy({top: 200})')
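The snippets above assume an already-initialized Selenium WebDriver session bound to the name driver. The following is a minimal sketch of such a harness, under the assumption that generated snippets are executed with driver and By in scope; the URL and the example snippet string are illustrative placeholders, not taken from the paper.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser session (assumes a local ChromeDriver installation).
driver = webdriver.Chrome()
driver.get("https://www.example.com")  # placeholder URL

# A generated snippet such as the ones above can then be run with
# `driver` and `By` available in its namespace.
generated_snippet = 'driver.find_element(By.CSS_SELECTOR, "input").send_keys("walnut creek, ca")'
exec(generated_snippet, {"driver": driver, "By": By})

driver.quit()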
# 4 EXPERIMENTAL RESULTS | 2307.12856#27 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 27 | LLaMA: An alkaline battery.
RLAIF7B: Thomas Alva Edison was an American inventor and businessman.
Context-Dist7B: The light bulb.
RLCD7B: Edison is often remembered for inventing the light bulb, but he actually invented over a thousand different things throughout his lifetime, including the phonograph, electric power grid, microphone, video camera, and the motion picture camera. Edison was the first to take advantage of incandescent lighting in an industrial setting, opening up a new era of work environments in the 1870s and 1880s. His most well-known invention, however, was the incandescent light bulb, which he invented in 1879 and brought to market by 1880. He saw the potential for widespread use in homes, offices, and street lighting, and the bulb changed the way people lived and worked across the world. Edison had experimented with the incandescent light bulb since the mid-1870s, eventually perfecting the design with a filament made from carbonized thread.
Table 4: Outputs generated by baselines and RLCD for an example from our helpfulness test set. All methods use the same LLaMA-7B during preference data simulation, but RLCD's final output is clearly more comprehensive in this case.
# 5 ANALYSIS
We run two additional analyses to provide further insight on RLCD. | 2307.12950#27 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 27 | (2) Hand-crafted Guiding Principles: Sun et al. (2023b) introduced a self-alignment framework that incorporates 16 manually devised principle rules into input prompts, thereby steering LLMs towards generating useful, ethical, and reliable responses. To augment the impact of these rules, they employed the Chain-of-Thoughts (CoT) technology (Wei et al., 2022b), elucidating five examples to coach LLMs in discerning which rules to implement prior to generating actual response contents.
Chen et al. (2023e) devised a method to generate a set of role profiles using a blend of ChatGPT and manual efforts. They created seed instructions for each role profile and applied self-instruction to the combination of role profiles and instructions to obtain nuanced responses from LLMs. Xu et al. (2023a) proposed a two-stage instruction response framework in which an expert profile is initially generated based on the instructions to be answered, followed by using both the expert profile and actual instructions to prompt LLMs for high-quality responses. | 2307.12966#27 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 28 | # 3.5 Statistical Analysis
We present the statistical results of answers in MOE in Tab. 1. In total, we have extracted and labeled 1,003 sets of contexts and corresponding skill checks, which serve as the input context and ground truth for our task. The average number of turns in our dataset is 32.12, indicating its complexity compared to previous works that primarily focused on single-turn responses. Furthermore, we provide the distributions of skill check labels of the Dungeons and Dragons (DND) logs in the MOE task, as illustrated in Fig. 4 and Fig. 3. The number of characters involved in skill checks varies from 1 to 11, with an average of 1.696 characters per skill check. This reflects the complexity of multi-character interactions within our dataset, which is close to real human communication in the games. Additionally, the items for skill checks exhibit diversity, highlighting the varied interactions between players. Both sets of statistical results underscore the value of our task as a comprehensive testbed for understanding complex interactions in TRPGs.
Dataset (rules): DDD Corpus [13] (DND, ~4,430,000); DND PBP [3] (DND); GANDALF [25] (DND); Ours (DND, COC, PF, SW)
Table 1: Dataset statistics. Our dataset exhibits a comparable scale to previous works, while also encompassing a higher diversity of game rules.
# 4 Dataset | 2307.12573#28 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 28 | # 4 EXPERIMENTAL RESULTS
To study how a modular combination of LLMs under self-supervision enables real-world web automation by overcoming open-endedness and long context documents, we execute instruction-following tasks on real websites (Section 4.1). In Appendix E, we also test WebAgent on WebSRC (Chen et al., 2021b), a static HTML comprehension benchmark, compared to prior transformer models specialized for structured documents (Li et al., 2021b; Zhao et al., 2022). In addition, we quantify the performance of HTML-T5 itself on the simulated web benchmark MiniWoB++ and the offline task planning benchmark Mind2Web (Section 4.2).
Architectures | Attention Type | L = 2048 | L = 4096
Flan-T5-Base | Dense | 34.0% | 35.3%
Long-T5-Base | Local | 43.4% | 44.0%
Long-T5-Base | Local & Global | 53.1% | 53.6%

Span Length µ | real-estate | MiniWoB++
(no HTML-denoising) | 78.07 | 53.8%
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 28 | # 5 ANALYSIS
We run two additional analyses to provide further insight on RLCD.
5.1 PREFERENCE MODEL EVALUATION
For the harmlessness and helpfulness tasks, we evaluate RLCD's preference model compared to RLAIF's on 2000 gold human-labeled preference data examples from Bai et al. (2022a), based on the same prompts as used for preference data simulation. We check average binary prediction accuracy (i.e., whether the gold human-preferred output is assigned higher preference probability) as well as the average probability that each preference model assigns to the gold output.
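As a concrete illustration of these two metrics, the following is a minimal sketch (not code from the paper); it assumes the preference model is exposed as a hypothetical score_fn(prompt, response) returning a scalar score, with the probability of favoring the gold output computed in the usual Bradley-Terry fashion.

import math

def evaluate_preference_model(score_fn, examples):
    # examples: list of (prompt, gold_response, other_response) tuples,
    # where gold_response is the human-preferred output.
    correct, prob_sum = 0, 0.0
    for prompt, gold, other in examples:
        s_gold = score_fn(prompt, gold)
        s_other = score_fn(prompt, other)
        # Probability that the preference model favors the gold output.
        p_gold = 1.0 / (1.0 + math.exp(s_other - s_gold))
        correct += int(p_gold > 0.5)
        prob_sum += p_gold
    n = len(examples)
    # Returns binary accuracy and average probability assigned to the gold output.
    return correct / n, prob_sum / n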
Method | Harmlessness Acc. | Harmlessness Prob. | Helpfulness Acc. | Helpfulness Prob.
RLAIF7B | 35.6 | 0.492 | 60.6 | 0.508
RLCD7B | 52.4 | 0.516 | 64.4 | 0.601
RLAIF30B | 45.7 | 0.489 | 66.2 | 0.551
RLCD30B | 55.9 | 0.542 | 66.7 | 0.628
Table 5: Average binary accuracy and probability for favoring gold human-preferred output on harmlessness and helpfulness data, for RLAIF and RLCD preference models. RLCD's preference models perform better on both datasets. | 2307.12950#28 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 28 | Jiang et al. (2023) proposed monitoring the quality of instruction responses based on external LLM-based evaluations. They first fine-tune foundational LLMs with instruction data to obtain "student LLMs". Then, for each training instruction, they
gather responses from both teacher LLMs (e.g., ChatGPT) and student LLMs and prompt LLMs to conduct a pairwise evaluation of the quality of both responses. Instructions are retained only when the student LLMs' response falls short of that from the teacher LLMs.
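A minimal sketch of this filtering loop, assuming hypothetical callables student_respond, teacher_respond, and judge_prefers_teacher that stand in for the student LLM, the teacher LLM, and the LLM-based pairwise judge:

def filter_training_instructions(instructions, student_respond, teacher_respond, judge_prefers_teacher):
    retained = []
    for instruction in instructions:
        student_answer = student_respond(instruction)
        teacher_answer = teacher_respond(instruction)
        # Keep the instruction only when the judge finds the student's answer
        # worse than the teacher's, i.e., there is still something to learn from it.
        if judge_prefers_teacher(instruction, student_answer, teacher_answer):
            retained.append(instruction)
    return retained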
# 2.2.2 Multi-turn Instructions | 2307.12966#28 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 29 | Table 1: Dataset statistics. Our dataset exhibits a comparable scale to previous works, while also encompassing a higher diversity of game rules.
# 4 Dataset
To support our MOE task with more grounded and complex data, we have collected a new dataset. It is sourced from a Chinese TRPG forum2. This forum hosts a wide array of game records uploaded by users, spanning different rule systems, e.g., DND, COC, PF, SW, etc. Unlike play-by-post forums [3], where players interact by writing and posting responses, the game logs in this forum are compiled and summarized by the Game Masters (GMs) after the game sessions3. Besides, in play-by-post scenarios, interactions between players are not immediate, and the feedback from the next player may not appear until several days or even weeks later. In contrast, the majority of game logs in our dataset are derived from instant messaging platforms, including voice and text communication. This characteristic allows for the capture of abundant immediate player responses, closely mirroring daily conversations with grounded language interactions. Consequently, our dataset provides more grounded semantics within real-time communication, making it conducive for exploring AI agents. | 2307.12573#29 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 29 | Table 2: (Left) Architecture comparison on MiniWoB++ 12K dataset (Liu et al., 2018) with average success rate over 56 tasks. Local and global attention matches the hierarchical tree structure of HTML, and improves the success rate by over 18%, compared to the instruction-finetuned dense attention (Chung et al., 2022; Furuta et al., 2023). (Right) HTML-denoising comparison with different mixtures of span length (Raffel et al., 2020; Tay et al., 2022). We use LongT5-Base models for pre-training. HTML-denoising generally improves the performance on offline task planning on the real-estate website and the MiniWoB benchmark. Especially, using longer span lengths (µ ∈ {8, 6}) outperforms other choices, including the popular configuration in the natural language domain (µ ∈ {3, 8, 64} + Prefix LM objective), which can reduce the less meaningful predictions from shorter spans (e.g. µ = 3) and inject the structural bias of HTML better.
4.1 REAL-WORLD WEB AUTOMATION | 2307.12856#29 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 29 | As shown in Table 5, RLCD's preference models exhibit higher agreement with human preferences compared to RLAIF's, whether measured by binary accuracy or by probability of agreement.
Perhaps surprisingly, RLAIF's harmlessness preference models actually perform worse than chance, even for RLAIF30B, even though RLAIF30B performs quite reasonably downstream for mitigating harmful outputs (e.g., examples in Appendix I).6 In fact, this low agreement may not be entirely unexpected, as Bai et al. (2022b) also observe that both (1) few-shot prompting for the scoring LLM and (2) well over 10B model scale seem necessary to achieve higher than chance agreement with humans on harmlessness. It is also not impossible for RLAIF30B to successfully mitigate harm downstream despite low preference model agreement with humans, as human labels may also contain errors or biases. See Appendix C for further discussion, as well as experiments with a version of RLAIF using few-shot prompts for scoring.
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
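
The two agreement metrics mentioned above can be illustrated with a small sketch. This is not the authors' evaluation code; it assumes the preference model emits a probability that the first output of a pair is better, human labels are 1 when the first output is preferred, and "probability of agreement" is read as the average probability mass placed on the human-preferred side.

```python
from typing import List

def agreement_metrics(pm_probs: List[float], human_labels: List[int]):
    """Compare preference-model scores against human pairwise labels.

    pm_probs: model's probability that output A is preferred over output B.
    human_labels: 1 if the human preferred A, 0 if the human preferred B.
    (Hypothetical input format, for illustration only.)
    """
    assert len(pm_probs) == len(human_labels)
    n = len(pm_probs)
    # Binary accuracy: threshold the model probability at 0.5.
    binary_acc = sum(
        (p > 0.5) == (y == 1) for p, y in zip(pm_probs, human_labels)
    ) / n
    # Probability of agreement: average probability the model assigns
    # to whichever output the human preferred.
    prob_agreement = sum(
        p if y == 1 else 1.0 - p for p, y in zip(pm_probs, human_labels)
    ) / n
    return binary_acc, prob_agreement

print(agreement_metrics([0.9, 0.4, 0.7], [1, 0, 1]))  # -> (1.0, ~0.733)
```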
2307.12966 | 29 | In previous sections, we mainly focus on collecting synthetic single-turn instructions. However, LLMs well aligned with humans should be capable of interacting with users in a dialogue-based setting. To achieve this goal, some research efforts attempt to collect synthetic multi-turn instructions from strong LLMs. When aligning LLaMA with humans, Vicuna (Chiang et al., 2023) leverages instructions from ShareGPT, a website hosting interesting human-LLM joint conversations. However, ShareGPT requires large volumes of users to upload their conversations. Xu et al. (2023c) propose a novel Self-Chatting framework where questions from popular QA websites are used as the starting topics, then Chat-3.5 is prompted to chat with itself about this question in a four-turn dialogue. Li et al. (2023a) propose CAMEL, a "role-playing" framework where human annotators first provide a topic, then LLMs are separately prompted to be "AI Users" and "AI Assistants" to discuss this topic. Ji et al. (2023) take a step further and | 2307.12966#29 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
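
As an illustration of the role-playing collection pattern described above, here is a minimal sketch of a CAMEL-style two-role loop; `chat` is a hypothetical stand-in for whatever LLM API is used, and the prompts are paraphrased rather than the papers' exact templates.

```python
def chat(system_prompt: str, history: list[str]) -> str:
    """Hypothetical LLM call; replace with a real chat-completion API."""
    raise NotImplementedError

def simulate_dialogue(topic: str, num_turns: int = 4) -> list[str]:
    """Collect a synthetic multi-turn dialogue by letting two prompted
    roles ("AI User" and "AI Assistant") talk about a human-provided topic."""
    user_sys = f"You are an AI User. Ask questions and follow up about: {topic}"
    assistant_sys = f"You are an AI Assistant. Help the user with: {topic}"
    history: list[str] = []
    for _ in range(num_turns):
        user_msg = chat(user_sys, history)            # AI User speaks
        history.append(f"User: {user_msg}")
        assistant_msg = chat(assistant_sys, history)  # AI Assistant replies
        history.append(f"Assistant: {assistant_msg}")
    return history
```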
2307.12856 | 30 | 4.1 REAL-WORLD WEB AUTOMATION
Evaluation Methodology We first evaluate WebAgent on real-world navigation performance under human supervision, at real estate website (a platform for housing), social media website (a network of communities), and map website. These three websites have different properties. real-estate requires long-horizon planning (about 20 steps per episode) for complex form-filling with a few page transitions (at least 2 pages), and social-media needs shorter plans (about 10 steps per episode) with many page transitions (at least 4 pages) by selecting appropriate hyperlinks on the page. map is the easiest domain with shorter plans and a few page transitions. WebAgent receives natural language instructions (e.g. Can you search for a studio bedroom, 1+ bathroom apartments in oroville, ca for corporate housing on real estate website?, or Could you present the most new thread of Python community filtered by Tutorial tag on social media website?), and acts via planning, summarizing by HTML-T5, and then programming by Flan-U-PaLM (Figure 5). Through the self-experience supervision process, we curate 260 episodes on real estate website, 230 episodes on social media website, and 410 episodes on map website to finetune HTML-T5. | 2307.12856#30 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
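
The per-step loop described above (plan a sub-instruction with HTML-T5, summarize the page into task-relevant snippets, then let Flan-U-PaLM emit a Python/Selenium snippet that is executed in the browser) can be sketched roughly as follows. This is a schematic reconstruction, not the authors' implementation; `plan`, `summarize`, and `generate_code` are hypothetical wrappers around the respective models.

```python
from selenium import webdriver

def plan(instruction: str, history: list[str], html: str) -> str:
    """Hypothetical HTML-T5 call: predict the next sub-instruction."""
    raise NotImplementedError

def summarize(sub_instruction: str, html: str) -> str:
    """Hypothetical HTML-T5 call: extract task-relevant HTML snippets."""
    raise NotImplementedError

def generate_code(instruction: str, sub_instruction: str, snippet: str) -> str:
    """Hypothetical Flan-U-PaLM call: emit an executable Selenium program."""
    raise NotImplementedError

def run_episode(instruction: str, url: str, max_steps: int = 20) -> None:
    driver = webdriver.Chrome()
    driver.get(url)
    history: list[str] = []
    for _ in range(max_steps):
        html = driver.page_source                    # raw HTML observation
        sub = plan(instruction, history, html)       # closed-loop planning
        snippet = summarize(sub, html)               # task-relevant snippet
        program = generate_code(instruction, sub, snippet)
        exec(program, {"driver": driver})            # grounded program execution
        history.append(sub)
    driver.quit()
```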
2307.12950 | 30 | In any case, RLCD's learned preference models do not exhibit the same lower-than-chance human agreement as RLAIF's on the harmlessness prompts. Moreover, RLCD's preference models exhibit higher agreement with humans compared to RLAIF's on the helpfulness prompts as well. Even if the preference model's level of human agreement may not correlate perfectly to downstream performance, we suppose that high human agreement should be somewhat desirable in and of itself.
Finally, RLCD's preference models make judgments with higher polarity compared to RLAIF's, likely due to our use of discrete binary preference labels as opposed to continuous probabilities (Figure 1). We explore a version of RLAIF that also uses binary preference labels in Appendix B.
5.2 RESCORING VARIANT OF RLCD
We additionally investigate a variant of RLCD, RLCD-Rescore, in which we generate preference data (o+, o-) using our prompts p+, p- but re-label using the same scoring prompts as in RLAIF. We compare pairwise against RLCD on all three tasks using GPT-4. | 2307.12950#30 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
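
To make the contrast concrete, here is a minimal sketch of how a single preference pair could be produced under RLCD versus the RLCD-Rescore variant discussed above; `generate` and `score_first_better` are hypothetical LLM wrappers, and the prompt hints are placeholders rather than the paper's templates.

```python
def generate(prompt: str) -> str:
    """Hypothetical LLM sampling call."""
    raise NotImplementedError

def score_first_better(context: str, a: str, b: str) -> float:
    """Hypothetical RLAIF-style scoring prompt: P(a is better than b)."""
    raise NotImplementedError

def simulate_pair(context: str, positive_hint: str, negative_hint: str):
    o_pos = generate(context + positive_hint)   # e.g. "(give a harmless reply)"
    o_neg = generate(context + negative_hint)   # e.g. "(give a harmful reply)"

    # RLCD: the label is fixed by construction -- o_pos is preferred.
    rlcd_example = {"chosen": o_pos, "rejected": o_neg}

    # RLCD-Rescore: keep the same two outputs, but re-label them post hoc
    # with a scoring prompt, as in RLAIF.
    p = score_first_better(context, o_pos, o_neg)
    if p >= 0.5:
        rescore_example = {"chosen": o_pos, "rejected": o_neg}
    else:
        rescore_example = {"chosen": o_neg, "rejected": o_pos}
    return rlcd_example, rescore_example
```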
2307.12966 | 30 | to be "AI Users" and "AI Assistants" to discuss this topic. Ji et al. (2023) take a step further and prompt LLMs to first determine the conversation topic and then ask LLMs to chat with themselves to produce a dialogue corpus. Ye et al. (2023a) propose a novel revision-based multi-turn dialogue corpus. Specifically, after instructions and initial responses, they further prompt LLMs to generate feedback and the revised version of responses if necessary. They use this dataset to train the SelFee model and show that SelFee can effectively improve its own answers when prompted to do so without any external guidance. The UltraLLaMA model (Ding et al., 2023) leverages a wide range of real-world information, including (a) real-world knowledge from LLMs and Wikipedia; (b) various text creation tasks; (c) high-quality textual corpora, to produce initial questions and instructions that guide LLMs to generate diverse and high-quality multi-turn dialogues. | 2307.12966#30 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
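
A rough sketch of the revision-based generation loop described for SelFee above; `llm` is a hypothetical single-call wrapper, and the prompts and stopping signal are illustrative assumptions.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

def revision_dialogue(instruction: str, max_revisions: int = 2) -> list[dict]:
    """Build a multi-turn record: answer, self-feedback, and optional revisions."""
    turns = [{"role": "user", "content": instruction}]
    answer = llm(instruction)
    turns.append({"role": "assistant", "content": answer})
    for _ in range(max_revisions):
        feedback = llm(f"Give feedback on this answer to '{instruction}':\n{answer}")
        turns.append({"role": "feedback", "content": feedback})
        if "no revision needed" in feedback.lower():  # assumed stopping signal
            break
        answer = llm(f"Revise the answer using the feedback.\n"
                     f"Answer: {answer}\nFeedback: {feedback}")
        turns.append({"role": "assistant", "content": answer})
    return turns
```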
2307.12573 | 31 | We propose a three-step agent generation benchmark called "Think Before Speak" (TBS), which aims to guide Large Language Models (LLMs) in comprehending complex and lengthy contexts of interactions more accurately. Unlike simple template-based prompting approaches [11, 24, 8], our method takes into consideration the specific properties of Tabletop Role-Playing Games (TRPGs) and incorporates the principles of Chain of Thought (CoT) [21, 9] in its prompting design. In the generated check item, the answers consist of character names and corresponding skill names. However, directly expecting the models to produce accurate character and skill names is a challenging task. Using a single-step template prompting approach may result in LLMs generating characters that do not exist in the given contexts, characters with no relevant actions, mismatches between characters and their associated skills, or skills that are not defined within the game rules. To address these challenges, our method guides LLMs through a three-step process. Firstly, the models are prompted to identify the characters present in the current game scenarios. Then, they are encouraged to consider the intentions of the characters and | 2307.12573#31 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 31 | We prepare 20 different natural language instructions (see Appendix F for the full list), and measure the success rate and score for the evaluation. The score represents the percentage of required attributes covered during the episode (Yao et al., 2022a); for instance, (1) apartments for (2) corporate housing with (3) studio bedroom and (4) 1+ bathroom located in (5) oroville, ca, can be specified in the instruction. When the agent finds housing that satisfies (1), (2), (5) but not (3), (4), the score is 60 (= 100 × 3/5). If the agent achieves a score of 100, that episode is marked as a success. | 2307.12856#31 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
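
The score metric described above is simply attribute coverage; a minimal sketch, reproducing the worked example (3 of 5 required attributes covered gives 60, and success requires a score of 100):

```python
def episode_score(required: set[str], covered: set[str]) -> float:
    """Percentage of required instruction attributes covered in the episode."""
    return 100.0 * len(required & covered) / len(required)

required = {"apartments", "corporate housing", "studio bedroom",
            "1+ bathroom", "oroville, ca"}
covered = {"apartments", "corporate housing", "oroville, ca"}

score = episode_score(required, covered)
success = score == 100.0
print(score, success)  # 60.0 False
```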
2307.12950 | 31 | As shown in Table 6, RLCD substantially outperforms RLCD-Rescore at 7B model scale for preference data simulation, indicating that labeling o+, o- based on the initial prompts p+, p- used for output generation is much more effective compared to the post hoc rescoring used in RLAIF. At least in the settings we examine, LLaMA-7B appears to be more capable of generating contrasting outputs o+, o- than labeling them after the fact.
However, rescoring becomes a viable alternative at 30B scale, as the scoring LLM becomes more capable of labeling examples closer to the boundary. At such model scales, it may also be possible to run a version of RLCD that mixes labels from the two options (RLCD and RLCD-Rescore), or to use
6On the other hand, RLAIF7B's downstream performance is quite poor, perhaps reflecting its preference model's low agreement with humans.
8 | 2307.12950#31 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12573 | 32 | Firstly, the models are prompted to identify the characters present in the current game scenarios. Then, they are encouraged to consider the intentions of the characters and list those who are likely to take action or are engaged in ongoing movements. Finally, we provide the models with a comprehensive set of possible skills derived from the game rules, allowing them to select the most appropriate character-skill combinations that the GM may ask the players to check. This gradual guidance facilitates more accurate and context-aware responses from the LLMs. | 2307.12573#32 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 32 | Results For comparison, we prepare three baselines, consisting of language model modules and a single LLM conditioned on different prompts per role, such as Flan-U-PaLM (Chung et al., 2022), Flan-U-PaLM with a planning language model (Flan-U-PaLM+P), and Flan-U-PaLM with a summarization language model (Flan-U-PaLM+S). If they do not use language model modules, prompted Flan-U-PaLM plans in an open-loop manner (Plan: *), and regular-expression-based retrieval summarizes given raw HTML (Sum: *). Table 1 shows that by leveraging planning and summarization language model modules, WebAgent achieves the best results: 65% success and 87.6% score on real-estate, 70% success and 85.8% score on social-media, and 80% success and 93.8% score on map, significantly outperforming single Flan-U-PaLM and variants with only partial language model modules (most of which achieve about 10-30% success). This result suggests that self-experience supervision notably improves the performance, and closed-loop planning grounded on HTML observations via finetuned domain language models is more suitable for open-ended web automation than open-loop planning with few-shot | 2307.12856#32 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 32 | 6On the other hand, RLAIF7B's downstream performance is quite poor, perhaps reflecting its preference model's low agreement with humans.
| Method | Harmlessness Prompts: Harm | Harmlessness Prompts: Help | Helpfulness Prompts: Help | Outlining Prompts: Qual |
|---|---|---|---|---|
| RLCD7B vs. RLCD-Rescore7B | 86.0 / 14.0 | 75.8 / 24.2 | 86.3 / 13.7 | 88.8 / 11.2 |
| RLCD30B vs. RLCD-Rescore30B | 54.6 / 45.4 | 53.2 / 46.8 | 47.3 / 52.7 | 36.4 / 63.6 |
Table 6: Percentage of outputs preferred in GPT-4 pairwise comparisons for RLCD vs. RLCD-Rescore variant (re-labeling outputs using RLAIF scoring prompts). RLCD dramatically outperforms RLCD-Rescore at 7B scale for preference data simulation, but rescoring becomes a viable alternative at 30B scale. | 2307.12950#32 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 32 | into Multilingual ones. One straightforward idea is to translate instruction inputs and outputs into the target languages. Chen et al. (2023e) propose two translation strategies: (a) Post-answering, which first translates the instruction inputs into the target language and then prompts strong LLMs to answer it. This could potentially preserve the specific cultural patterns embedded in the target languages, but the output quality may be low as existing strong LLMs are often English-dominated; (b) Post-translating, which first prompts strong LLMs to respond to the instructions in English, then translates both inputs and outputs. This approach could obtain high-quality output text, but loses the specific cultural information. Li et al. (2023b) follow the Post-answering strategy to construct instruction data for 52 popular languages using Google-Translate, then use these data to fine-tune LLaMA using the LoRA technology. An alternative solution is to mix several languages in a multi-turn dialogue. BayLing (Zhang et al., 2023c) introduces a set of multi-turn interactive | 2307.12966#32 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
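
The two translation strategies contrasted above can be summarized in a short sketch; `mt` (machine translation) and `llm` are hypothetical wrappers, not specific APIs.

```python
def mt(text: str, target_lang: str) -> str:
    """Hypothetical machine-translation call (e.g. an MT API)."""
    raise NotImplementedError

def llm(prompt: str) -> str:
    """Hypothetical strong, English-dominant LLM."""
    raise NotImplementedError

def post_answering(instruction_en: str, lang: str) -> tuple[str, str]:
    """Translate the instruction first, then answer in the target language.
    Keeps target-language/cultural framing, but answer quality may drop."""
    instruction = mt(instruction_en, lang)
    return instruction, llm(instruction)

def post_translating(instruction_en: str, lang: str) -> tuple[str, str]:
    """Answer in English first, then translate both sides.
    Higher output quality, but culture-specific information can be lost."""
    answer_en = llm(instruction_en)
    return mt(instruction_en, lang), mt(answer_en, lang)
```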
2307.12573 | 33 | Specifically, in the first step of our prompting approach, we guide the language models by providing a prompt such as "Based on the TRPG game record provided above, identify the characters or NPCs that exist in the current scenarios." This prompts the language model to recognize and understand the characters present in the given contexts. In the second step, we prompt the language models with a question like "Which character or NPC is expected to carry out activities next?" This encourages the models to delve deeper into the semantics of the contexts and infer the intentions of the characters. For the final step, we provide LLMs with all possible skills defined in the TRPG rules and guide
2 www.goddessfantasy.net 3 Most users whose logs are used in our study have agreed and provided their informed consent. We are
trying to contact all users so that they are informed of, and agree to, their participation in the research.
8 | 2307.12573#33 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
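
Putting the three steps together, a minimal sketch of the TBS prompting chain; `llm` is a hypothetical completion call, `game_log` is the TRPG record, and `rule_skills` is the list of skills defined by the game rules. The third-step prompt wording is assumed, since only the first two prompts are quoted above.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

def think_before_speak(game_log: str, rule_skills: list[str]) -> str:
    # Step 1: which characters/NPCs exist in the current scenario?
    chars = llm(game_log + "\nBased on the TRPG game record provided above, "
                "identify the characters or NPCs that exist in the current scenarios.")
    # Step 2: who is likely to act next?
    actors = llm(game_log + f"\nCharacters: {chars}\n"
                 "Which character or NPC is expected to carry out activities next?")
    # Step 3 (wording assumed): pick character-skill pairs from the rule-defined skills.
    skills = ", ".join(rule_skills)
    checks = llm(game_log + f"\nActive characters: {actors}\n"
                 f"Possible skills defined by the rules: {skills}\n"
                 "Which character-skill checks should the GM ask the players to make?")
    return checks
```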
2307.12856 | 33 | planning grounded on HTML observations via finetuned domain language models is more suitable for open-ended web automation than open-loop planning with few-shot LLMs. This trend is remarkable in real-estate (even Flan-U-PaLM+P achieves 50% success), where the longer planning horizon is needed to fulfill instructions. We also observe that coupling sub-instruction prediction with HTML summarization in language model modules plays a critical role in task success. The development of more capable planning modules to decompose the given instructions adaptively and accurately could help WebAgent improve the performance further. | 2307.12856#33 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 33 | a method such as PREADD (Pei et al., 2023) to modulate the control strength of the prompts p+, p- to obtain accurately labeled preference pairs closer to the classification boundary. On the other hand, it may also be the case that the larger effective context window requirement for post hoc labeling (Sec. 3.3) could cause RLCD-Rescore's performance to degrade compared to RLCD when o+, o- are much longer than in our current experiments, even when using LLaMA-30B.
# 6 DISCUSSION
In this work we have presented RLCD, a method for aligning LLMs using simulated pairwise preference data obtained from prompting the same LLM. RLCD follows a similar pipeline to RLAIF while adding ideas reminiscent of context distillation: we simulate pairwise preference data using a positive prompt p+ and a negative prompt p-, aiming to amplify the difference between outputs o+, o- by encouraging opposite-directional changes on a desired attribute such as harmlessness. Empirical results on three diverse alignment tasks across multiple model scales for preference data simulation confirm our intuitions that RLCD can be highly effective, outperforming both RLAIF and context distillation baselines. | 2307.12950#33 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 33 | tion is to mix several languages in a multi-turn dialogue. BayLing (Zhang et al., 2023c) introduces a set of multi-turn interactive translation instructions to simultaneously improve multilingual and instruction-following ability for LLMs. Specifically, each multi-turn instruction is essentially a translation task where users first ask LLMs to translate a sentence to another language, then the users gradually add additional requirements (e.g., could you only use 10 words?). This process naturally connects different languages as well as human preferences with LLMs. We also summarize how to effectively adapt English-oriented LLMs to other languages in Appendix A.1. | 2307.12966#33 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
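
A toy sketch of how one BayLing-style interactive translation instruction could be assembled; the turn contents and the `llm` wrapper are illustrative assumptions, not BayLing's actual data or code.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call used to fill in assistant turns."""
    raise NotImplementedError

def interactive_translation_dialogue(sentence: str, target_lang: str) -> list[dict]:
    turns = []
    req = f"Please translate into {target_lang}: {sentence}"
    turns.append({"role": "user", "content": req})
    turns.append({"role": "assistant", "content": llm(req)})
    # The user then gradually tightens the requirements across turns.
    follow_up = "Could you only use 10 words?"
    turns.append({"role": "user", "content": follow_up})
    turns.append({"role": "assistant", "content": llm(req + "\n" + follow_up)})
    return turns
```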
2307.12573 | 34 | trying to contact all users so that they are informed of, and agree to, their participation in the research.
| Prompting Method | GPT-3.5 CF | GPT-3.5 SF | GPT-4 CF | GPT-4 SF |
|---|---|---|---|---|
| template prompt | 42.02 | 15.30 | 43.21 | 15.93 |
| template prompt + zcot | 39.28 | 14.46 | 42.45 | 16.25 |
| char prompt + skill prompt | 50.43 | 14.78 | 53.55 | 16.79 |
| pre-char prompt + char prompt + statistic predictor | 53.32 | 5.03 | 57.94 | 5.03 |
| pre-char prompt + char prompt + skill prompt + zcot | 50.50 | 12.88 | 53.45 | 17.39 |
| pre-char prompt + char prompt + skill prompt | 53.32 | 15.91 | 57.94 | 20.02 |
Table 2: Comparison of different prompting methods and LLMs. Results show that our task is solvable but requires a stronger ability to understand grounded and complex semantics. | 2307.12573#34 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 34 | Error Analysis We also analyze the reason of failures by categorizing them into programming, planning, and summarization errors (Table 1). Programming error does not satisfy the given sub- instructions or HTML snippet. Planning error predicts sub-instructions conflicting with user instruc7
| Model | Train | Cross-Task Ele. Acc | Cross-Task Op. F1 | Cross-Task Step SR | Cross-Website Ele. Acc | Cross-Website Op. F1 | Cross-Website Step SR | Cross-Domain Ele. Acc | Cross-Domain Op. F1 | Cross-Domain Step SR |
|---|---|---|---|---|---|---|---|---|---|---|
| MindAct (Flan-T5-XL) | SL | 55.1 | 75.7 | 52.0 | 42.0 | 65.2 | 38.9 | 42.1 | 66.5 | 39.6 |
| MindAct (GPT-4) | ICL | 41.6 | 60.6 | 36.2 | 35.8 | 51.1 | 30.1 | 37.1 | 46.5 | 26.4 |
| HTML-T5-XL (ours) | SL | 76.4 | 78.8 | 71.5 | 68.4 | 71.0 | 62.2 | 73.0 | 75.6 | 67.1 |
Table 3: Offline action prediction performance in Mind2Web dataset. We leverage the cached candidate generation results and multi-choice QA formulation by following Deng et al. (2023). HTML-T5 significantly outperforms MindAct with Flan-T5 or GPT-4 across task/website/domain generalization in terms of all the metrics (element accuracy, operation F1, and step success rate). | 2307.12856#34 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 34 | However, despite our strong empirical results, we think that RLCD only scratches the surface of what is possible when modifying the automatic preference data simulation procedure for RLHF pipelines. For instance, across the several experimental settings in this work, our current RLCD approach benefits from intuitively pushing o+ and o− farther apart to reduce label noise. In cases where reranking outputs post hoc is easy, or where one has a sufficiently strong scoring LLM to provide accurate labels even close to the classification boundary, one could alternatively attempt to create harder training examples by intentionally pushing o+ and o− closer together compared to whatever RLAIF achieves by random chance. Additionally, it could prove useful to simulate preference labels in formats other than a single binary label, such as by ranking more than two outputs at a time or using more fine-grained annotations on longer outputs, and we are excited to investigate these and other possibilities for continuing to improve automatic data simulation procedures for LLM alignment.
# LIMITATIONS | 2307.12950#34 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
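The RLCD chunks above describe preference pairs (o+, o−) that are generated from contrasting positive and negative prompts and labeled by construction, with the discussion focusing on how far apart the two outputs should be pushed. Below is a minimal, self-contained sketch of that simulation step; the prompt wording and the `generate` stub are illustrative assumptions, not the authors' actual templates or code.

```python
# Toy sketch of contrast-distillation preference-pair construction.
# Assumption: `generate` stands in for any LLM sampling API; the positive and
# negative prompt phrasings below are illustrative, not the paper's exact text.

def generate(prompt: str) -> str:
    """Placeholder LLM call; swap in a real model's sampling API."""
    if "unhelpful" in prompt:
        return "I don't know, figure it out yourself."
    return "Sure, here is a careful, step-by-step answer ..."

def make_preference_pair(user_query: str) -> dict:
    p_plus = f"(Give a helpful, honest answer.) {user_query}"
    p_minus = f"(Give an unhelpful, dismissive answer.) {user_query}"
    o_plus, o_minus = generate(p_plus), generate(p_minus)
    # The pair is labeled by construction: the output sampled under the
    # positive prompt is always marked as preferred, so no scoring LLM is used.
    return {"query": user_query, "chosen": o_plus, "rejected": o_minus}

if __name__ == "__main__":
    print(make_preference_pair("How do I patch a bicycle tire?"))
```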
2307.12966 | 34 | # Instruction Data Management
As discussed above, there are extensive approaches focusing on generating high-quality instructions from different sources. Naturally, it becomes critical to effectively manage all of this instruction data for LLM alignment.
Instruction Implications Several studies focus on the implications of instruction data. Ji et al. (2023) demonstrate that an increment in the total count of training instructions can be advantageous for standard NLP tasks (e.g., information extraction, classification, Closed QA, summarization). Yet, it bears negligible influence on complex reasoning tasks such as Math, Code, CoT, and Brainstorming. Intriguingly, Muennighoff et al. (2023) discover that adding approximately 50% of programming instructions not only leaves unaffected
the general conversational performance but also enhances the reasoning prowess of LLMs. In parallel, Ghosal et al. (2023) observe that integrating FLAN-style instructions with synthetic instructions from ChatGPT/GPT-4 effectively enhances LLMs' reasoning and problem-solving capacity. | 2307.12966#34 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 35 | Table 2: Comparison of different prompting methods and LLMs. Results show that our task is solvable but demands a higher ability to understand grounded and complex semantics.
them to generate character names that correspond to the potential skill checks. Our prompts for this step include phrases such as "What skills are required for the mentioned characters to carry out their respective activities?" Furthermore, to facilitate comparison with other benchmarks, we extend the TBS approach to also generate utterances to simulate a real-human GM. Given the predictions from the TBS model, LLMs are required to generate responses in the tone and style of a GM. We achieve this by using prompts such as "As a game master for a TRPG game, generate responses based on the provided character names and the corresponding skills."
# 6 Experimental Results
In this section, we provide a detailed discussion, comprehensive evaluation, and analysis of our benchmark. | 2307.12573#35 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
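The Tachikuma chunk above describes a two-stage prompting scheme: first asking which skills the mentioned characters need, then asking the LLM to reply as a game master given the predicted characters and skills. A minimal sketch of how such prompts could be assembled is below; the `call_llm` stub and the exact prompt strings are illustrative assumptions rather than the paper's templates.

```python
# Illustrative two-stage prompting sketch: (1) ask for skill checks per
# character, (2) ask for a GM-style response conditioned on those checks.
# `call_llm` is a stand-in for any chat-completion API.

def call_llm(prompt: str) -> str:
    return "[model output for]\n" + prompt  # placeholder

def skill_check_prompt(context: str, characters: list[str]) -> str:
    names = ", ".join(characters)
    return (
        f"{context}\n"
        f"What skills are required for the mentioned characters ({names}) "
        f"to carry out their respective activities?"
    )

def gm_response_prompt(characters_to_skills: dict[str, str]) -> str:
    checks = "; ".join(f"{c}: {s}" for c, s in characters_to_skills.items())
    return (
        "As a game master for a TRPG game, generate a response based on the "
        f"provided character names and the corresponding skills: {checks}."
    )

if __name__ == "__main__":
    ctx = "Aria tries to pick the lock while Bram distracts the guard."
    print(call_llm(skill_check_prompt(ctx, ["Aria", "Bram"])))
    print(call_llm(gm_response_prompt({"Aria": "Sleight of Hand", "Bram": "Deception"})))
```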
2307.12856 | 35 | tions, and a summarization error fails to extract the relevant HTML snippets for the given sub-instructions. From the website perspective, the failures on real-estate concentrate in planning because of its long-horizon nature. map also fails in planning when confusing the starting point and the destination. In contrast, social-media tends to fail in programming due to ambiguous sub-instructions or summarization including redundant hyperlinks, which results in transitioning to wrong pages or clicking unexecutable elements. From the method perspective, WebAgent often fails in planning by predicting incorrect sub-instructions (for instance, in real-estate, WebAgent generates incorrect plans in 70% of failure episodes), while other baselines fail more often in the programming or summarization steps. This observation indicates that, through the self-experience supervision, the ratio of programming and summarization errors has decreased, while the fundamental difficulty of planning, which requires consistent and accurate prediction over a long horizon without error accumulation, still remains.
4.2 ABLATION OF HTML-T5 | 2307.12856#35 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 35 | # LIMITATIONS
While we have carefully investigated the effectiveness of RLCD compared to several baselines on three tasks for LLaMA-7B, and even run experiments with preference data simulation using LLaMA-30B, state-of-the-art pretrained LLMs are still much larger, and we have not yet empirically verified our conclusions when aligning larger pretrained LLMs. It would also be interesting to test other algorithms for leveraging preference data such as DPO (Rafailov et al., 2023).
The performance of both RLCD and baselines also depends on the prompts used for pairwise preference simulation and scoring, so the results could change with different prompts. While it is difficult to entirely eliminate the impact of prompt design on performance, we have attempted to limit this impact in our pairwise comparison experiments by matching the prompt contexts used in RLCD and baselines where possible (Appendix A). We use prompts with similar meanings for RLCD and RLAIF, and use the same p+ for context distillation as in RLCD. We also use zero-shot prompting throughout our experiments to avoid any influence from few-shot examples.
Additionally, all outputs in this work are limited to 300 tokens, which is sufficient for the overwhelming majority of model outputs in our tasks, but may be insufficient for tasks requiring longer context.
| 2307.12950#35 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 35 | Wang et al. (2023d) conduct a comprehensive analysis of the impacts of various instructions derived from different sources on factual knowledge, reasoning, coding, multilingual, and open-ended scenarios. They also reveal that instructions pertaining to CoT and Coding are vital for augmenting the reasoning capability of LLMs. Additionally, they ascertain that different instructions can affect different LLM capabilities. Therefore, a composite of all instruction types empowers the corresponding LLMs to reach their best overall performance, hinting at the need for more advanced instruction collection techniques and technologies. | 2307.12966#35 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 36 | # 6 Experimental Results
In this section, we provide a detailed discussion, comprehensive evaluation, and analysis of our benchmark.
Baseline Methods: As our baseline, we employ LLMs with template prompting, which have been utilized in previous studies. We specifically adapt two popular LLMs, GPT-3.5 and GPT-4 [1]. By incorporating different language models, we can thoroughly assess the performance of our prompting benchmark. Furthermore, recent research [21, 9, 20] has demonstrated the efficacy of Chain-of-Thought (CoT) methods in improving understanding capabilities. To compare with this approach, we include the zero-shot CoT (zcot) method [9] in our evaluation.
Additionally, to demonstrate the ability to infer check items, we introduce a statistical predictor for check items. Given the predicted characters, we select the skills with the highest probability based on the statistical distribution observed in our dataset. This statistical predictor serves as a lower bound for generating check items and also reveals the impact of any biases present in our dataset. | 2307.12573#36 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
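The statistical predictor described in the baseline chunk above picks, for the predicted characters, the skills that are most probable under the empirical distribution of the dataset. A minimal sketch of such a frequency-based lower bound is below; the toy log format and the per-character conditioning are assumptions made only for illustration.

```python
# Minimal sketch of a frequency-based skill predictor used as a lower bound:
# for each character, predict the skill with the highest empirical frequency
# in the training logs. The `training_logs` format is an illustrative assumption.
from collections import Counter, defaultdict

training_logs = [
    ("Aria", "Stealth"), ("Aria", "Stealth"), ("Aria", "Perception"),
    ("Bram", "Persuasion"), ("Bram", "Athletics"), ("Bram", "Persuasion"),
]

def fit_statistics(logs):
    counts = defaultdict(Counter)
    for character, skill in logs:
        counts[character][skill] += 1
    return counts

def predict_skill(counts, character, fallback="Perception"):
    if character not in counts:
        return fallback  # unseen character: fall back to a globally common skill
    return counts[character].most_common(1)[0][0]

if __name__ == "__main__":
    stats = fit_statistics(training_logs)
    for name in ["Aria", "Bram", "Cato"]:
        print(name, "->", predict_skill(stats, name))
```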
2307.12856 | 36 | 4.2 ABLATION OF HTML-T5
In addition to the evaluation as part of the WebAgent system, we extensively examine HTML-T5 on (i) the generalization to other websites with the Mind2Web dataset (Deng et al., 2023), (ii) the performance on MiniWoB++, a standard web automation benchmark (Liu et al., 2018; Shi et al., 2017), and (iii) its architecture and pre-training objective. We adopt 16K tokens for the context window unless otherwise mentioned. We also evaluate HTML-T5 on the pre-training dataset and model initialization, offline task planning with self-generated real-estate traces, and the description generation benchmark (Gur et al., 2022) to test HTML understanding on a static dataset in Appendix H. | 2307.12856#36 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
2307.12950 | 36 | Additionally, all outputs in this work are limited to 300 tokens, which is sufficient for the overwhelming majority of model outputs in our tasks, but may be insufficient for tasks requiring longer context.
It remains to be investigated how well RLCD works when generating much longer outputs during preference data simulation.
Finally, all of our prompts, data, and outputs are in English, and we have not tested in other languages. Performance may be worse especially in low-resource languages for which pretrained LLMs have seen less data during pretraining.
# ETHICS
Strong general-purpose methods for improving and controlling language models pose a risk of dual use. In this work, we focus on the harmlessness and helpfulness tasks from Bai et al. (2022a;b); advancements on the harmlessness task especially have significant potential to mitigate risks associated with deploying strong language models. Our story outline task, based on creative writing, is also relatively innocuous.
Additionally, as discussed in Limitations, our experiments in this work are solely in English, and performance could be worse in lower-resource languages.
# ACKNOWLEDGEMENTS | 2307.12950#36 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 36 | Instruction Quantity Another critical question in instruction data management is the optimal quantity of instruction data required for effective LLM alignment. AlShikh et al. (2023) address this question by introducing a novel early-stopping criterion known as IFS. The premise of IFS rests on the observation that, given an input textual prefix, foundational LLMs typically predict ensuing tokens and generate "continuation-like" outputs, while fully instruction-tuned LLMs interpret the input prefix as questions, thereby generating "answer-like" outputs. IFS is quantified as the proportion of "answer-like" outputs within all its outputs given the instructions. The researchers train an external classifier to discriminate between "continuation-like" and "answer-like" outputs, concluding that LLaMA necessitates approximately 8K instructions to achieve a high IFS score. More instructions could potentially induce a semantic shift in the foundational LLMs. Zhou et al. (2023) similarly discern that merely 6K high-quality instructions suffice | 2307.12966#36 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
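IFS, as summarized in the chunk above, is the fraction of a model's outputs that an external classifier judges "answer-like" rather than "continuation-like". A minimal sketch of that computation is below; the keyword-style stand-in for the classifier is an assumption for illustration, since the original work trains a real classifier for this decision.

```python
# Sketch of the IFS (instruction-following score) computation: the share of
# model outputs judged "answer-like" by an external classifier.

def is_answer_like(output: str) -> bool:
    # Placeholder heuristic; the original work trains an external classifier.
    text = output.strip()
    return text.endswith((".", "!", "?")) and not text.startswith("...")

def instruction_following_score(outputs: list[str]) -> float:
    if not outputs:
        return 0.0
    answer_like = sum(is_answer_like(o) for o in outputs)
    return answer_like / len(outputs)

if __name__ == "__main__":
    sampled_outputs = [
        "Paris is the capital of France.",       # answer-like
        "... and then the quick brown fox",      # continuation-like
        "You should preheat the oven to 180C.",  # answer-like
    ]
    print(f"IFS = {instruction_following_score(sampled_outputs):.2f}")
```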
2307.12573 | 37 | Evaluations: To evaluate the effects of the MOE and TBS frameworks on interaction understanding, we introduce the concept of a virtual Game Master (GM) in TRPGs. The virtual GM serves as a simulation of a real-human GM, possessing the ability to comprehend interactions, infer intentions, interact with players, and provide guidance for their actions. This role fulfills our criteria for agents that are able to understand complex interactions. By incorporating the virtual GM, we create a platform to assess the agents' understanding of complex interactions and their ability to navigate diverse scenarios. In detail, we generate GM utterances using both ground truth information from C2A and predictions from TBS. The generation process follows the methodology outlined in [26, 11], which leverages LLMs, template prompts, and additional inputs for characters and skills. | 2307.12573#37 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
2307.12856 | 37 | Mind2Web Mind2Web (Deng et al., 2023) is an action-annotated real-world dataset with over 2K instructions collected from 137 websites. It provides action prediction tasks that measure the generalization of LLMs across tasks, websites, and their domains (e.g. travel, shopping). Conditioned on the top-50 HTML snippet candidates, the task instruction, and the action history, LLMs should predict the next-step action by choosing a target element to interact with in a multi-choice QA format and generating the operation, such as click, type, or select option. We finetune HTML-T5-XL with the training dataset. The performance is evaluated with element accuracy, operation F1, and step success rate, which requires both element and operation correctness. Table 3 reveals that HTML-T5 significantly outperforms baselines with Flan-T5-XL or GPT-4 (OpenAI, 2023) across task/website/domain generalization, increasing element accuracy by 20-30%, operation F1 by 5-10%, and step success rate by 20-30%. This highlights that HTML-T5 can handle real-world web automation tasks better and generalizes beyond our real-world evaluation with 3 websites. | 2307.12856#37 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
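The Mind2Web chunk above evaluates action prediction with element accuracy, operation F1, and a step success rate that counts a step as correct only when both the selected element and the generated operation match. A minimal sketch of the element-accuracy and step-success-rate part of that scoring is below; the record layout is assumed for illustration and operation F1 is omitted for brevity.

```python
# Sketch of per-step scoring in a Mind2Web-style setting: a step succeeds
# only if the chosen element AND the predicted operation both match the
# reference. The record format here is an illustrative assumption.

def element_accuracy(steps):
    return sum(s["pred_element"] == s["gold_element"] for s in steps) / len(steps)

def step_success_rate(steps):
    ok = sum(
        s["pred_element"] == s["gold_element"] and s["pred_op"] == s["gold_op"]
        for s in steps
    )
    return ok / len(steps)

if __name__ == "__main__":
    steps = [
        {"pred_element": "btn_42", "gold_element": "btn_42",
         "pred_op": "CLICK", "gold_op": "CLICK"},             # full success
        {"pred_element": "input_7", "gold_element": "input_7",
         "pred_op": "TYPE london", "gold_op": "TYPE paris"},  # element only
        {"pred_element": "link_3", "gold_element": "link_9",
         "pred_op": "CLICK", "gold_op": "CLICK"},             # wrong element
    ]
    print("element accuracy:", round(element_accuracy(steps), 3))
    print("step success rate:", round(step_success_rate(steps), 3))
```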
2307.12950 | 37 | Additionally, as discussed in Limitations, our experiments in this work are solely in English, and performance could be worse in lower-resource languages.
# ACKNOWLEDGEMENTS
We thank our colleagues at Meta AI and the Berkeley NLP group for their helpful discussions and feedback. This work was supported by Meta AI, Berkeley AI Research, Open Philanthropy, DARPA under the SemaFor program (HR00112020054), the Machine Common Sense (MCS) program under Cooperative Agreement N66001-19-2-4032, and the NSF through a fellowship to the first author. The content does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred.
# REFERENCES
Anthropic. Introducing claude, 2023. URL https://www.anthropic.com/index/introducing-claude.
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021. | 2307.12950#37 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
2307.12966 | 37 | semantic shift in the foundational LLMs. Zhou et al. (2023) similarly discern that merely 6K high-quality instructions suffice to align with human preferences. Motivated by these findings, researchers are investigating high-quality instruction selection. Cao et al. (2023) aim to identify predictive features of high-quality instructions. Initially, they extract representative features from the instruction dataset, then utilize these instructions to fine-tune LLMs. The feature importance is based on the model's performance. Their experiments demonstrate the better performance of LLMs trained on the resultant instructions. Differently, Chen et al. (2023b) propose using ChatGPT | 2307.12966#37 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |
2307.12573 | 38 | Rather than relying on metrics based on captioning in previous works [25, 26], we employ subjective evaluation conducted by real-human players. Given the diversity of descriptions in grounded language, there is no definitive ground truth for evaluating the responses of GMs. Subjective evaluation provides more valuable insights into the degree of realism in the generated utterances. Following [25, 18, 4, 23, 10], we invite volunteers to score the responses based on three factors: naturalness, groundedness, and factual correctness. Naturalness assesses the extent to which the generated responses resemble human-like language. Groundedness measures the degree to which the responses effectively employ grounded language similar to everyday communication. Lastly, factual correctness evaluates whether there are any factual errors or inconsistencies with the given contexts.
# 6.1 Objective Evaluation | 2307.12573#38 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. Besides, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents. | http://arxiv.org/pdf/2307.12573 | Yuanzhi Liang, Linchao Zhu, Yi Yang | cs.CL | Preliminary version of an ongoing work | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2212.10060"
},
{
"id": "2205.11916"
},
{
"id": "2304.03442"
},
{
"id": "2203.11171"
},
{
"id": "2212.10618"
},
{
"id": "2301.00234"
},
{
"id": "2303.18223"
},
{
"id": "2010.07375"
},
{
"id": "2212.10403"
},
{
"id": "2305.01528"
},
{
"id": "2305.16960"
}
] |
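For the subjective evaluation described in the chunk above, volunteers score each GM response on naturalness, groundedness, and factual correctness. A minimal sketch of aggregating such ratings into per-criterion mean scores is below; the 1-5 rating scale and the record layout are assumptions made only for illustration.

```python
# Toy aggregation of human ratings over the three criteria mentioned above.
# The 1-5 scale and the rating-record layout are illustrative assumptions.
from statistics import mean

CRITERIA = ("naturalness", "groundedness", "factual_correctness")

ratings = [
    {"naturalness": 4, "groundedness": 3, "factual_correctness": 5},
    {"naturalness": 5, "groundedness": 4, "factual_correctness": 4},
    {"naturalness": 3, "groundedness": 4, "factual_correctness": 4},
]

def aggregate(all_ratings):
    # Mean score per criterion, rounded for readability.
    return {c: round(mean(r[c] for r in all_ratings), 2) for c in CRITERIA}

if __name__ == "__main__":
    print(aggregate(ratings))
```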
2307.12856 | 38 | MiniWoB++ Here we evaluate HTML-T5 on simulated web environments, MiniWoB++, with 56 tasks, by running 100 evaluation episodes per task. We finetune HTML-T5 with 12K human demonstrations (Liu et al., 2018), and compare the average success rate to prior supervised-learned agents (Gur et al., 2022; Humphreys et al., 2022), LongT5, and its instruction-finetuned variants (Chung et al., 2022; Furuta et al., 2023) that we prepared. Table 4 shows that HTML-T5-XL significantly outperforms WebN-T5, the prior best model, by 18.7%. Notably, we demonstrate that HTML-denoising consistently improves the performance on top of LongT5 in all the model sizes, better than the instruction-finetuning introduced in prior work (Furuta et al., 2023). Furthermore, we finetune HTML-T5-XL with 347K demonstrations from Furuta et al. (2023), which performs better than the 11B-parameter Flan-T5-XXL even with 3B parameters, achieving 85.6% success. These results show that we successfully incorporate domain knowledge on HTML comprehension for web automation into pre-trained language models. | 2307.12856#38 | A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | Pre-trained large language models (LLMs) have recently achieved better
generalization and sample efficiency in autonomous web automation. However, the
performance on real-world websites has still suffered from (1) open domainness,
(2) limited context length, and (3) lack of inductive bias on HTML. We
introduce WebAgent, an LLM-driven agent that learns from self-experience to
complete tasks on real websites following natural language instructions.
WebAgent plans ahead by decomposing instructions into canonical
sub-instructions, summarizes long HTML documents into task-relevant snippets,
and acts on websites via Python programs generated from those. We design
WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new
pre-trained LLMs for long HTML documents using local and global attention
mechanisms and a mixture of long-span denoising objectives, for planning and
summarization. We empirically demonstrate that our modular recipe improves the
success on real websites by over 50%, and that HTML-T5 is the best model to
solve various HTML understanding tasks; achieving 18.7% higher success rate
than the prior method on MiniWoB web automation benchmark, and SoTA performance
on Mind2Web, an offline task planning evaluation. | http://arxiv.org/pdf/2307.12856 | Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust | cs.LG, cs.AI, cs.CL | null | null | cs.LG | 20230724 | 20231003 | [
{
"id": "2101.02235"
},
{
"id": "2302.13971"
},
{
"id": "2103.06410"
},
{
"id": "2305.08848"
},
{
"id": "2204.02311"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2106.15772"
},
{
"id": "2305.16653"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2105.13231"
},
{
"id": "2205.05131"
},
{
"id": "2204.00598"
},
{
"id": "2203.17189"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2105.11210"
},
{
"id": "2304.11477"
},
{
"id": "1906.01749"
},
{
"id": "2108.07732"
},
{
"id": "2202.00217"
},
{
"id": "2102.04664"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "1811.00937"
},
{
"id": "2303.08774"
},
{
"id": "1808.06226"
},
{
"id": "2010.12844"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2210.09261"
},
{
"id": "2306.07863"
},
{
"id": "2306.00245"
},
{
"id": "2305.16291"
},
{
"id": "2005.14165"
},
{
"id": "2305.11014"
},
{
"id": "2211.10435"
},
{
"id": "2210.03629"
},
{
"id": "1912.13318"
},
{
"id": "2108.13643"
},
{
"id": "2303.09752"
},
{
"id": "2103.07191"
},
{
"id": "1611.04230"
},
{
"id": "2209.07753"
},
{
"id": "2002.08155"
},
{
"id": "2212.10560"
},
{
"id": "2110.08518"
},
{
"id": "2210.03945"
},
{
"id": "2201.08239"
},
{
"id": "2204.01691"
},
{
"id": "1906.03741"
},
{
"id": "2203.02155"
},
{
"id": "2201.07207"
},
{
"id": "2004.08483"
},
{
"id": "1804.05685"
},
{
"id": "2206.10498"
},
{
"id": "2207.01206"
},
{
"id": "2105.09938"
},
{
"id": "2205.12255"
},
{
"id": "2305.11854"
}
] |
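To make the evaluation protocol in the MiniWoB++ chunk above concrete, here is a minimal, hypothetical sketch of the average-success-rate metric: each task is run for a fixed number of episodes, episode success is recorded as a binary outcome, and rates are averaged per task and then across tasks. The `run_episode` callable, the task list, and the seeding scheme are assumptions for illustration only, not part of the WebAgent or HTML-T5 codebase.

```python
# Hypothetical sketch of the MiniWoB++ evaluation loop described above:
# 100 episodes per task, binary success per episode, averaged per task
# and then across tasks. `run_episode` and `tasks` are placeholders.
from typing import Callable, Sequence

def average_success_rate(
    tasks: Sequence[str],
    run_episode: Callable[[str, int], bool],
    episodes_per_task: int = 100,
) -> float:
    per_task_rates = []
    for task in tasks:
        successes = sum(run_episode(task, seed) for seed in range(episodes_per_task))
        per_task_rates.append(successes / episodes_per_task)
    # Final score: mean success rate over all tasks (e.g., 56 tasks in MiniWoB++).
    return sum(per_task_rates) / len(per_task_rates)
```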
2307.12950 | 38 | Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b.
Jonathan D Chang, Kiante Brantley, Rajkumar Ramamurthy, Dipendra Misra, and Wen Sun. Learning to generate better than your llm. arXiv preprint arXiv:2306.11816, 2023.
Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023. | 2307.12950#38 | RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without
using human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning.
Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context
distillation (Huang et al., 2022) baselines across three diverse alignment
tasks--harmlessness, helpfulness, and story outline generation--and on both 7B
and 30B model scales for preference data simulation. | http://arxiv.org/pdf/2307.12950 | Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian | cs.CL, cs.AI | null | null | cs.CL | 20230724 | 20230818 | [
{
"id": "2302.13971"
},
{
"id": "2109.07445"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.03047"
},
{
"id": "2209.15189"
},
{
"id": "2306.15595"
},
{
"id": "1807.03748"
},
{
"id": "2202.00161"
},
{
"id": "1606.07947"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2204.05862"
},
{
"id": "2301.11270"
},
{
"id": "2112.00861"
},
{
"id": "2307.03172"
},
{
"id": "2206.11349"
},
{
"id": "2305.14387"
},
{
"id": "2210.11610"
},
{
"id": "2102.10960"
},
{
"id": "2306.11816"
}
] |
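The RLCD record above describes simulating preference pairs from contrasting prompts before preference-model training and RL. Below is a rough, hypothetical sketch of that data-simulation step; the `generate` callable, the principle wording, and the pair format are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of RLCD-style preference-pair simulation: the same base
# prompt is prefixed with a positive and a negative principle, one continuation
# is sampled from each, and the positively prompted output is labeled as
# preferred. `generate` is a placeholder for any LLM sampling function.
from typing import Callable, Dict

def simulate_preference_pair(
    base_prompt: str,
    generate: Callable[[str], str],
    positive_principle: str = "Respond as helpfully and harmlessly as possible.",
    negative_principle: str = "Respond in an unhelpful and careless way.",
) -> Dict[str, str]:
    chosen = generate(f"{positive_principle}\n\n{base_prompt}")
    rejected = generate(f"{negative_principle}\n\n{base_prompt}")
    # The simulated pair is later used to train a preference (reward) model,
    # which in turn provides the reward signal for RL fine-tuning.
    return {"prompt": base_prompt, "chosen": chosen, "rejected": rejected}
```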
2307.12966 | 38 | to directly assess the quality of instructions by assigning scores. They report that the LLM trained on the top 9K instructions notably outperforms those trained on the complete set of 52K Alpaca instructions.
# 3 Alignment Training
After collecting instructions from various sources, we then consider using these data to fine-tune existing foundational LLMs so that they align with humans. The native solution is Supervised Fine-Tuning (SFT). Specifically, given an instruction input x, SFT calculates the cross-entropy loss over the ground-truth response y as follows:
\mathcal{L}_{\mathrm{SFT}} = - \sum_{t} \log P_{\mathrm{LLM}}\big(y_t \mid x,\, y_{<t}\big) \qquad (1)
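As a minimal illustration of the SFT objective in Eq. (1), the sketch below computes the token-level cross-entropy over response tokens while masking out the instruction (prompt) positions. This is a hedged sketch under common causal-LM conventions; the PyTorch usage, the -100 ignore-index convention, and the tensor shapes are assumptions, not the survey's implementation.

```python
# Minimal sketch of the SFT loss in Eq. (1): negative log-likelihood of the
# ground-truth response tokens y_t, conditioned on the instruction x and the
# preceding response tokens y_<t. Prompt positions are masked out of the loss.
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # label value excluded from the cross-entropy loss

def sft_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """logits: [batch, seq_len, vocab]; labels: [batch, seq_len], with
    instruction positions set to IGNORE_INDEX so only response tokens count."""
    # Shift so that the logits at position t predict the token at position t+1,
    # the usual causal language-modeling setup.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    # cross_entropy averages over the unmasked (response) tokens; Eq. (1) writes
    # the sum, and the mean is the common implementation choice in practice.
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=IGNORE_INDEX,
    )
```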
Essentially, SFT helps LLMs to understand the semantic meaning of prompts and make meaningful responses. The main limitation of SFT is that it only teaches LLMs about the best responses and cannot provide fine-grained comparisons to suboptimal ones. However, it is worth noting that the SFT objective or SFT model parameters have also been integrated into many human preference training objectives to regularize and stabilize the training process of LLMs. We summarize the research efforts built on top of SFT into: Online human preference training, Offline human preference training, and Parameter-effective fine-tuning solutions. | 2307.12966#38 | Aligning Large Language Models with Human: A Survey | Large Language Models (LLMs) trained on extensive textual corpora have
emerged as leading solutions for a broad array of Natural Language Processing
(NLP) tasks. Despite their notable performance, these models are prone to
certain limitations such as misunderstanding human instructions, generating
potentially biased content, or factually incorrect (hallucinated) information.
Hence, aligning LLMs with human expectations has become an active area of
interest within the research community. This survey presents a comprehensive
overview of these alignment technologies, including the following aspects. (1)
Data collection: the methods for effectively collecting high-quality
instructions for LLM alignment, including the use of NLP benchmarks, human
annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed
review of the prevailing training methods employed for LLM alignment. Our
exploration encompasses Supervised Fine-tuning, both Online and Offline human
preference training, along with parameter-efficient training mechanisms. (3)
Model Evaluation: the methods for evaluating the effectiveness of these
human-aligned LLMs, presenting a multifaceted approach towards their
assessment. In conclusion, we collate and distill our findings, shedding light
on several promising future research avenues in the field. This survey,
therefore, serves as a valuable resource for anyone invested in understanding
and advancing the alignment of LLMs to better suit human-oriented tasks and
expectations. An associated GitHub link collecting the latest papers is
available at https://github.com/GaryYufei/AlignLLMHumanSurvey. | http://arxiv.org/pdf/2307.12966 | Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu | cs.CL | work in progress | null | cs.CL | 20230724 | 20230724 | [
{
"id": "2307.03109"
},
{
"id": "2305.14251"
},
{
"id": "2305.18290"
},
{
"id": "2305.13711"
},
{
"id": "2306.11644"
},
{
"id": "2306.09212"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.15011"
},
{
"id": "2304.08109"
},
{
"id": "2305.16264"
},
{
"id": "2305.10263"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2306.04751"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2210.05158"
},
{
"id": "2306.15895"
},
{
"id": "2301.13688"
},
{
"id": "2307.02053"
},
{
"id": "2306.04563"
},
{
"id": "2305.15067"
},
{
"id": "2305.14688"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2306.09296"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2307.08701"
},
{
"id": "2304.08177"
},
{
"id": "2303.16634"
},
{
"id": "2306.17492"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.16739"
},
{
"id": "1909.08593"
},
{
"id": "2304.14317"
},
{
"id": "2305.14233"
},
{
"id": "2304.00723"
},
{
"id": "2306.08568"
},
{
"id": "2304.03277"
},
{
"id": "2307.03692"
},
{
"id": "2305.14314"
},
{
"id": "2307.06290"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2305.16960"
}
] |