doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.06070
| 19 |
[Figure excerpt, OCR-garbled: an example action-prediction prompt. It shows a snippet of filtered webpage HTML (a navigation/sitelinks block with links such as "Collect Renaissance", "Shop Le Meridien", "Westin Store", and "Sheraton Store", a size <select> element, a "Description" button, a "Shop Feather & Down Pillow" product link, and footer links like "California Privacy Rights", "Privacy Statement", and "Terms of Use"), the previous action "[span] Pillow Protector -> CLICK", and a multiple-choice question asking what the next action should be, with the target answer Action: SELECT, Value: Queen.]
|
2306.06070#19
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 19 |
Figure 4: The correlation between explicit and implicit knowledge instillation using the entropy and KL-divergence metrics for the T5 language model. The mismatch regions are identified as before.
Hallucination, which refers to incorrect, nonsensical, or incoherent facts in the generated text, can hinder the real-world adoption of LLMs in various applications. Here, our objective is to investigate whether it is feasible to use our metrics to identify the facts that are likely to be hallucinated by the LLM. Our conjecture is that the hallucinated facts are typically those that the model has less information about, which is what we investigate here. We utilize entities, their associated facts, and the generated paragraphs obtained in the factual alignment experiments to examine the effectiveness of our metrics in accurately detecting fabricated facts.
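As a rough illustration of the entropy/KL-divergence measurement described above (not the paper's released code), the sketch below compares a model's next-token distribution before and after the target fact is injected into the prompt; the stand-in GPT-2 model, the prompt strings, and the `score_fact` helper are illustrative assumptions rather than the paper's T5 setup.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for the paper's T5 models
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def next_token_dist(prompt: str) -> torch.Tensor:
    """Probability distribution over the next token given a prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return F.softmax(logits, dim=-1)

def score_fact(query: str, fact: str) -> dict:
    """Entropy before/after in-context instillation of a fact, plus the KL divergence between the two."""
    p_before = next_token_dist(query)
    p_after = next_token_dist(fact + " " + query)  # explicit (in-context) knowledge instillation
    h_before = -(p_before * p_before.log()).sum().item()
    h_after = -(p_after * p_after.log()).sum().item()
    kl = F.kl_div(p_after.log(), p_before, reduction="sum").item()  # KL(p_before || p_after)
    return {"H_before": h_before, "H_after": h_after, "KL": kl}

# A fact the model has little information about should show high entropy before
# instillation and a large KL shift once the fact is provided in context.
print(score_fact("The capital of France is", "Paris is the capital of France."))
```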
|
2306.06264#19
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 19 |
mol. repr. & framework    G4(MP2) Atomization Energy       Δ-ML
                          R²        MAD / eV               R²        MAD / eV
SMILES: GPTChem           0.984     0.99                   0.976     0.03
SELFIES: GPTChem          0.961     1.18                   0.973     0.03
SMILES: GPT2-LoRA         0.931     2.03                   0.910     0.06
SELFIES: GPT2-LoRA        0.959     1.93                   0.915     0.06
energies. Table II shows that good agreement could be achieved for the Δ-ML approach. This showcases how techniques established for conventional ML on molecules can also be applied with LLMs.
Importantly, this approach is not limited to the OpenAI application programming interface (API). With parameter-efficient fine-tuning (PEFT) using low-rank adaptors (LoRA) [47] of the GPT-2 model [48], one can also obtain comparable results on consumer hardware. These results make the LIFT approach much more widely accessible and allow research on the LIFT framework for chemistry without relying on OpenAI.
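A minimal sketch of such a LoRA setup with the Hugging Face peft library is shown below; the rank, scaling, and the regression-as-text example are illustrative assumptions, not the hackathon team's exact configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Low-rank adapters on the attention projections keep the trainable
# parameter count small enough for consumer hardware.
lora_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
                         lora_dropout=0.05, target_modules=["c_attn"])
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# LIFT frames property prediction as text completion, e.g. a (prompt, completion) pair:
example = ("What is the atomization energy of the molecule with SMILES CCO?",
           " <numeric value rendered as text>")  # completion is the target value as text
```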
b. Text2Concrete
|
2306.06283#19
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 19 |
The m records with the highest similarities are retrieved to form the exemplars in the prompt. The particular similarity function designed for each task set is detailed in Subsection 4.1.
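A minimal sketch of this retrieval step (the record fields and the `similarity` callable stand in for the task-specific design detailed in Subsection 4.1):

```python
def retrieve_exemplars(memory, task, observation, similarity, m=3):
    """Return the m experience records most similar to the current task and observation."""
    ranked = sorted(memory, key=lambda record: similarity(record, task, observation), reverse=True)
    return ranked[:m]
```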
The exemplar is supposed to demonstrate the format of the input and the output to the LLM. The input part usually comprises the task information and the observation, along with some interaction feedback or auxiliary information. The particular input format depends on the task domain and will be detailed in Subsection 4.1. The output part indicates the action decision. Specifically, we propose to present the action decisions in the form of "action advice" comprising both encouraged and discouraged actions rather than simply presenting an action to execute. This is motivated by the perspective that "reasoning is
|
2306.07929#19
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 20 |
Limited capability in grading math and reasoning questions. LLMs are known to have limited math and reasoning capability [10], which results in their failure to grade such questions because they do not know the correct answers. However, what is more intriguing is that they also show limitations in grading basic math problems which they are capable of solving. For instance, in Figure 13 (Appendix), we present an example of an elementary math question in which GPT-4 makes an incorrect judgment. It's worth noting that although GPT-4 can solve the problem (when asked separately), it was misled by the provided answers, ultimately resulting in an incorrect judgment. This pattern can also be seen in a reasoning question example in Figure 14 (Appendix). Both GPT-3.5 and Claude-v1 show a similar weakness. In Section 3.4, we will introduce a reference-guided method to mitigate such issues.
# 3.4 Addressing limitations
We present a few methods to address position bias and the limited grading ability for math questions.
|
2306.05685#20
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 20 |
4 https://dart.dev/
5 https://dartpad.dev
6 The version released on March 1st, 2023, https://openai.com/blog/introducing-chatgpt-and-whisper-apis
7 GPT-4 was released on March 14th, 2023 (https://openai.com/research/gpt-4). While working on this article, we had no access to the GPT-4 API.
Table 1: Summaries of the exercises we analyzed. The "Count" column lists the number of help requests for each exercise.
Count  Exercise name                     Exercise description
66     Difference between two numbers    Writing a program that reads in two numbers and prints out their difference.
57     Asking for a password
47     Average of entered numbers
42     Counting positive numbers
40     Authentication
40     Verification of input
36     On calculating an average
34     Searching from a phone book
31     Fixing a bit!
31     Average distance of long jumps
31     Sum between
28     Count of entered numbers
28     Explaining the number
23     First and last name
21     In reverse order
|
2306.05715#20
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 20 |
Figure 5: This figure illustrates our intervention workflow, where the foreground object was repositioned in the intervened output. When modifying the representations at a chosen layer (in red), we interrupt the LDM before the layer forwards its activation ϵθ(l,t). ϵθ(l,t) is updated so the probing classifier's prediction changes to match the modified label. The updated ϵ′θ(l,t) then replaces the original activations, and we resume the denoising process. Since the LDM uses the latents denoised from the previous step as input, activations after the intervened layers are also affected by the modification (highlighted in yellow). We adopt a similar intervention scheme for the depth representation.
[Figure: histograms of intervention outcomes against a null baseline — count of interventions versus the Dice coefficient [0, 1] for the intervention on the saliency representation, and versus the root mean squared error for the intervention on the depth representation.]
Representations      Saliency (Dice ↑)    Depth (RMSE ↓)
Pre-intervention     0.46                 1.02
Post-intervention    0.69                 0.63
|
2306.05720#20
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 20 |
On the basis of the annotated results, we conducted a manual annotation to further enhance the dataset's quality. The manual annotations comprised the following requirements for each question: (1) Identification of questions that cannot be resolved solely with textual information. These questions may need extra information from images, audio, or other sources. Annotators are offered additional payment for identifying any unanswerable questions. These questions are excluded from the Xiezhi Benchmark. (2) Verification of the annotation results of ChatGPT. The verification consists of identifying errors made by ChatGPT or tagging more disciplines to the questions. Annotators are also offered additional bonuses if they make suitable changes to the results made by ChatGPT.
# Question Generation
Xiezhi comprises 80k multiple-choice questions generated from academic surveys, as they frequently encompass well-established domain knowledge. To ensure the quality of the generated question, we first select the longest sentences from these surveys and then identify keywords using the OpenNER method Zhu et al. (2019), which are then masked to formulate the questions. To assemble the set of options for each question, the answers to all other questions in Xiezhi were sampled and combined with the standard answers for each respective question.
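An illustrative sketch of that masking step, with a placeholder `extract_keywords` standing in for the OpenNER tagger and distractors sampled from the answers of other questions:

```python
import random

def make_question(sentence, extract_keywords, answer_pool, n_options=4):
    """Mask a keyword in a long survey sentence to form a cloze-style multiple-choice question."""
    keyword = extract_keywords(sentence)[0]            # placeholder for the OpenNER step
    stem = sentence.replace(keyword, "____", 1)
    distractors = random.sample([a for a in answer_pool if a != keyword], n_options - 1)
    options = distractors + [keyword]
    random.shuffle(options)
    return {"question": stem, "options": options, "answer": keyword}
```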
# Auto Annotation
|
2306.05783#20
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 20 |
{i_k}_{k=1}^N = F(u), s.t. i_k ∈ I, k = 1, · · · , N. GPT4Rec [Li et al., 2023c] tunes a large language model to produce the title of the next item according to the user's behavior history via multi-beam generation. VIP5 [Geng et al., 2023] and GPTRec [Petrov and Macdonald, 2023] frame the next-item recommendation task as a generative task, and utilize a sequence-to-sequence model to generate the index of the next recommended item. Hua et al. [2023b] also explore better ways for item indexing (e.g., sequential indexing, collaborative indexing) in order to enhance the performance of such index generation tasks. Chen [2023], Wang and Lim [2023], Li et al. [2023g], and Hou et al. [2023b] apply LLMs to directly produce the final ranked list with an optional pre-filtered set of item candidates in the input prompts. This task highly relies on the intrinsic reasoning ability of LLMs. Besides, FaiRLLM [Zhang et al., 2023a] and UP5 [Hua et al., 2023a] intend to address the fairness issue when adapting LLMs for item generation tasks.
|
2306.05817#20
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 20 |
types, and Representational Harms), violent or non-consensual imagery or audio, and physically threatening language, i.e., threats to the lives and safety of individuals or groups of people. Although base systems cannot act on the content that is generated by them, they can still inflict harms upon viewers who are targeted, help normalize harmful content, and aid in the production of harmful content for distribution (e.g., misinformation and non-consensual imagery).
In an early example, Microsoft's Tay bot showed these exact vulnerabilities and generated violent language such as Holocaust denial and threats to women and people of color within 24 hours of its release [255]. Recent harms have proved fatal [268]. For these reasons, it is of the utmost importance that generative AI systems are evaluated for their potential to generate harmful content and how such content may be propagated without appropriate measures for identifying and addressing them.
What to Evaluate Cultural values can highlight specific prominent topics according to a given application and modality. For example, an image generation model prompted on politics can yield generations with disparate geographic and political-party, building, infrastructural, and figure representation, alongside ideological cues. Culturally sensitive topics can range from physical aspects of human appearance and health to less visible or descriptive aspects of human behavior and emotional expression. A non-exhaustive categorical framework and human-reviewed evaluations [228] can capture some aspects of culture.
|
2306.05949#20
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 20 |
[Figure excerpt, OCR-garbled: continuation of the example action-prediction prompt. Based on the HTML webpage above, the model is asked to complete the task "Search for queen-size pillow protectors from the Marriott Shop, add two to the cart and checkout". Previous actions include "[button] Special Offers -> CLICK", "[link] Shop Marriott Opens a new window -> CLICK", and "[menuitem] category pillows -> CLICK". The model must select the next action from multiple-choice candidates (A. None of the above, plus listed elements such as a size <select>, a "Description" button, and a "Shop Feather & Down Pillow" link), with the target answer Action: SELECT, Value: Queen.]
|
2306.06070#20
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 20 |
Before investigating the applicability of our metrics in factual alignment and detecting hallucination, we need to define a model that can predict whether a given fact appeared, didn't appear, or appeared incorrectly (hallucinated) in a given paragraph. To accomplish this, we fine-tune a RoBERTa-based (Liu et al., 2019) classifier by extracting facts from LAMA (Petroni et al., 2019) and their corresponding prompts from the T-REx dataset (Elsahar et al., 2018). T-REx provides prompts in the form of paragraphs, where the facts can appear explicitly or implicitly within these paragraphs. In order to gather data for the didn't-appear class, we replace the object of the fact by randomly sampling from all the objects connected to the subject of our target fact. Similarly, for the appeared-incorrectly class, we replace the object of the fact by randomly sampling from all the objects that appear in the graph with that relation. Our training, develop

               Appeared    Didn't Appear    Hallucinated
InstructGPT    100         86               82
FLAN-T5        92          95               81
Table 2: The accuracy of the discriminator in classifying facts according to their appearance in generated para- graphs is evaluated through a user study.
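A sketch of such a three-way fact-appearance classifier built on RoBERTa (the label names, example strings, and the omitted fine-tuning loop are illustrative, not the paper's released training code):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["appeared", "did_not_appear", "appeared_incorrectly"]
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=len(labels))

# One training instance pairs a verbalized fact with a paragraph that may (or may not) contain it.
inputs = tokenizer("Paris is the capital of France.",
                   "The capital and largest city of France is Paris.",
                   return_tensors="pt", truncation=True)
logits = model(**inputs).logits  # fine-tuned with cross-entropy over the three classes
```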
|
2306.06264#20
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 20 |
b. Text2Concrete
Concrete is the most used construction material, and the mechanical properties and climate impact of these materials are a complex function of the processing and formulation. Much research is focused on formulations of concrete that are less CO2 intensive. [49] To expedite the design process, e.g., by prioritizing experiments using ML predictions, data-driven methods have been investigated by Völker et al. [50] The Text2Concrete team (Sabine Kruschwitz, Christoph Völker, and Ghezal Ahmad Zia) explored, based on data reported by Rao and Rao [51], whether LLMs can be used for this task. This data set provides 240 alternative, more sustainable concrete formulations and their respective compressive strengths. From a practical point of view, one would like to have a model that can predict the compressive strength of the concrete as a function of its formulation.
Interestingly, the largest LLMs can already give predictions without any fine-tuning. These models can "learn" from the few examples provided by the user in the prompt. Of course, such a few-shot approach (or ICL, [20]) does not allow for the same type of
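A hedged sketch of what such a few-shot (in-context learning) prompt might look like; the feature names, units, and values are illustrative placeholders rather than the team's actual template or data.

```python
def build_icl_prompt(examples, query):
    """Few-shot prompt: each example maps a concrete formulation to its compressive strength."""
    lines = ["Predict the compressive strength (in MPa) of a concrete formulation."]
    for features, strength in examples:
        lines.append(f"Formulation: {features} -> Compressive strength: {strength} MPa")
    lines.append(f"Formulation: {query} -> Compressive strength:")
    return "\n".join(lines)

# Illustrative placeholder values only; the real prompt would draw examples
# from the 240-formulation data set mentioned above.
prompt = build_icl_prompt(
    [("cement 300 kg/m3, fly ash 100 kg/m3, w/c ratio 0.45", 42.0)],
    "cement 280 kg/m3, fly ash 120 kg/m3, w/c ratio 0.40",
)
```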
|
2306.06283#20
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 20 |
remembering" to exploit both successful and failed experiences. To form the output part in the exemplar, the actions with the highest Q value estimations from the retrieved record are given as the encouraged actions, while the actions with poor Q value estimations (e.g., zero or negative estimations) are given as the discouraged actions. It is believed that the advice with high value expectations can lead the LLM to follow the past success, while the advice with poor expectations will teach the LLM to avoid a similar failure. A clear depiction of the exemplar format can be found in Figure 4. Prompted by such exemplars, the LLM will also predict both encouraged and discouraged actions and speculate their Q values given a new input. The predicted Q values are used to select the optimal action; to be specific, the encouraged action with the highest Q value speculation will be executed in the environment.
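A minimal sketch of this advice-forming and action-selection logic; the record structure, the Q-value threshold, and the assumption that the LLM output has already been parsed into (action, Q) pairs are all illustrative.

```python
def advice_from_record(record_actions, q_threshold=0.0):
    """Split a retrieved record's (action, Q) pairs into encouraged / discouraged advice."""
    encouraged = [(a, q) for a, q in record_actions if q > q_threshold]
    discouraged = [(a, q) for a, q in record_actions if q <= q_threshold]
    return encouraged, discouraged

def select_action(predicted_encouraged):
    """Execute the encouraged action whose speculated Q value is highest."""
    return max(predicted_encouraged, key=lambda pair: pair[1])[0]
```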
|
2306.07929#20
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 21 |
# 3.4 Addressing limitations
We present a few methods to address position bias and the limited grading ability for math questions.
Swapping positions. The position bias can be addressed by simple solutions. A conservative approach is to call a judge twice by swapping the order of two answers and only declare a win when an answer is preferred in both orders. If the results are inconsistent after swapping, we can call it a tie. Another more aggressive approach is to assign positions randomly, which can be effective at a large scale with the correct expectations. In the following experiments, we use the conservative one.
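A minimal sketch of the conservative swap, assuming a hypothetical `judge(question, answer_a, answer_b)` call that returns "A", "B", or "tie":

```python
def judge_with_swap(judge, question, answer_1, answer_2):
    """Call the judge twice with swapped answer order; declare a winner only
    when both orders agree, otherwise call it a tie."""
    first = judge(question, answer_1, answer_2)   # answer_1 presented as "A"
    second = judge(question, answer_2, answer_1)  # answer_1 presented as "B"
    if first == "A" and second == "B":
        return "answer_1"
    if first == "B" and second == "A":
        return "answer_2"
    return "tie"
```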
Few-shot judge. We assess whether few-shot examples can improve consistency in the position bias benchmark. We select three good judgment examples using MT-bench-like questions, GPT-3.5 and Vicuna for generating answers, and GPT-4 for generating judgments. The examples cover three cases: A is better, B is better, and tie. As shown in Table 12 (Appendix), the few-shot judge can significantly increase the consistency of GPT-4 from 65.0% to 77.5%. However, high consistency may not imply high accuracy and we are not sure whether the few-shot examples will introduce new biases. Besides, the longer prompts make API calls 4× more expensive. We use the zero-shot prompt by default in our following experiments but leave an additional study in Appendix D.2.
|
2306.05685#21
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 21 |
(1) The exercise handout
(2) Starter code (where applicable)
(3) The student's code
(4) The help request text written by the student
(5) The model solution
(6) An additional passage of text that describes the context and asks for suggestions
During prompt engineering, we observed that the help request texts were unnecessary, as they were generally uninformative beyond indicating that the student was struggling. Another observation was that including the model solution in the prompt often led to a response explaining that solution and increased the chance of the solution being echoed in the response. Moreover, it appeared unnecessary to include trivial starter code (an empty function).
Of the prompting options that we explored, we deemed the following procedure the best: Begin the prompt with the exercise handout, followed by the student's code and a question. Explain the course context as part of the question. Write the question in the first person (so that the model is likelier to produce output that could be directly given to students). Include an explicit request that the model not produce a model solution, corrected code, or automated tests (even though the effect of this request is limited). Include non-trivial starter code and mark it as such in the prompt.
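A sketch of that assembly order; the surrounding strings are hypothetical placeholders rather than the authors' exact wording.

```python
def build_prompt(handout, student_code, starter_code=None):
    """Assemble a help-request prompt: handout, optional non-trivial starter code,
    the student's code, and a first-person question with explicit constraints."""
    parts = [handout]
    if starter_code and starter_code.strip():
        parts.append("Non-trivial starter code given to the student:\n" + starter_code)
    parts.append("My code so far:\n" + student_code)
    parts.append(
        "I am working on this exercise in an online beginner programming course. "
        "What issues does my code have? Please do not provide a model solution, "
        "corrected code, or automated tests."
    )
    return "\n\n".join(parts)
```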
|
2306.05715#21
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 21 |
Representations      Saliency (Dice ↑)    Depth (RMSE ↓)
Pre-intervention     0.46                 1.02
Post-intervention    0.69                 0.63
Figure 6: Quantitative results show that intervening on the saliency representation had causal effects on the model's outputs. The median Dice coefficient between modified salient object masks and synthetic masks of 1851 intervened outputs is 0.69 (vs. 0.46 for the null baseline). The median RMSE between modified depth maps and depth maps of 1855 intervened outputs is 0.63 (vs. 1.02 for the null baseline).
horizontal translations sampled uniformly from (−120, −90) ∪ (90, 120). We created 5 different d′ samples in the test set.
The intervention then modifies the LDM's representation so that the probing classifier's output, if using the modified representation as input, changes to d′_b. This is achieved by updating the internal representation ϵθ(l,t) using gradients from the probing classifier p_b. The weights of p_b are frozen.
g = ∂L_CE(p_b(ϵθ(l,t)), d′_b) / ∂ϵθ(l,t)    (3)
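A minimal PyTorch sketch of this update (a few explicit gradient steps are shown; the learning rate, step count, and probe interface are assumptions, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def intervene(activation, probe, target_labels, lr=1.0, steps=10):
    """Nudge an intermediate activation so a frozen probing classifier predicts the modified labels."""
    eps = activation.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(probe(eps), target_labels)  # L_CE(p_b(eps), d'_b)
        (grad,) = torch.autograd.grad(loss, eps)            # g from Eq. (3)
        eps = (eps - lr * grad).detach().requires_grad_(True)
    return eps.detach()  # replaces the original activation before denoising resumes
```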
|
2306.05720#21
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 21 |
# Auto Annotation
The objectives of auto annotation include the elimination of unanswerable questions and the assignment of relevant discipline labels to each question. For unanswerable questions, we extracted keywords from Xiezhi-Meta, such as "as shown in the figure below" or "as listed in the table" and so on, and excluded questions that contain any of these keywords from the collected data. For discipline labeling, we, similar to the previous subsection, input the detailed information of a question and the disciplines to ChatGPT to help us do the annotation. In addition, we trained a classifier using Xiezhi-Meta. This classifier is fine-tuned from a llama-7b model and outputs the similarity between the questions and each discipline. The current annotation results are a combination of the trained classifier and ChatGPT. Considering the high expense of GPT-series products, we intend to rely solely on the classifier for auto annotation in the future.
# 3.2.3 Xiezhi-Specialty & Xiezhi-Interdiscipline
|
2306.05783#21
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 21 |
Hybrid Task In hybrid tasks, the large language model serves in a multi-task manner, where both the item scoring and generation tasks could be handled by a single LLM through a unified language interface. The basis for supporting this hybrid functionality is that large language models are inherently multi-task learners [Brown et al., 2020; Ouyang et al., 2022]. P5 [Geng et al., 2022], M6-Rec [Cui et al., 2022] and InstructRec [Zhang et al., 2023b] tune encoder-decoder models for better alignment towards a series of recommendation tasks, including both item scoring and generation tasks, via different prompting templates. Other works [Liu et al., 2023a; Sun et al., 2023; Dai et al., 2023] manually design task-specific prompts to call a unified central LLM (e.g., the ChatGPT API) to perform multiple tasks, including but not restricted to pointwise rating prediction, pairwise item comparison, and listwise ranking list generation.
|
2306.05817#21
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 21 |
Hate, Toxicity, and Targeted Violence. Outputs ranging from safe to hurtful can be evaluated in the context of safe discussions, toxicity metrics [87, 182], hurtfulness [165], and level of offense [71] for language. Nonconsensual generations of existing people should be evaluated with the person themselves. Research toward approaches to characterizing harmful content is ongoing by modality [193].
Training data, including fine-tuning and other data can be examined to explain many of the behaviors of large data-driven generative systems, and particularly their potentially harmful behaviors; what associations in the training corpus led to toxic behaviors, whether generated information corresponds to trustworthy training sources, examining whether the data collection abides by ethical frameworks for the rights of data subjects, etc. Different levels of access and description of the training data can help answer these questions with due consideration for privacy needs [183].
|
2306.05949#21
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 21 |
Figure 5: Illustration of action prediction with LLMs.
candidate DOM elements from the webpage that best align with both the task description and the current step. We formulate the task query by concatenating the task description with previous actions. The textual representation of each candidate DOM element is derived from a combination of the elementâs tag, its textual content, and salient attribute values, as well as the textual representation of its parent and child elements. As shown in Figure 4, we pair each DOM element with the task query and feed it to an encoder-only LM through the cross-encoder architecture [28], yielding a matching score. At training time, we randomly sample negative elements from the webpage, and use the target element as the positive example. The matching score is passed through a sigmoid activation function and optimized with a binary cross entropy loss. At inference time, we score all elements in the webpage and pick the top-k elements with the largest logits as input to the second stage.
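As a concrete illustration of this first stage, the sketch below scores (task query, DOM element) pairs with a cross-encoder under a sigmoid plus binary cross-entropy objective; the checkpoint name, element serialization, and training example are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch of the first-stage candidate ranking: a cross-encoder scores each
# (task query, DOM element) pair; sigmoid + binary cross-entropy at training
# time, top-k logits at inference time. Checkpoint and texts are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=1  # a single matching logit
)

def score(query: str, element_text: str) -> torch.Tensor:
    """Return the raw matching logit for one (query, element) pair."""
    inputs = tokenizer(query, element_text, return_tensors="pt", truncation=True)
    return model(**inputs).logits.squeeze(-1)

query = "Book a hotel in Seattle. Previous actions: None"
positive = "<button> Search hotels </button>"  # ground-truth target element
negative = "<a> Privacy Statement </a>"        # randomly sampled negative

logits = torch.cat([score(query, positive), score(query, negative)])
labels = torch.tensor([1.0, 0.0])
loss = torch.nn.BCEWithLogitsLoss()(logits, labels)  # optimized during training
loss.backward()
# At inference, score all elements on the page and keep the top-k logits.
```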
# 3.2 Action Prediction with LLMs
|
2306.06070#21
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 21 |
Table 2: The accuracy of the discriminator in classifying facts according to their appearance in generated paragraphs is evaluated through a user study.
ment, and test sets consist of 5000, 1000, and 1000 samples, respectively. The discriminator achieves 90.4% accuracy on the test data.
To enhance the accuracy of our discriminator when applied to generated text, we only consider predictions with a confidence level exceeding 0.95. Additionally, we evaluate the accuracy of the discriminator on generated text in a user study by randomly selecting 100 instances for each class and model and asking three participants to classify the given fact and generated paragraph pairs. We then employ majority voting to determine the classification for each pair. The result of the user study is presented in Table 2, demonstrating that our discriminator achieves over 81% accuracy for all classes in both InstructGPT and Flan-T5 LLMs.
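Both filters are straightforward to express in code; the sketch below assumes simple data structures for the discriminator outputs and the three annotators' labels (illustrative assumptions, not the paper's implementation):

```python
# Confidence filtering of discriminator predictions and majority voting over
# three human annotations, as described above. Data layouts are illustrative.
from collections import Counter

def confident_predictions(preds, threshold=0.95):
    """preds: list of (label, confidence) pairs from the discriminator."""
    return [(label, conf) for label, conf in preds if conf > threshold]

def majority_vote(annotations):
    """annotations: three human labels for one (fact, paragraph) pair."""
    return Counter(annotations).most_common(1)[0][0]

print(confident_predictions([("appeared", 0.97), ("hallucinated", 0.80)]))
print(majority_vote(["appeared", "appeared", "didn't appear"]))  # "appeared"
```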
The results of the factual alignment and hallucination experiments can be found in Table 3 (results of measuring knowledge based on entropy are provided in the Appendix). The objective of this analysis is to identify relation types for which our metrics can potentially differentiate among the three classes: appeared, didn't appear, and hallucinated (appeared incorrectly) facts. In the table, we have highlighted
|
2306.06264#21
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 21 |
optimization as fine-tuning, and one can therefore expect it to be less accurate. However, Ramos et al. [34] showed that this method can perform well, especially when so few data points are available that fine-tuning is not a suitable approach.
For their case study, the Text2Concrete team found a predictive accuracy comparable to a Gaussian process regression (GPR) model (but inferior to a random forest (RF) model). However, one significant advantage of LLMs is that one can easily incorporate context. The Text2Concrete team used this to include well-established design principles, like the influence of the water-to-cement ratio on strength (Figure 1), into the modeling by simply stating the relationship between the features in natural language (e.g., "high water/cement ratio reduces strength"). This additional context reduced the outliers and outperformed the RF model (R2 of 0.67 and 0.72, respectively).
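As an illustration of how such "fuzzy" context can be injected, the sketch below simply prepends the natural-language rule to a LIFT-style prediction prompt; the wording and feature names are assumptions, not the Text2Concrete team's exact prompt.

```python
# Prepend a natural-language design principle to a LIFT-style prediction
# prompt. CONTEXT and the feature names are illustrative assumptions.
CONTEXT = "Note: a high water/cement ratio reduces compressive strength."

def build_prompt(features: dict) -> str:
    described = ", ".join(f"{name} = {value}" for name, value in features.items())
    return (
        f"{CONTEXT}\n"
        f"Given a concrete mix with {described}, "
        f"what is its compressive strength in MPa?"
    )

print(build_prompt({"water/cement ratio": 0.45, "fly ash (kg/m^3)": 120}))
```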
|
2306.06283#21
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 21 |
It is worth noting that the REMEMBERER agent requires only a limited number of training steps to achieve promising performance, which leads to a non-exhaustive action record within its memory. Consequently, instances may arise where there is only one action associated with a given context (g, ot), or the highest Q value remains deficient, or no sufficiently unfavorable action exists to discourage. In such cases, randomly sampled action advice is favored over encouraging an action with low expectations or discouraging an action with moderate expectations. Our ablation study in Subsection 4.4 sheds light on various strategies for generating advice in such scenarios.
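A rough sketch of this fallback logic, under assumed thresholds and memory layout (not the authors' implementation):

```python
# Choose encouraged/discouraged actions from the memory records for the
# current (goal, observation) context, falling back to randomly sampled
# advice when the evidence is insufficient. Thresholds are illustrative.
import random

def make_advice(records, q_good=0.5, q_bad=-0.1):
    """records: list of (action, q_value) stored for the current context."""
    if len(records) < 2:  # too few recorded actions: random advice only
        return (random.choice(records)[0] if records else None), None
    best = max(records, key=lambda r: r[1])
    worst = min(records, key=lambda r: r[1])
    encouraged = best[0] if best[1] >= q_good else random.choice(records)[0]
    discouraged = worst[0] if worst[1] <= q_bad else None
    return encouraged, discouraged

print(make_advice([("click[buy now]", 0.8), ("search[red shoes]", -0.3)]))
```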
# 4 Experiments & results
# 4.1 Experiment setup & implementation details
To assess the effectiveness of REMEMBERER, we evaluate it on two recent task sets on which LLM-based agents have shown promising performance: WebShop and WikiHow. All the experiments are conducted with the OpenAI GPT-3.5 API [Brown et al., 2020], using text-davinci-003.
|
2306.07929#21
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 22 |
Chain-of-thought and reference-guided judge. In Section 3.3, we have shown the LLM's limited capability in grading math and reasoning questions. We propose two simple methods to mitigate this issue: chain-of-thought judge and reference-guided judge. Chain-of-thought is a widely used technique to improve LLMs' reasoning capability [47]. We propose a similar technique to prompt an LLM judge to begin by answering the question independently and then start grading. The detailed prompt is in Figure 7 (Appendix). However, even with the CoT prompt, we find that in many cases the LLM makes exactly the same mistake as the given answers in its problem-solving process (see the example in Figure 15, Appendix), suggesting that the LLM judge may still be misled by the context. Hence, we propose a reference-guided method, in which we first generate the LLM judge's answer independently, and then display it as a reference answer in the judge prompt. In Table 4, we see a significant improvement in failure rate (from 70% to 15%) over the default prompt.
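A minimal sketch of the reference-guided flow, with ask_llm as a hypothetical stand-in for the judge API (the prompt wording is an illustrative assumption):

```python
# Reference-guided judging: the judge model first answers the question itself,
# and that answer is then shown as a reference in the grading prompt.
def ask_llm(prompt: str) -> str:
    return "<judge model output placeholder>"  # replace with a real API call

def reference_guided_judge(question: str, answer_a: str, answer_b: str) -> str:
    reference = ask_llm(f"Answer the following question:\n{question}")
    grading_prompt = (
        f"[Question]\n{question}\n\n"
        f"[Reference answer]\n{reference}\n\n"
        f"[Assistant A]\n{answer_a}\n\n"
        f"[Assistant B]\n{answer_b}\n\n"
        "Using the reference answer, judge which assistant is better: 'A', 'B', or 'tie'."
    )
    return ask_llm(grading_prompt)

print(reference_guided_judge("What is 13 * 17?", "221", "231"))
```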
Fine-tuning a judge model. We try fine-tuning a Vicuna-13B on arena data to act as a judge and show some promising preliminary results in Appendix F.
# 3.5 Multi-turn judge
|
2306.05685#22
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 22 |
A corresponding prompt template is in Figure 1. Using this template, we generated responses to our sample of 150 help requests. For temperature, a parameter that controls randomness in LLM responses, we used 0, which should yield the most deterministic responses and has been found to work well for feedback in prior work [45]. To explore the possibility of the model generating the responses in Finnish in addition to English, we created two versions. We thus generated a total of 600 responses to help requests (150 help requests × 2 languages × 2 models).
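A small sketch of the resulting generation loop, with generate and build_prompt as hypothetical stand-ins for the actual API call and the Figure 1 template:

```python
# Generation setup described above: 150 help requests x 2 prompt languages
# x 2 models, all at temperature 0. The helpers below are illustrative stubs.
def build_prompt(request: dict, language: str) -> str:
    return f"[{language}] ...Figure 1 template filled with {request['code']}..."

def generate(model: str, prompt: str, temperature: float = 0.0) -> str:
    return "<LLM response placeholder>"  # replace with a real API call

help_requests = [{"code": "void main() { print('Hei'); }"}]  # 150 in the study
responses = [
    (request, language, model, generate(model, build_prompt(request, language)))
    for request in help_requests
    for language in ("English", "Finnish")
    for model in ("Codex", "GPT-3.5")
]
print(len(responses))  # 150 x 2 x 2 = 600 in the study; 1 x 2 x 2 = 4 here
```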
3.3 Classification of Issues in Help Requests The help requests were first analyzed qualitatively, looking for issues in student code. We annotated the source code from the 150 help requests with issues that a teacher would provide feedback on. This was carried out by one of the researchers, who is the teacher responsible for the course, has more than a decade of experience in teaching introductory programming, and has specific experience
## Programming exercise handout
<optional starter code>
<handout>
## My code
<student's code>
## Question
|
2306.05715#22
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 22 |
g = ∂L_CE(p_b(ϵ_θ(l,t)), d′_b) / ∂ϵ_θ(l,t)   (3)
ϵ_θ(l,t) is updated using Adam [16] with gradients calculated by Eq. 3. In Section 4.1, we observed that the representation of salient objects formed very early on. Interventions on the later denoising process cannot effectively change the position of foreground objects. The decoder-side self-attention layers also performed better than the encoder-side ones in the early steps. Thus, during intervention, we modified the activations of the decoder-side self-attention layers at the first five steps. In a preliminary study, we experimented with intervening in different numbers of denoising
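A minimal sketch of this intervention, assuming a linear probe p_b and toy activation shapes (binary cross-entropy stands in for the probe's two-class loss L_CE; this is not the authors' released code):

```python
# Optimize the self-attention activation eps_theta(l, t) with Adam so that the
# probe's foreground prediction matches the modified label d'_b (Eq. 3).
# Probe, shapes, and hyperparameters are illustrative assumptions.
import torch

def intervene(activation, probe, modified_label, steps=50, lr=1e-2):
    activation = activation.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([activation], lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()  # stand-in for the probe's CE loss
    for _ in range(steps):
        loss = loss_fn(probe(activation), modified_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return activation.detach()

# Toy example: a linear probe over 320-dim activations on a 64x64 spatial map.
probe = torch.nn.Linear(320, 1)
activation = torch.randn(64 * 64, 320)
modified_label = torch.zeros(64 * 64, 1)  # push all pixels toward background
new_activation = intervene(activation, probe, modified_label)
```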
(Figure: examples of intervening on the salient-object representation; each row shows the original output, original label, modified label, intervened synthetic output, and new label, with foreground Dice scores between 0.59 and 0.83.)
|
2306.05720#22
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 22 |
# 3.2.3 Xiezhi-Specialty & Xiezhi-Interdiscipline
To ensure the validity of the evaluation results, we further propose two additional datasets, Xiezhi-Specialty and Xiezhi-Interdiscipline. The trajectory of LLM development tends to consolidate multiple capabilities within individual LLMs, which may consequently yield unanticipated interdisciplinary problem-solving proficiencies. The division of Xiezhi into the Specialty and Interdiscipline datasets is designed to correspond with this evolving trend. These datasets are derived from the original Xiezhi Benchmark with the exclusion of some sensitive questions (e.g., military science) and deeply Chinese-centric questions (e.g., Literary Chinese QA, ancient Chinese poetry completion). Based on a balanced sampling strategy, Xiezhi-Specialty is constructed by selecting questions involved in 3 disciplines or less, while Xiezhi-Interdiscipline includes questions tagged by 4 disciplines or more. Fig. 3 presents an instance of the Xiezhi-Specialty, while an instance of the Xiezhi-Interdiscipline is depicted in Fig. 4.
|
2306.05783#22
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 22 |
2.4 LLM for Pipeline Controller As the model size scales up, LLM tends to exhibit emergent behaviors that may not be observed in previous smaller language models, e.g., in-context learning and logical reasoning [Wei et al., 2022; Zhao et al., 2023]. With such emergent abilities, LLM is no longer just a part of the recommender system mentioned above, but could actively participate in the pipeline control over the system, possibly leading to a more interactive and explainable recommendation process. Chat-REC [Gao et al., 2023] leverages ChatGPT to bridge the conversational interface and traditional recommender systems, where it is required to infer user preferences, decide whether or not to call the backend recommendation API, and further modify (e.g., filter, rerank) the returned item candidates before presenting them to the user. RecLLM [Friedman et al., 2023] further extends the permission of LLM, and proposes a roadmap for building an integrated conversational recommender system, where LLM is able to manage the dialogue, understand user preference, arrange the ranking stage, and even provide a controllable LLM-based user simulator to generate synthetic conversations.
|
2306.05817#22
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 22 |
Limitations Evaluating cultural values requires examining an infinite list of topics that contribute to a cultural viewpoint. Human-led evaluations [173] for hateful and sensitive content can have a high psychological cost, as seen in content moderation labor (see 4.1.7 Data and Content Moderation Labor). The types and intensity of sensitive content that may be produced across modalities may vary. For example, the creation of hate speech and hateful imagery may overlap in their target, yet provide different levels of psychological distress in generated content. For evaluations which rely on a third party API, such as the many benchmarks which leverage Google Perspective API [182] for toxicity detection, it is important to make sure comparisons between models are standardized using the same version of the API to avoid reproducibility issues [185].
# 4.1.3 Disparate Performance
In the context of evaluating the impact of generative AI systems, disparate performance refers to AI systems that perform differently for different subpopulations, leading to unequal outcomes for those groups. A model that is trained on a dataset that is disproportionately skewed towards one particular demographic group may perform poorly for other demographic groups [43].
|
2306.05949#22
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 22 |
After obtaining the top-k candidates, we utilize the candidate set to prune the webpage snapshot and construct snippets that only include the selected candidates and their neighbours as inputs to an LLM. Recent studies [10, 13] have suggested that training LMs for discrimination rather than generation is more generalizable and sample-efficient for other grounding tasks. Inspired by that, we convert the task of element selection into a multi-choice question answering (QA) problem. Instead of generating the complete target element, the LM is trained to instead select from a list of options. For comparison, we also include a baseline that directly generates the target element based on the provided webpage snippet. In both cases, we directly let the LLM generate the operation, along with the additional value needed for some operations. An example is shown in Figure 5. We incorporate up to 5 candidate elements within each input, together with a None option, and partition the candidate set into several groups. During training, we construct the target sequence using ground-truth actions and fine-tune the model using a left-to-right language modeling objective. During inference, we divide the top-k candidates into multiple clusters of five options. If more than one option is selected after a round, we form new groups with the selected ones. This process repeats until a single element is selected, or all options are rejected by the model, i.e., the model chooses the None option for all groups.
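As a rough illustration of this inference loop, the sketch below groups candidates into batches of up to five options (plus a None option) and repeats the multi-choice query until one element survives or all options are rejected; ask_llm_choice is a hypothetical stand-in for the fine-tuned LLM.

```python
# Iterative multi-choice selection over the top-k candidates, as described
# above. The LLM call and snippet construction are illustrative stand-ins.
def ask_llm_choice(snippet, options):
    """Return the index of the chosen option, or None if the None option is chosen."""
    return 0 if options else None  # placeholder stand-in

def select_element(candidates, make_snippet):
    while len(candidates) > 1:
        survivors = []
        for i in range(0, len(candidates), 5):
            group = candidates[i:i + 5]
            choice = ask_llm_choice(make_snippet(group), group)
            if choice is not None:
                survivors.append(group[choice])
        if not survivors:
            return None              # the model rejected every group
        if survivors == candidates:  # guard against repeating identical groups
            return survivors[0]
        candidates = survivors
    return candidates[0] if candidates else None

snippet_fn = lambda group: "\n".join(group)  # prune the page to these elements
print(select_element(["<button id=1>", "<a id=2>", "<span id=3>", "<div id=4>",
                      "<img id=5>", "<li id=6>"], snippet_fn))
```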
|
2306.06070#22
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 22 |
Relations | InstructGPT: Appeared, Didn't Appear, Hallucinated | FLAN-T5: Appeared, Didn't Appear, Hallucinated
shares border with, official language, named after, part of, capital, diplomatic relation, sister city, continent, capital of, place of birth, genre, located in the admin territory, country, has part, religion, country of citizenship, field of work, occupation, position held, work location, instrument, place of death, position played, headquarters location, location of formation, employer, member of, instance of, developer, language of work or name, country of origin, original language of work, owned by
0.252 1.737 0.056 0.001 1.736 0.035 - 0.175 1.242 1.335 0.025 0.147 0.003 - - 1.999 0.333 0.119 0.938 0.116 0.017 0.461 1.41 0.564 0.827 0.004 0.056 - - - - - - 0.155 2.823 0.384 0.0 2.898 0.133 5.196 0.002 0.72 1.681 0.715 - - - - - - - - - - - - - - - - - - - - - - 0.162 2.407 0.158 0.017 1.68 0.339 1.621 0.078 0.793 2.501
|
2306.06264#22
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 22 |
The exciting aspect is that this is a typical example of domain knowledge that cannot be captured with a simple equation incorporable into conventional modeling workflows. Such "fuzzy" domain knowledge, which may sometimes exist only in the minds of researchers, is common in chemistry and materials science. With the incorporation of such "fuzzy" knowledge into LIFT-based predictions using LLMs, we now have a novel and very promising approach to leverage such domain expertise that we could not leverage before. Interestingly, this also may provide a way to test "fuzzy" hypotheses, e.g., a researcher could describe the hypothesis in natural language and see how it affects the model accuracy. While the Text2Concrete example has not exhaustively analyzed how "fuzzy" context alterations affect LLM performance, we recognize this as a key area for future research that could enhance the application of LLMs and our approach to leveraging "fuzzy" domain knowledge within materials science.
c. Molecule Discovery by Context
|
2306.06283#22
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 22 |
WebShop [Yao et al., 2022a] WebShop is a task set simulating a web store site. The agent is instructed to browse the site and shop for the target goods. The information of over 1M products is crawled from the Amazon store. About 12K product requests are re-written by crowd laborers to generate more diverse instructions. A score between 0 and 1 is assigned after shopping by assessing the correspondence between the product and the instruction. We followed Shinn et al. [2023] and conducted our experiments on the first 100 tasks from the same shuffled task list released along with the task set. At each interaction step, the LLM takes the web page representation and a list of available actions as input. The task instruction is omitted, for there is always an instruction present at the top of the web page. As there are no intermediate rewards during the episode, only the last 5 performed actions serve as procedure feedback. Inspired by the chain-of-thought technique [Wei et al., 2022] and the ReAct mechanism [Yao et al., 2022b], the LLM is prompted to predict a reason for its decision as the extra information depicted in Figure 4. The representation of the web pages is
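A small sketch of how such a step input could be assembled, with the page observation, the available actions, and the last five performed actions as procedure feedback (the exact prompt wording and action format are illustrative assumptions):

```python
# Assemble one interaction step's input for the LLM, as described above:
# observation + available actions + last 5 actions, with a reason requested
# before the action (ReAct-style). Wording is illustrative.
def build_step_input(observation, available_actions, action_history):
    recent = action_history[-5:]  # only the last 5 actions serve as feedback
    return "\n".join([
        "Observation:",
        observation,
        "Available actions: " + ", ".join(available_actions),
        "Recent actions: " + ("; ".join(recent) if recent else "none"),
        "Give a short reason, then output exactly one action.",
    ])

print(build_step_input(
    "WebShop results page for 'red running shoes', items B001..B005 listed",
    ["click[b001]", "click[next >]", "click[back to search]"],
    ["search[red running shoes]"],
))
```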
|
2306.07929#22
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 23 |
# 3.5 Multi-turn judge
In MT-bench, every question involves two turns to evaluate conversational abilities. Therefore, when comparing two assistants, it becomes necessary to present a total of two questions and four responses, complicating the prompt design. We explore two possible designs: (1) breaking the two turns into two prompts, or (2) displaying complete conversations in a single prompt. We find that the former can cause the LLM judge to struggle to locate the assistant's previous response precisely. We illustrate a case in Figure 16 (Appendix) where GPT-4 makes an inaccurate judgment due to a faulty reference. This suggests the necessity of displaying a complete conversation to enable the LLM judge to better grasp the context. We then consider the alternative design that presents two full conversations in a single prompt, in which we ask the LLM judge to focus on the second question (Figure 9, Appendix). This approach has been found to significantly alleviate the aforementioned referencing issue.
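A small sketch of this single-prompt design (the formatting is an illustrative assumption, not the benchmark's exact template):

```python
# Display both assistants' full two-turn conversations in one prompt and ask
# the judge to focus on the second question. Formatting is illustrative.
def multi_turn_judge_prompt(conv_a, conv_b) -> str:
    def render(name, conv):
        return f"<|{name} conversation|>\n" + "\n".join(
            f"User: {q}\n{name}: {a}" for q, a in conv
        )
    return (
        render("Assistant A", conv_a) + "\n\n" + render("Assistant B", conv_b)
        + "\n\nFocus on the second user question and judge which assistant "
          "answers it better: 'A', 'B', or 'tie'."
    )

print(multi_turn_judge_prompt(
    [("Write a poem.", "Roses are red..."), ("Now make it rhyme in couplets.", "...")],
    [("Write a poem.", "The sea at dawn..."), ("Now make it rhyme in couplets.", "...")],
))
```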
# 4 Agreement Evaluation
We study the agreement between different LLM judges and humans on MT-bench and Chatbot Arena datasets. On MT-bench, we also study the agreement among humans. MT-bench represents a small-scale study with controlled human evaluation, while Chatbot Arena represents a larger-scale study with crowdsourced human evaluation in the wild.
# 4.1 Setup
|
2306.05685#23
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 23 |
## Programming exercise handout
# <optional starter code>
# <handout>
# ## My code
# <student's code>
## Question
I am in an introductory programming course where we use the Dart programming language. I have been given a programming exercise with the above handout. I have written code given above. My code does not work as expected, however. Please provide suggestions on how I could fix my code so that it fulfills the requirements in the handout. Do not include a model solution, the corrected code, or automated tests in the response.
## Answer
# Figure 1: Our template for prompting the LLMs.
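As a small illustration, the template can be filled programmatically before being sent to an LLM; the helper below is a sketch (the function name and the handling of the optional starter code are assumptions, not part of the study's tooling).

```python
# Sketch: assemble the Figure 1 prompt from a handout, optional starter code,
# and the student's code. The helper itself is an illustrative assumption.
from typing import Optional

QUESTION = (
    "I am in an introductory programming course where we use the Dart "
    "programming language. I have been given a programming exercise with the "
    "above handout. I have written code given above. My code does not work as "
    "expected, however. Please provide suggestions on how I could fix my code "
    "so that it fulfills the requirements in the handout. Do not include a "
    "model solution, the corrected code, or automated tests in the response."
)

def build_help_request_prompt(handout: str, student_code: str,
                              starter_code: Optional[str] = None) -> str:
    parts = ["## Programming exercise handout"]
    if starter_code:
        parts.append(starter_code)
    parts += [handout, "## My code", student_code, "## Question", QUESTION, "## Answer"]
    return "\n".join(parts)
```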
of answering help requests in this course. We chose to annotate the help requests again instead of using existing answers to these help requests, as the help requests had been previously answered by a pool of teachers and teaching assistants, and we wanted a consistent baseline for the present analysis.
We then grouped the issues by high-level theme (e.g., logic error, I/O problem) and by sub-theme (e.g., arithmetic, formatting) and determined the themes' distribution over the exercises. These results are in Section 4.1.
|
2306.05715#23
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 23 |
[Figure 7 image grid; column labels: Modified Label, Original Output, Original Label, Intervened Output, New Synthetic Output Label; example scores: 0.60, 0.62, 0.63, 0.65, 0.69]
Figure 7: Column 1 & 2: model's original outputs and labels of salient objects or depth; Column 3: the modified labels used in intervention; Column 4: the intervened outputs (with contours of modified labels in green for intervention on saliency); Column 5: synthetic labels of intervention outputs. Dice and RMSE are measured between modified labels and synthetic labels of intervention outputs.
steps for saliency and depth representations (see Appendix D). For each intervened layer at the chosen denoising steps, we optimized ϵθ(l,t) so the probing prediction pb(ϵ′
To assess the effects of this intervention, we computed the Dice coefficient between the modified mask and synthetic mask of intervened output. If the saliency representation has a causal role in LDM, the salient regions of newly generated images should match the modified masks.
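For reference, the Dice coefficient between the modified mask and the synthetic mask of the intervened output can be computed as in the short sketch below; the smoothing constant and the binary-mask assumption are implementation details, not taken from the paper.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return float(2.0 * intersection / (a.sum() + b.sum() + eps))

# Example: compare a modified saliency mask with the synthetic mask of the
# intervened output (both H x W boolean arrays).
modified = np.zeros((64, 64), dtype=bool); modified[10:40, 10:40] = True
synthetic = np.zeros((64, 64), dtype=bool); synthetic[12:42, 12:42] = True
print(f"Dice: {dice_coefficient(modified, synthetic):.3f}")
```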
|
2306.05720#23
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 23 |
The hygroscopicity of textiles refers to the material's ( ) 1) ability to absorb water 2) waterproofness 3) ability to absorb oil 4) grease-proofness 5) old people ...... 50) 44 Answer: 1 Related Subject: Engineer, Textile Science and Engineering, Textile Engineer
Figure 3: An example question from Xiezhi-Specialty, which is related to Textile Science and Engineering. English translations are shown below the corresponding Chinese text for better readability.
The number of electrons in an atom is equal to the number of ( ) 1) proton 2) neutron 3) the sum of protons and neutrons 4) the difference between protons and neutrons 5) Hot Springs ...... 50) Typewriters Answer: 1 Related Subject: Science, Physics, Chemistry, Electronics Science and Technology, Nuclear Science and Technology
|
2306.05783#23
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 23 |
2.5 Discussion We can observe that the development trajectory of where to adapt LLMs to RS is fundamentally aligned with the progress of large language models. Back in 2021 and the early days of 2022, the parameter sizes of pretrained language models were still relatively small (e.g., 340M for BERT, 1.5B for GPT2-XL). Therefore, earlier works usually tend to either incorporate these small-scale language models as simple textual feature encoders, or as scoring/ranking functions finetuned to fit the data distribution from recommender systems.
As the model size gradually increases, researchers discover that large language models have gained emergent abilities (e.g., instruction following, reasoning), as well as a vast amount of open-world knowledge with powerful text generation capacities. Equipped with these amazing features
|
2306.05817#23
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 23 |
Data availability differs due to geographic biases in data collection [216], disparate digitization of content globally due to varying levels of internet access for digitizing content, and infrastructure created to support some languages or accents over others, among other reasons. Much of the training data for state-of-the-art generative models comes from the internet. However, the composition of this data reflects historical usage patterns; 5% of the world speaks English at home, yet 63.7% of internet communication is in English [197]. This has implications for downstream model performance, where models underperform on parts of the distribution underrepresented in the training set. For example, automatic speech recognition (ASR) models, which convert spoken language (audio) to text, have been shown to exhibit racial disparities [130], forcing people to adapt to engage with such systems [100]; this also has implications (see 4.2.3.2 Imposing Norms and Values) for accent representation in popular audio generation.
Interventions to mitigate harms caused by generative AI systems may also introduce and exhibit disparate performance issues [238]. For instance, automated hate speech detection driven by annotated data with an insensitivity to dialect differences can amplify harm to minority or marginalized groups by silencing their voices (see 4.2.2.1 Community Erasure) or incorrectly labeling their speech as offensive [67]. This therefore requires documenting which particular populations and norms the interventions seek to cover, and which they do not.
|
2306.05949#23
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06264
| 23 |
- - - - - - - - 0.162 2.407 0.158 0.017 1.68 0.339 1.621 0.078 0.793 2.501 0.028 0.005 0.007 0.004 5.938 0.584 0.309 0.367 0.91 0.355 0.012 0.135 0.136 - - - - - - - - - - 0.725 9.327 12.109 10.951 3.375 3.215 - 7.363 8.504 - - 4.862 2.84 - - 1.542 3.364 - 2.434 4.94 - 0.881 - 6.692 - 2.212 - - - - 1.838 0.489 0.165 1.147 6.787 11.232 9.13 6.33 1.956 9.903 5.378 8.275 9.144 - 4.945 5.93 - - - - - - 9.411 - 0.912 - - - - - 0.899 6.875 - - - - 0.64 - 7.941 13.083 9.599 3.45 - 5.938 7.207 7.618 3.862 6.233 1.739 10.635 - 2.631 6.093 5.662 8.29 3.687 7.387
|
2306.06264#23
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 23 |
c. Molecule Discovery by Context
Much context is available in the full text of scientific articles. This has been exploited by Tshitoyan et al. [52] who used a Word2Vec [53] approach to embed words into a vector space. Word2Vec does so by tasking a model to predict for a word the probability for all possible next words in a vocabulary. In this way, word embeddings capture syntactic and semantic details of lexical items (i.e., words). When applied to material science abstracts, the word embeddings of compounds such as Li2CuSb could be used for materials discovery by measuring their distance (cosine similarity) to concepts such as "thermoelectric". [54] However, traditional Word2Vec, as used by Tshitoyan et al. [52], only produces static embeddings, which remain unchanged after training. Word embeddings
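As a concrete illustration of this approach, a Word2Vec model trained on tokenized materials-science abstracts can be queried for the cosine similarity between a compound token and a concept word; the sketch below uses gensim, and the toy corpus, hyperparameters, and token choices are assumptions for illustration only.

```python
# Sketch: measure cosine similarity between compound tokens and a concept word,
# in the spirit of Tshitoyan et al. Assumes abstracts are already tokenized;
# the toy corpus and hyperparameters are placeholders.
from gensim.models import Word2Vec

corpus = [
    ["Li2CuSb", "shows", "promising", "thermoelectric", "behaviour"],
    ["Bi2Te3", "is", "a", "classic", "thermoelectric", "material"],
] * 50  # stand-in for a large corpus of tokenized materials-science abstracts

model = Word2Vec(sentences=corpus, vector_size=100, window=8, min_count=1, workers=4)

print(model.wv.similarity("Li2CuSb", "thermoelectric"))   # cosine similarity
print(model.wv.most_similar("thermoelectric", topn=5))    # nearest tokens
```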
[Figure panels: "Training a Prediction Model between vectorized Concrete Formulations (X) and Labels (Y)" vs. "In-Context Learning", where prompts contain {"input": ..., "output": ...} examples and GPT returns a response such as "T1=45 MPa"]
|
2306.06283#23
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 23 |
al., 2022b], the LLM is prompted to predict a reason for its decision as the extra information depicted in Figure 4. The representation of the web pages is simplified in the same way as in ReAct. The task similarity fg is calculated using the all-MiniLM-L12-v2 model from Sentence-Transformers [Reimers and Gurevych, 2019]. As it is noticed that the web pages in WebShop are instantiated from some templates, we categorize the web pages into four patterns and design a similarity lookup table to compute the observation similarity fo according to the web page patterns. Details of the similarity table are given in the supplementary material. It is observed that most of the tasks end in 5 steps, thus we directly conduct a full-trajectory expansion while performing multi-step bootstrapping:
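A sketch of the retrieval similarity described above is given below: the task similarity f_g comes from sentence embeddings and the observation similarity f_o from a lookup table keyed by web-page pattern. The table values, pattern names, and the way the two scores are combined are illustrative assumptions (the actual table is in the supplementary material).

```python
# Sketch of the record similarity used to retrieve experiences from memory.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L12-v2")

def task_similarity(goal_a: str, goal_b: str) -> float:
    emb = encoder.encode([goal_a, goal_b], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

# Hypothetical lookup table over four page patterns; values are placeholders.
OBS_SIMILARITY = {
    ("search", "search"): 1.0, ("results", "results"): 0.8,
    ("item", "item"): 0.8, ("other", "other"): 0.5,
}

def observation_similarity(pattern_a: str, pattern_b: str) -> float:
    return OBS_SIMILARITY.get((pattern_a, pattern_b), 0.0)

def record_similarity(goal_a, pattern_a, goal_b, pattern_b, w: float = 0.5) -> float:
    """Weighted combination of task and observation similarity (weight assumed)."""
    return w * task_similarity(goal_a, goal_b) + (1 - w) * observation_similarity(pattern_a, pattern_b)
```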
|
2306.07929#23
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 24 |
# 4.1 Setup
MT-bench. We generate answers for all 80 questions with 6 models: GPT-4, GPT-3.5, Claude-V1, Vicuna-13B, Alpaca-13B [38], and LLaMA-13B [39]. We then use 2 kinds of judges: LLM judges and 58 expert-level human labelers. The labelers are mostly graduate students so they are considered experts and more skilled than average crowd workers. We let LLM judges evaluate all pairs and let each human evaluate at least 20 random multi-turn questions. This resulted in around 3K votes for all questions. The detailed data collection process is in Appendix C.
Chatbot Arena. We randomly sample 3K single-turn votes from 30K arena data, which covers models including GPT-4, GPT-3.5, Claude, Vicuna-7B/13B, Koala-13B [16], Alpaca-13B, LLaMA- 13B, and Dolly-12B. We use two kinds of judges: LLM judges and collected crowd judges (2114 unique IPs).
|
2306.05685#24
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 24 |
3.4 Analysis of Help Request Responses The LLMs' responses to help requests were analyzed qualitatively and quantitatively. As Codex often produced surplus content (e.g., new questions and code examples), we cleaned up the data by automatically removing any subsequent content from the responses that repeated the prompt format.
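One way to implement such a cleanup is to cut each response at the first point where it begins to repeat a prompt-format heading, as in the sketch below; the specific marker strings are assumptions based on the template in Figure 1, not the exact ones used in the study.

```python
# Sketch: truncate an LLM response where it starts repeating the prompt format.
PROMPT_MARKERS = ["## Programming exercise handout", "## My code", "## Question"]

def strip_repeated_prompt_format(response: str) -> str:
    cut = len(response)
    for marker in PROMPT_MARKERS:
        idx = response.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return response[:cut].rstrip()
```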
We focused our analysis on seven aspects, listed below. For each response analyzed, we asked whether it ...
models × two languages), each of which we assessed in terms of the seven questions listed above. The results of this comparison are in Section 4.2.
3.4.2 Analysis of Responses and Issues. Since the initial analysis suggested that GPT-3.5 clearly outperforms Codex and that its performance is similar in English and Finnish, we focused our subsequent efforts on GPT-3.5's responses to English prompts. After analyzing the remaining 120, we had a total of 150 analyses of English responses from GPT-3.5. We combined the classification of issues (Section 3.3 above) with the analysis of the LLM responses, checking how the responses differed for requests that involved different kinds of issues. The results of this analysis are in Section 4.3.
|
2306.05715#24
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 24 |
We performed a similar intervention on the continuous depth representation. The depth map of the original output dc was translated with horizontal and vertical translations sampled from U(−120, −90) ∪ (90, 120) to generate the modified map d′c. Empty areas outside the translated depth map were filled with its edge values. As in the intervention for saliency, ϵθ(l,t) is updated using the gradients of probing regressor pc so its output matches d′c. We calculated the gradients using the same Eq. 3 with LCE, pb, and d′c. We intervened on all self-attention layers at the first three sampling steps. The intervention on the depth representations was more effective when modifying all self-attention layer activations. The effects of our interventions were measured by the RMSE between d′c and the depth map of the intervened output. If a causal role exists, the depth maps of new outputs should match the modified depth maps used in intervention.
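A minimal sketch of this depth-map modification and its evaluation is shown below, assuming scipy.ndimage.shift with nearest-edge filling stands in for the translation-plus-edge-padding step and that depth maps are plain 2-D arrays; the image size and random seed are placeholders.

```python
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(0)

def sample_translation() -> float:
    """Sample a translation magnitude from U(-120, -90) or (90, 120)."""
    mag = rng.uniform(90, 120)
    return mag if rng.random() < 0.5 else -mag

def translate_depth(depth: np.ndarray) -> np.ndarray:
    """Shift the depth map; exposed areas are filled with edge values."""
    dy, dx = sample_translation(), sample_translation()
    return shift(depth, (dy, dx), mode="nearest", order=1)

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

depth_original = rng.uniform(0, 1, size=(512, 512))  # stand-in for d_c
depth_modified = translate_depth(depth_original)     # stand-in for d'_c
# After intervention, compare d'_c with the depth map of the intervened output:
print(rmse(depth_modified, depth_original))
```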
|
2306.05720#24
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 24 |
Figure 4: An example question from Xiezhi-Interdiscipline, which is related to Physics, Chemistry, Electronics Science and Technology, Nuclear Science and Technology.
# 4 Experiments
# 4.1 Setup
Models: We conducted experiments on 47 cutting-edge LLMs; the detailed descriptions of all tested LLMs are listed in Table 3 in the Appendix. Our experiments cover 45 open-source LLMs based on eight different base models: bloom, llama, moss, pythia, gpt-neox, stablelm, chatGLM and falcon. Considering the legal issues, we only show the results of two publicly recognized API-based LLMs, ChatGPT and GPT-4.
More options: All tested LLMs need to choose the best-fit answer from 50 options for each question. Each question is set up with 3 confusing options in addition to the correct answer, and another 46 options are randomly sampled from all options in all questions in Xiezhi. It is worth noting that it is possible to use WordNet, open source synonym databases, or other word construction methods to generate more confusing options. However, our experiments show that the performance of all LLMs declined dramatically when the number of options increased, even when using so many non-confusing options. This achieves our goal of exacerbating the performance gap between LLMs through new experimental settings and also shows that the traditional 4-choice setting has room for improvement.
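A sketch of how such a 50-option item could be assembled is shown below; the deduplication, seeding, and shuffling details are assumptions about a reasonable implementation rather than a description of the released benchmark code.

```python
import random

def build_50_options(correct, confusers, option_pool, total=50, seed=0):
    """Combine the answer, 3 confusing options, and random distractors from the pool."""
    rng = random.Random(seed)
    options = [correct] + list(confusers)
    pool = [o for o in option_pool if o not in options]
    options += rng.sample(pool, total - len(options))
    rng.shuffle(options)
    return options

# Example with the textile question above (pool entries are placeholders).
opts = build_50_options(
    correct="ability to absorb water",
    confusers=["waterproofness", "ability to absorb oil", "grease-proofness"],
    option_pool=[f"distractor option {i}" for i in range(200)],
)
print(len(opts), opts[:5])
```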
|
2306.05783#24
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 24 |
[Figure: a four-quadrant chart of existing works, organized by whether the LLM is tuned ("Tune LLM" vs. "Not tune LLM") and whether a CRM is involved at inference ("Infer with CRM" vs. "Infer w/o CRM"), with each method (e.g., CTR-BERT, UNBERT, TransRec, KAR, PALR, RecFormer, InstructRec, TALLRec, GPTRec) plotted by the size of its LLM]
|
2306.05817#24
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 24 |
What to Evaluate Dataset composition and decisions can give insight into subsequent performance. The language, speech, and imagery included in datasets, as well as decisions made about that data, including filtering and reward modeling, will impact how the model performs for different groups or categories of concepts associated with groups. Generative image models, for example, may output generations of varying quality when producing different concepts, with quality referring to photorealism, aesthetic quality, and conceptual richness [170].
Evaluating model generations across subpopulation languages, accents, and similar topics using the same evaluation criteria as the highest performing language or accent can illustrate areas where there is disparate performance and can help document areas for further model development and mitigation work.
Limitations Similar limitations that lead to disparate system performance contribute to disparate attention to evaluations for different groups. Performance evaluations for similar tasks in non-English languages will vary by the amount of resourcing for a given language. More spoken and digitized languages may have more evaluations than lower-resource languages.
# 4.1.4 Privacy and Data Protection
|
2306.05949#24
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 24 |
Ele. Acc / Op. F1 / Step SR / SR, reported under the Cross-Task, Cross-Website, and Cross-Domain settings:
Classification: Cross-Task 26.8 / - / - / -; Cross-Website 21.6 / - / - / -; Cross-Domain 24.5 / - / - / -
Generation: Cross-Task 20.2 / 52.0 / 17.5 / 0.0; Cross-Website 13.9 / 44.7 / 11.0 / 0.0; Cross-Domain 14.2 / 44.7 / 11.9 / 0.4
MINDACT w/ Flan-T5B: Cross-Task 43.6 / 76.8 / 41.0 / 4.0; Cross-Website 32.1 / 67.6 / 29.5 / 1.7; Cross-Domain 33.9 / 67.3 / 31.6 / …
MINDACT w/ Flan-T5L: Cross-Task 53.4 / 75.7 / 50.3 / 7.1; Cross-Website 39.2 / 67.1 / 35.3 / 1.1; Cross-Domain 39.7 / 67.2 / 37.3 / …
MINDACT w/ Flan-T5XL: Cross-Task 55.1 / 75.7 / 52.0 / 5.2; Cross-Website 42.0 / 65.2 / 38.9 / 5.1; Cross-Domain 42.1 / 66.5 / 39.6 / …
MINDACT w/ GPT-3.5: Cross-Task 20.3 / 56.6 / 17.4 / 0.8; Cross-Website 19.3 / 48.8 / 16.2 / 0.6; Cross-Domain 21.6 / 52.8 / … / …
MINDACT w/ GPT-4†: Cross-Task 41.6 / 60.6 / 36.2 / 2.0; Cross-Website 35.8 / 51.1 / 30.1 / 2.0; Cross-Domain 37.1 / 46.5 / … / …
|
2306.06070#24
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 24 |
7.618 3.862 6.233 1.739 10.635 - 2.631 6.093 5.662 8.29 3.687 7.387 2.09 6.054 - - 1.855 7.075 - - 12.251 10.112 13.142
Table 3: Per-relation breakdown of three classes of facts, categorized by their appearance in the generated paragraphs produced by InstructGPT and FLAN-T5, presented to evaluate the effectiveness of the KL-divergence metric in distinguishing between facts across these classes (larger values indicate less knowledge). Relations in which our metric demonstrates effective differentiation between different fact classes are highlighted in green.
|
2306.06264#24
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 24 |
Figure 1: Using LLMs to predict the compressive strength of concretes. An illustration of the conventional approach for solving this task, i.e., training classical prediction models using ten training data points as tabular data (left). Using the LIFT framework, LLMs can also use tabular data and leverage context information provided in natural language (right). The context can be "fuzzy" design rules often known in chemistry and materials science but hard to incorporate in conventional ML models. Augmented with this context and ten training examples, ICL with LLM leads to a performance that outperforms baselines such as RFs or GPR.
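A LIFT-style prompt for this task can be assembled along the lines of the sketch below, where tabular training rows are verbalized and the natural-language design rules are prepended as context; the feature names, wording, and JSON-like formatting are illustrative assumptions.

```python
# Sketch: verbalize tabular concrete-formulation data into a LIFT-style
# few-shot prompt with optional design-rule context.

def formulation_to_text(row: dict) -> str:
    return ", ".join(f"{k}={v}" for k, v in row.items())

def build_lift_prompt(train_rows, train_labels, query_row, context: str = "") -> str:
    lines = [context.strip()] if context else []
    for row, y in zip(train_rows, train_labels):
        lines.append(f'{{"input": "{formulation_to_text(row)}", "output": "{y} MPa"}}')
    lines.append(f'{{"input": "{formulation_to_text(query_row)}", "output": "')
    return "\n".join(lines)

train = [{"cement_kg": 350, "water_kg": 175, "age_d": 28},
         {"cement_kg": 420, "water_kg": 160, "age_d": 28}]
labels = [38, 52]
query = {"cement_kg": 380, "water_kg": 170, "age_d": 28}
rules = "Context: lower water-to-cement ratios generally give higher compressive strength."
print(build_lift_prompt(train, labels, query, context=rules))
```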
extracted from an LLM, on the other hand, are contextualized on the specific sequence
|
2306.06283#24
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 24 |
Q(o_t, a_t) = Σ_{t'=t}^{T} γ^(t'−t) r_{t'}    (7)
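Read this way, the target for an experience is the discounted sum of the remaining rewards in the episode. A minimal sketch of computing it, assuming the reconstructed form of Eq. 7 above and an explicit discount factor, is:

```python
def full_trajectory_q_target(rewards, t: int, gamma: float = 1.0) -> float:
    """Discounted sum of rewards from step t to the end of the episode.

    Sketch of the full-trajectory expansion used as the Q-value target;
    the default discount factor is an assumption.
    """
    return sum(gamma ** k * r for k, r in enumerate(rewards[t:]))

# Example: rewards collected over a 5-step WebShop episode.
print(full_trajectory_q_target([0.0, 0.0, 0.0, 0.0, 1.0], t=2, gamma=0.9))
```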
WikiHow [Zhang et al., 2023] WikiHow is a task set based on the collaborative wiki app WikiHow4 running on the interaction platform Mobile-Env [Zhang et al., 2023]. The task set contains a number of navigation tasks. The target of the agent is to follow the instructions and navigate to the required page. Intermediate rewards and instructions may be triggered during the episode. We followed Zhang et al. [2023] and evaluated the proposed REMEMBERER on the "canonical subset" comprising 70
# 2 https://openai.com/api/ 3 https://www.amazon.com/ 4 https://www.wikihow.com/Main-Page
Table 1: Results on WebShop. The result of the prior state of the art, ReAct [Yao et al., 2022b], is attained with the public implementation released by the original authors. The RL, IL, and IL+RL results are retrieved directly from Yao et al. [2022a].
Table 2: Results on WikiHow. "Mobile-Env" indicates the prior result from Zhang et al. [2023]. "RMMBR. (A)" denotes the results by directly running the evaluation of REMEMBERER with a human-annotated experience memory.
|
2306.07929#24
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 25 |
Metrics. We define the agreement between two types of judges as the probability that randomly selected (but not identical) individuals of each type agree on a randomly selected question. See more explanation in Appendix D.3. Average win rate is the average of win rates against all other players. These metrics can be computed with or without including tie votes.
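A hedged sketch of how these two metrics could be computed from a table of votes; the data layout and field names below are assumptions for illustration, not the authors' actual schema:

```python
# Sketch of the agreement and average win rate metrics, under an assumed data
# layout: votes[question_id] is a list of (judge_id, choice) pairs.
def agreement(votes_a, votes_b, include_ties=True):
    """P(two randomly drawn, non-identical judges of the two types agree
    on a randomly drawn question)."""
    agree, total = 0, 0
    for q in set(votes_a) & set(votes_b):
        for (judge_a, choice_a) in votes_a[q]:
            for (judge_b, choice_b) in votes_b[q]:
                if judge_a == judge_b:
                    continue  # skip identical individuals
                if not include_ties and "tie" in (choice_a, choice_b):
                    continue
                total += 1
                agree += (choice_a == choice_b)
    return agree / total if total else float("nan")

def average_win_rate(pairwise_wins):
    """pairwise_wins[a][b] = win rate of model a against model b."""
    return {a: sum(row.values()) / len(row) for a, row in pairwise_wins.items()}
```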
# 4.2 High agreement between GPT-4 and humans
We compute agreement on MT-bench data. In Table 5, GPT-4 with both pairwise comparison and single answer grading shows very high agreement with human experts. The agreement under setup S2 (w/o tie) between GPT-4 and humans reaches 85%, which is even higher than the agreement among humans (81%). This means GPT-4's judgments closely align with the majority of humans. We also show that GPT-4's judgments may help humans make better judgments. During our data collection, when a human's choice deviated from GPT-4, we presented GPT-4's judgments to humans and asked whether they were reasonable (details in Appendix C.1). Despite different views, humans deemed GPT-4's judgments reasonable in 75% of cases and were even willing to change their choices in 34% of cases.
|
2306.05685#25
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 25 |
For further insight, we annotated the LLM responses with free-form notes, noting any phenomena that appeared potentially interesting from a pedagogical point of view; 109 of the 150 responses received at least one annotation. We thematically analyzed these notes; the main results are in Section 4.4.
(1) ... identifies and mentions at least one actual issue? (2) ... identifies and mentions all actual issues? (3) ... identifies any non-existent issues? (4) ... includes duplicate or superfluous content? (5) ... includes code? (6) ... includes code that can be considered a model solution? (7) ... includes any automated tests?
3.5 Ethical Considerations
The research was conducted in compliance with the local ethical principles and guidelines. To avoid leaking any personal information to third-party services, we manually vetted the inputs that we fed to the LLMs, both during prompt engineering and during the final generation of the responses.
For each of the seven aspects, each LLM response was manually categorized as either "yes" or "no".
|
2306.05715#25
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 25 |
Salient object intervention results: We conducted our proposed intervention on the test set of our synthetic dataset (371 samples). For the resultant 1855 interventions, the median Dice coefficient is 0.69 between the modified salient object labels and the synthetic labels of intervention outputs. We further compared the modified label d′_b with the synthetic label d_b of the original images, which acts as a baseline. The comparison gave us a median Dice overlap of 0.46 (see Figure 6).
As Figure 7 shows, our interventions successfully repositioned the foreground objects by solely modifying the self-attention activation using the projection learned by probing classifiers. Our results suggest the representation of salient objects has a causal role in the model's output.
Depth intervention results: The median RMSE for 1855 intervention outputs is 0.63, whereas the median RMSE of the null intervention is 1.02 (between d′_c and the original depth d_c). This result confirmed the causal role of depth representations. In a fine-grained intervention experiment (see Appendix E), we created an additional salient object in the middleground of the scene by inserting the object's depth map with increased depth value in the LDM's depth representation.
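The overlap and error metrics reported above can be computed as in the following numpy-based sketch; the array names are illustrative:

```python
# Sketch of the metrics used above: Dice coefficient between two binary masks
# and RMSE between two depth maps.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def rmse(depth_a: np.ndarray, depth_b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((depth_a - depth_b) ** 2)))
```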
# 6 Related Works
Recently, Stable Diffusion 2.0 introduced depth as a condition for its text-to-image 2D diffusion model for shape-preserving synthesis. Its depth-conditional model was fine-tuned with a relative
|
2306.05720#25
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 25 |
Few-Shot Demonstration: Additionally, we aim to test the LLMs' understanding of demonstrations. Therefore, we evaluate the LLMs' capabilities under 0-shot, 1-shot, and 3-shot settings. Although previous research uses a 5-shot setting, our questions have a much larger number of options; taking the maximum input length of each LLM into consideration, we use at most 3 examples in our few-shot learning experiments. All the examples used for demonstration are obtained from Xiezhi's training set and have at least two labels matching the test questions, as in Fig. 5.
Metrics: In this section, we present mainly two experiment results: the overall performance of all LLMs across various benchmarks, and the ranking of the top eight 0-shot LLMs in 12 non-sensitive domain categories of the Xiezhi-Benchmark, together with the scores of top and average practitioners. For the 45 open-source models assessed in our evaluation, we calculated the generative probability of each model choosing every option and then ranked all options accordingly. Due to legal considerations, we only display the results of two publicly recognized API-based LLMs, ChatGPT and GPT-4, which we ask to rank all given options through instructions. To represent all ranking outcomes, we employed the Mean Reciprocal Rank (MRR) as
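A minimal sketch of the Mean Reciprocal Rank computation over ranked options; the data structures are illustrative:

```python
# Sketch of Mean Reciprocal Rank: for each question the model produces a
# ranking of all options (best first), and we score the rank of the gold answer.
def mean_reciprocal_rank(ranked_options, gold_answers):
    """ranked_options[i]: options for question i, best-first;
    gold_answers[i]: the correct option for question i."""
    total = 0.0
    for options, gold in zip(ranked_options, gold_answers):
        rank = options.index(gold) + 1  # 1-based rank of the gold option
        total += 1.0 / rank
    return total / len(gold_answers)

print(mean_reciprocal_rank([["B", "A", "C"]], ["A"]))  # 0.5
```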
|
2306.05783#25
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 25 |
Figure 3: Four-quadrant classification of how to adapt LLM to RS. Each circle in the quadrants denotes one research work, with the corresponding model name attached below the circle. The size of each circle indicates the largest size of LLM leveraged in the research work. The color of each circle indicates the best baseline that the proposed model defeats, as reported in the corresponding paper. For example, the green circle of Chat-REC in quadrant 3 denotes that it utilizes a large language model with size larger than 100B (i.e., ChatGPT) and defeats the MF baseline. Besides, we summarize the general development trajectory with light-colored arrows. Abbreviations: MF is short for matrix factorization; MLP is short for multi-layer perceptron.
|
2306.05817#25
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 25 |
# 4.1.4 Privacy and Data Protection
Examining the ways in which generative AI system providers leverage user data is critical to evaluating their impact. Protecting personal information and personal and group privacy depends largely on training data, training methods, and security measures. The data on which the system was trained or adapted should be consensually and lawfully collected and secured under the rules of the jurisdictions in which the data subjects and the entity collecting the data are based. Moreover, there are strong intellectual property and privacy concerns, with generative models generating copyrighted content [254] and highly sensitive documents [49] or personally identifiable information (PII), such as phone numbers, addresses and private medical records.
Providers should respect the consent and choices of individuals for collecting, processing, and sharing data with external parties, as sensitive data could be inevitably leveraged for downstream harm such as security breaches, privacy violations, and other adversarial attacks. Oftentimes, this might require retroactively retraining a generative AI system, in accordance with policy such as the California Consumer Privacy Act (CCPA) [4].
What to Evaluate Although some evaluations operate as a proxy for a system's ability to generate copyrighted or licensed content found within pretraining data [139], there is great potential for more comprehensive evaluations.
|
2306.05949#25
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06264
| 25 |
relation types where there is a meaningful difference in our metrics across these classes in green. Firstly, it is evident that most of the relations where our metrics are unable to differentiate between the classes involve location or language as their object. Additionally, when comparing appeared with hallucinated facts, we observe that for relations with a location as the object, the model possesses more knowledge about hallucinated facts in comparison to appeared ones. Conversely, for other types of relations, the model demonstrates a higher knowledge level for appeared facts. Moreover, except for a few relations involving location and language as their object, the LLMs exhibit significantly lower knowledge for didn't appear facts when compared to the other two classes.
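The entropy- and KL-divergence-based measurements underlying these comparisons contrast the model's prediction distribution before and after instilling a fact; a minimal sketch, with illustrative candidate distributions:

```python
# Minimal sketch of entropy / KL-divergence style knowledge measurements:
# compare the prediction distribution over candidate objects before and after
# instilling the target fact. The distributions below are illustrative.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def kl_divergence(p, q):
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float((p * np.log((p + 1e-12) / (q + 1e-12))).sum())

before = [0.25, 0.25, 0.25, 0.25]   # model unsure before instillation
after = [0.70, 0.10, 0.10, 0.10]    # sharper after the fact is provided
print(entropy(before), entropy(after), kl_divergence(after, before))
```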
|
2306.06264#25
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 25 |
extracted from an LLM, on the other hand, are contextualized on the specific sequence
(sentence) in which they are used and, therefore, can more effectively capture the contexts of words within a given corpus [55]. Inspired by this, the GlobusLabs team (Zhi Hong, Logan Ward) investigated if similar embeddings could be used to discover hydrogen carrier molecules, which are relevant for energy storage applications. For this, they leverage the ScholarBert model [56] trained on a large corpus of scientific articles collected by the Public.Resource.Org nonprofit organization. For different candidate molecules, they searched for sentences in the Public.Resource.Org corpus and used the average of the embeddings of these sentences as a fingerprint of the molecules. Given those fingerprints, they could rank molecules by how close their fingerprints are to the ones of known hydrogen carrier molecules. Visual inspection indicates that the selected molecules indeed bear similarities to known hydrogen carrier molecules.
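A hedged sketch of this embedding-based screening idea using Hugging Face transformers; the checkpoint name and the example sentences are assumptions for illustration, not the team's exact pipeline:

```python
# Sketch: average the contextual embeddings of sentences mentioning a molecule
# and rank candidates by cosine similarity to known hydrogen carriers.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

name = "globuslabs/ScholarBERT"  # assumed Hugging Face checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def sentence_fingerprint(sentences):
    """Mean-pooled token embeddings, averaged over all sentences."""
    embs = []
    for s in sentences:
        inputs = tok(s, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
        embs.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.mean(embs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

known = sentence_fingerprint(["Toluene is hydrogenated to methylcyclohexane "
                              "as a liquid organic hydrogen carrier."])
candidate = sentence_fingerprint(["Dibenzyltoluene can be reversibly "
                                  "hydrogenated and dehydrogenated."])
print(cosine(candidate, known))  # higher = closer to known carriers
```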
d. Text template paraphrasing
|
2306.06283#25
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 25 |
WebShop (Table 1):
Method      Avg Score   Success Rate
LLM only    0.55        0.29
ReAct       0.66        0.36
RMMBR.      0.68        0.39

WikiHow (Table 2):
Method       Avg Reward   Success Rate
LLM only     2.58         0.90
Mobile-Env   2.50         0.89
RMMBR.       2.63         0.93
RMMBR. (A)   2.56         0.91
Table 3: Results on WebShop with different exemplar combinations (initial experiences for REMEMBERER) and different training sets (for REMEMBERER). Ei denotes the different exemplar combinations, while Si denotes the different training sets. The first line of each method shows the mean scores, and the second line shows the success rates.
Different (Initial) Exemplars: E0 + S0, E1 + S0, E2 + S0; Different Training Sets: E0 + S1.
Method      Metric         E0 + S0   E1 + S0   E2 + S0   E0 + S1   Avg    Std
ReAct       score          0.72      0.65      0.60      -         0.66   0.06
            success rate   0.42      0.35      0.30      -         0.36   0.06
LLM only    score          0.52      0.54      0.59      -         0.55   0.04
            success rate   0.26      0.28      0.32      -         0.29   0.03
RMMBR.      score          0.66      0.71      0.66      0.67      0.68   0.02
            success rate   0.37      0.41      0.37      0.40      0.39   0.02
|
2306.07929#25
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 26 |
The data from Arena shows a similar trend, as illustrated by Table 6. Comparing GPT-4 and other LLM judges, we find they reach a similar non-tie agreement ratio with humans, but the number of non-tied votes from GPT-4 is much larger. This means that GPT-4 is more affirmative and less affected by position bias, while other models also perform well when they give an affirmative answer.
In both tables, GPT-4 with single-answer grading matches both pairwise GPT-4 and human preferences very well. This means GPT-4 has a relatively stable internal rubric. Although it may sometimes perform slightly worse than pairwise comparison and give more tie votes, it is a more scalable method.
We then perform a breakdown analysis by computing agreement on different model pairs and categories. We only include non-tied votes. In Figure 2, we observe the agreement between GPT-4 and human progressively increases in line with the performance disparity of the model pairs (i.e., larger win rate difference), from 70% to nearly 100%. This suggests that GPT-4 aligns with humans better when significant performance differences exist between the models.
|
2306.05685#26
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 26 |
For each of the seven aspects, each LLM response was manually categorized as either "yes" or "no".
3.4.1 Comparing Models. To gain insight into the relative performance of different LLMs, we conducted an initial analysis on a subset of our data. We randomly chose two help requests for each exercise and analyzed the responses created by GPT-3.5 and Codex with English and Finnish prompts. This step thus involved a total of 120 LLM responses (two help requests × fifteen exercises × two
4 RESULTS
4.1 Issues in Help Requests
In 150 help requests, we identified a total of 275 issues, for an average of 1.9 issues per help request. All programs associated with a help request had at least one issue; the maximum was six.
Six main themes emerged. From most to least common, they are: (1) logic errors, present in 108 help requests, 72%; (2) problems with input and output, 51 requests, 34%; (3) syntax errors, 12, 8.0%; (4)
|
2306.05715#26
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 26 |
Original prompt: "Harley-Davidson Switchback 2012: Vivid Black"; intervened outputs show a rightward translation.
Figure 8: A sequence of a moving motorbike created by intervening on the LDMâs internal represen- tations of a salient object. All images are generated using the same prompt, initial latent vector, and random seed.
depth map as an additional input to the LDM. In a similar vein, Zhao et al. [32] suggests using depth estimation as a prior step for better image inpainting. Our work, however, shows that the 2D Stable Diffusion model already encodes the depth dimension even when trained without an explicit depth prior.
The work of Kim et al. [15], performed contemporaneously and independently from this work, touches on some of the same issues but from a different perspective: generating geometrically plausible images. They found a nonlinear depth representation inside a pretrained 2D DDPM using Multi-Layer Perceptrons. Their results also indicated the depth dimension emerges in the early denoising. Our probing experiments, however, suggest that 2D LDM has an even simpler linear depth representation.
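As a hedged sketch of what a linear depth probe on internal activations could look like; how the activations are extracted from the LDM is omitted here, and the shapes, names, and placeholder data are illustrative:

```python
# Hedged sketch of a linear depth probe: fit a linear map from internal
# activations (one feature vector per spatial location) to per-pixel depth.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# activations: (num_pixels, num_channels); depth: (num_pixels,)
rng = np.random.default_rng(0)
activations = rng.normal(size=(4096, 320))   # placeholder activations
depth = rng.normal(size=4096)                # placeholder (inverse) depth map

probe = Ridge(alpha=1.0).fit(activations, depth)
pred = probe.predict(activations)
print("probe RMSE:", np.sqrt(mean_squared_error(depth, pred)))
```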
|
2306.05720#26
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 26 |
Please select the correct answer for the following single-choice questions.
Which of the following natural disasters has contributed to the development of meteorology? ( ) 1) Earthquakes 2) Floods 3) Tornadoes 4) Droughts 5) E-sports player ... 50) Relevance
Answer: 3
Related Subject: Science, Atmospheric Science
Which of the following will happen to an object when there is no external force acting on it? ( ) 1) always at rest 2) always in uniform linear motion 3) undergoes accelerated motion 4) random motion 5) Gardenia ... 50) Mainframe
Answer: 3
Related Subject: Science, Physics
Which meteorological instrument is used to measure atmospheric pressure? ( ) 1) Wave velocity and medium depth 2) Turbulence intensity and
|
2306.05783#26
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 26 |
brought by large-scale parameters, LLM starts to not only deepen the usage in the feature encoder and scoring/ranking function stage, but also move beyond and extend its role into other stages of the recommendation pipeline. For instance, in the feature engineering stage, we could instruct the LLM to generate reliable auxiliary features and synthetic training data [Liu et al., 2023c]. In this way, open-world knowledge from the LLM is injected into the recommendation model, which is usually a closed-domain system. Not to mention, participating in the pipeline control further requires sufficient logical reasoning and tool utilization capabilities, which are possessed by LLM.
In summary, we believe that, as the abilities of large language models are further explored, they will form gradually deeper couplings and bindings with multiple stages of the recommendation pipeline. Even further, we might need to customize large language models specifically tailored to the unique requirements of recommender systems [Lin and Zhang, 2023].
3 How to Adapt LLM
To answer the "HOW" question about adapting LLM to RS, we carry out two orthogonal taxonomy criteria to distinguish the adaptation of LLMs to RS, resulting in a four-quadrant classification shown in Figure 3:
|
2306.05817#26
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 26 |
Memorization of training examples remains a critical security and privacy concern [49, 50]. Addressing this issue may yield improvements in performance for various downstream applications [172]. Additionally, generative AI systems providers may maintain the right to authorize access of user data to external third-parties, such as human annotation vendors. For sharing data to third-parties, data providers should ensure that only lawful data is shared, consent for sharing is obtained from data subjects, and that shared data does not contain any private, personally identifiable, or otherwise sensitive data.
Limitations Generative AI systems are harder to evaluate when they lack clear documentation, systems for obtaining consent (e.g., opt-out mechanisms), and appropriate technical and process controls to secure user data; such gaps can threaten the privacy and security of individuals. Thus, robustly evaluating privacy risks will often require full process and governance audits that go beyond evaluating artifacts in isolation. Rules for leveraging end-user data for training purposes are unclear, where user prompts, geolocation data, and similar data can be used to improve a system. The immense size of training datasets [118] makes scrutiny increasingly difficult.
# 4.1.5 Financial Costs
|
2306.05949#26
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 26 |
# 4 Experiments
# 4.1 Experimental Setup
The diversity of MIND2WEB provides a unique opportunity to evaluate an agent's generalizability at different levels. We seek to understand how well an agent can generalize across domains, websites, and tasks: TestCross-Domain, for which we hold out two top-level domains, Information and Service, with 912 tasks from 73 websites. Here, the model is expected to generalize to an entirely new domain without having seen any websites or tasks associated with that domain during training. TestCross-Website, with 10 websites from each remaining top-level domain, containing 177 tasks. In this setting, the model is never exposed to the test websites during training. However, it has been trained on websites from the same domain and possibly with similar tasks. This setup allows us to assess an agent's ability to adapt to entirely new websites, yet within familiar domains and task contexts. TestCross-Task, where we randomly split 20% of the remaining data, regardless of domains and websites, resulting in 252 tasks from 69 websites. In this setting, the model has been exposed to webpages from the same website during training and has likely encountered similar tasks. The rest of the data is used for training, which contains 1,009 tasks from 73 websites.
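The splits described above could be constructed from task metadata as in the following minimal sketch; the field names (domain, website) and the held-out sets are assumptions for illustration, not the released split definitions:

```python
# Minimal sketch of building cross-domain, cross-website, and cross-task splits
# from a list of task records with "domain" and "website" fields.
import random

def make_splits(tasks, heldout_domains, heldout_websites, task_ratio=0.2, seed=0):
    cross_domain = [t for t in tasks if t["domain"] in heldout_domains]
    rest = [t for t in tasks if t["domain"] not in heldout_domains]
    cross_website = [t for t in rest if t["website"] in heldout_websites]
    rest = [t for t in rest if t["website"] not in heldout_websites]
    random.Random(seed).shuffle(rest)
    n_test = int(len(rest) * task_ratio)
    cross_task, train = rest[:n_test], rest[n_test:]
    return train, cross_task, cross_website, cross_domain
```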
# 4.2 Data Preprocessing and Evaluation
|
2306.06070#26
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 26 |
alignment and probability of hallucination for InstructGPT and FLAN-T5. The relations admin territory, country, country of citizenship, occupation, work location, place of death, position played, headquarters location, location of formation, employer, member of show a lower requirement for explicit knowledge instillation to appear in the generated output in InstructGPT. On the other hand, for FLAN-T5, the relations field of work, position held, headquarters location, country of origin, original language of work, owned by exhibit a similar characteristic. Moreover, certain relations demonstrate higher resistance to hallucination in InstructGPT and FLAN-T5. Specifically, the relations headquarters location, location of formation, employer, member of exhibit a greater resistance to hallucination in InstructGPT, while the relations official language,
Further analysis of the results reveals interest- ing observation in relation to the need for factual
sister city, headquarters location, instance of, developer, owned by demonstrate a higher resis- tance to hallucination in FLAN-T5.
|
2306.06264#26
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 26 |
d. Text template paraphrasing
In the LIFT framework used in the examples above, the data are embedded in so-called prompt templates that can have a form like What is the <property name> of <representation>?, where the texts in chevrons are placeholders that are replaced with actual values such as "solubility" and "2-acetyloxybenzoic acid". In the low-data regime, data points are "wasted" by the model needing to learn the syntax of the prompt templates. In the big-data regime, in contrast, one might worry that the model loses some of its general language modeling abilities by always dealing with the same template. This
|
2306.06283#26
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 26 |
tasks. Specifically, the LLM is input with the task description, the screen representation, and the step instruction. The screen is represented as an HTML element sequence following Zhang et al. [2023]. Additionally, the last 5 performed actions along with the last reward are given to the LLM as the procedure feedback. As for the output, the LLM is prompted to print the HTML representation of the operated element as the extra information. This is expected to force the LLM to discover the relation between the element id and the corresponding element. The task similarity f_g designed for WikiHow is computed from the step instructions. It is noticed that the instructions follow certain patterns; thus, we inspect the instructions and categorize them into six types. A similarity lookup table is then designed according to the instruction types. The details are given in the supplementary material. The observation similarity f_o is computed based on the length of the longest common sequence of the HTML elements in the screen representation:
f_o(sc_1, sc_2) = lcs(sc_1, sc_2) / max{len(sc_1), len(sc_2)}.   (8)
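A minimal sketch of Eq. (8), treating each screen as a list of HTML element strings and reading "longest common sequence" as a standard longest common subsequence; the toy element strings below are purely illustrative.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two element sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def observation_similarity(sc1, sc2):
    """f_o from Eq. (8): LCS length normalized by the longer screen representation."""
    if not sc1 and not sc2:
        return 1.0
    return lcs_length(sc1, sc2) / max(len(sc1), len(sc2))

# Illustrative usage with two toy screen representations.
screen_a = ['<button id="0">search</button>', '<input id="1">', '<a id="2">cart</a>']
screen_b = ['<button id="0">search</button>', '<a id="2">cart</a>']
print(observation_similarity(screen_a, screen_b))  # 2/3
```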
The full-trajectory expanding is adopted, as most of the tasks will end in 5 steps as well.
# 4.2 Results on WebShop
|
2306.07929#26
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 27 |
[Figure 3 plot residue. Panels: (a) All votes, first turn; (b) Non-tied votes, first turn; (c) All votes, second turn; (d) Non-tied votes, second turn. Legend: GPT-4 Judge, GPT-3.5 Judge, Claude Judge, Human, Human (first turn).]
Figure 3: Average win rate of six models under different judges on MT-bench.
|
2306.05685#27
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 27 |
very incomplete solutions where the student was far from a working program, 8, 5.3%; (5) problems with semicolons, 4, 2.7%; and (6) failing to meet hidden requirements in automated tests, 3, 2.0%. Below, we elaborate on the two most common themes.
4.1.1 Logic errors. The vast majority of logic errors fell under one of three sub-themes:
• Conditionals (in 37 requests). E.g., missing conditional, wrong expression in conditional, mistakes in nesting.
• Iteration (30). E.g., missing iteration, out of bounds errors in loop, incorrect termination.
• Arithmetic (23). E.g., incrementing a counter incorrectly, summing instead of counting, treating zero as positive.
Other, less common logic errors included misusing function parameters, printing in a function when expected to return a value, misplacing logic, and placing variables outside of functions (leading, e.g., to a sum variable getting incremented over multiple function calls).
4.1.2 Input and output. For input/output errors, too, we identified three dominant sub-themes:
|
2306.05715#27
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 27 |
Kim et al. [15] approximated the pseudo ground-truth of depth using predictions from the aggregated representations of multiple feature blocks. They compared the pseudo depth label against a weaker prediction inferred from a single feature block. This difference is applied as guidance to improve the geometric consistency of output images. From an interpretability perspective, we seek evidence for a causal relation between internal representations and depth. We demonstrate that the geometry of images can be altered by directly modifying the LDM's layer-wise internal model of depth, suggesting that a causal role exists. Moreover, our intervention on depth representations can control the scene geometry of output images with respect to a predefined label (see Appendix E and Figure 8). This is not achievable using the guidance methods in [15].
Baranchuk et al. [2] extrapolated the intermediate activations of a pretrained diffusion model for semantic segmentation. Their high segmentation performance reveals that the diffusion model encodes the rich semantic representations during training for generative tasks. Our work shows that the internal representation of LDM also captures the geometric properties of its synthesized images.
|
2306.05720#27
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05817
| 27 |
• Tune/Not Tune LLM denotes whether we will tune the LLM during the training phase. The definition of tuning the LLM includes both full finetuning and other parameter-efficient finetuning methods (e.g., LoRA [Hu et al., 2021]).
• Infer with/without CRM denotes whether we will involve conventional recommendation models (CRM) during the inference phase. Note that there are works that only use CRM to serve as independent pre-ranking functions to generate the candidate item set for the LLM. We categorize them as "infer without CRM", since the CRM is independent of the LLM and could be decoupled from the final recommendation task. In Figure 3, we use different marker sizes to indicate the size of the large language model the research works adapt, and use different colors to indicate the best baseline they have defeated in terms of item recommendation. Thus, a few works are not included since they do not provide traditional recommendation evaluation, e.g., RecLLM [Friedman et al., 2023] only investigates the system architecture design to involve the LLM for RS pipeline control without experimental evaluation.
|
2306.05817#27
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 27 |
# 4.1.5 Financial Costs
The estimated financial costs of training, testing, and deploying generative AI systems can restrict the groups of people able to afford developing and interacting with these systems. Concretely: sourcing training data, computing infrastructure for training and testing, and labor hours contribute to the overall financial costs. These metrics are not standard to release for any system, but can be estimated for a specific category, such as the cost to train and host a model.
What to Evaluate: Researchers and developers can estimate infrastructure, hardware costs, and hours of labor from researchers, developers, and crowdworkers. Popular existing estimates focus on compute using low-cost or standard pricing per instance-hour [137]. Research on lowering training costs also shows tracking of compute cost by day as the model trains and scales [253]. Frameworks break down cost per system component: data cost, compute cost, and the technical architecture of the system itself [163]. Other variables used to calculate cost include the size of the dataset, model size, and training volume [218].
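As a rough illustration of the instance-hour style of estimate mentioned above, the sketch below combines hypothetical compute, labor, and data figures; every number and rate is a placeholder, not a measurement from any real system.

```python
def estimate_training_cost(gpu_count, hours, price_per_gpu_hour,
                           labor_hours, hourly_wage, data_acquisition_cost):
    """Back-of-the-envelope estimate: compute + labor + data (all inputs hypothetical)."""
    compute = gpu_count * hours * price_per_gpu_hour
    labor = labor_hours * hourly_wage
    return {"compute": compute, "labor": labor, "data": data_acquisition_cost,
            "total": compute + labor + data_acquisition_cost}

# Placeholder figures purely for illustration.
print(estimate_training_cost(gpu_count=256, hours=720, price_per_gpu_hour=2.0,
                             labor_hours=2000, hourly_wage=60.0,
                             data_acquisition_cost=50_000))
```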
|
2306.05949#27
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 27 |
# 4.2 Data Preprocessing and Evaluation
We apply simple heuristics to clean the raw HTML documents, keeping only elements that are visible and carry substantial semantic meaning, as determined by their attributes, textual content, and neighboring elements. This effectively reduces the average number of elements from 1,135 to 580, while still maintaining an overall recall of 94.7% for the target element in the training data.
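The exact heuristics are not spelled out here, but a minimal sketch of attribute- and visibility-based pruning could look like the following; the tag list, attribute list, and style checks are illustrative guesses rather than the authors' actual filter.

```python
from bs4 import BeautifulSoup

NON_VISIBLE_TAGS = ["script", "style", "head", "meta", "link", "noscript"]
SEMANTIC_ATTRS = ("id", "class", "name", "aria-label", "title", "alt", "placeholder")

def clean_html(raw_html: str) -> BeautifulSoup:
    """Keep only elements that are plausibly visible and semantically meaningful."""
    soup = BeautifulSoup(raw_html, "html.parser")
    # Drop non-visible machinery outright.
    for tag in soup.find_all(NON_VISIBLE_TAGS):
        tag.decompose()
    # Unwrap elements that are hidden or carry neither text nor informative attributes.
    for tag in soup.find_all(True):
        style = (tag.get("style") or "").replace(" ", "").lower()
        hidden = "display:none" in style or "visibility:hidden" in style
        has_text = bool(tag.get_text(strip=True))
        has_semantic_attr = any(tag.get(a) for a in SEMANTIC_ATTRS)
        if hidden or not (has_text or has_semantic_attr):
            tag.unwrap()  # keep children, drop the uninformative wrapper
    return soup
```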
For evaluation, we first calculate Element Accuracy that compares the selected element with all acceptable elements, and Operation F1 that calculates token-level F1 score for the predicted operation. This is the same as accuracy for Click, but considers the correctness of the input value for Type and Select Option. Each step of the task is evaluated independently with the ground truth action history provided. We then define Step Success Rate and Success Rate (for the whole task). A step is regarded as successful only if both the selected element and the predicted operation are correct. A task is regarded successful only if all steps have succeeded. It is therefore a stringent metric. For step-wise metrics, we report macro average across tasks.
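A compact sketch of these step- and task-level metrics, assuming each predicted or gold action is a dict with hypothetical 'element', 'operation', 'value', and 'acceptable_elements' fields.

```python
def operation_f1(pred_tokens, gold_tokens):
    """Token-level F1 between predicted and gold operation strings (e.g., 'TYPE new york')."""
    common, gold_left = 0, list(gold_tokens)
    for tok in pred_tokens:
        if tok in gold_left:
            gold_left.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    p, r = common / len(pred_tokens), common / len(gold_tokens)
    return 2 * p * r / (p + r)

def step_success(pred, gold):
    """A step succeeds only if the element is acceptable and the operation matches."""
    return (pred["element"] in gold["acceptable_elements"]
            and pred["operation"] == gold["operation"]
            and pred.get("value", "") == gold.get("value", ""))

def task_metrics(pred_steps, gold_steps):
    """Step success rate for one task, plus whole-task success (all steps correct)."""
    step_ok = [step_success(p, g) for p, g in zip(pred_steps, gold_steps)]
    return {"step_success_rate": sum(step_ok) / len(step_ok),
            "task_success": all(step_ok)}
```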
# 4.3 Results
|
2306.06070#27
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 27 |
sister city, headquarters location, instance of, developer, owned by demonstrate a higher resistance to hallucination in FLAN-T5.
Lastly, upon examining samples where the injected information did not result in accurate predictions, we can identify relations where explicit instillation alone is insufficient. Since we cannot fine-tune these models (at least for InstructGPT) and compare implicit with explicit directly, we consider explicit knowledge instillation to have failed in cases where the label does not appear within the top 5 predicted outputs. Similar to the previous analysis, approximately 80% of the mispredicted samples even after explicit instillation were associated with location or language queries for both InstructGPT and FLAN-T5. Moreover, these relations primarily consist of the ones highlighted in red in Table 3.
# 6 Related Works
Large language models (LLMs) have emerged as the central focus of recent advancements and applications in NLP. Given their extensive repository of factual knowledge, effectively harnessing these models for downstream tasks necessitates accurate measurements of their inherited knowledge.
|
2306.06264#27
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 27 |
naturally raises the question if one can augment the dataset to mitigate these problems, thereby leveraging again, similar to ∆-ML, a technique that has found use in conventional ML previously. However, text-based data are challenging to augment due to their discrete nature and the fact that the augmented text still needs to be syntactically and semantically valid. Interestingly, as Michael Pieler (OpenBioML.org and Stability.AI) shows (and as has been explored by Dai et al. [57]), it turns out that LLMs can also be used to address this problem by simply prompting an LLM (e.g., GPT-4 or Anthropic's Claude) to paraphrase a prompt template (see SI section ID).
This approach will allow us to automatically create new paraphrased high-quality prompts for LIFT-based training very efficiently, to augment the dataset and reduce the risk of overfitting to a specific template. The latter might be particularly important if one still wants to retain the general language abilities of the LLMs after finetuning.
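A minimal sketch of this augmentation idea; `call_llm` stands in for whatever chat-completion client is available and is purely hypothetical, as is the exact wording of the paraphrasing instruction.

```python
import random

def paraphrase_template(template: str, call_llm, n: int = 5) -> list:
    """Ask an LLM for paraphrases of a LIFT prompt template, keeping the placeholders intact."""
    prompt = (
        "Paraphrase the following question template in {} different ways. "
        "Keep the placeholders <property name> and <representation> unchanged.\n\n{}"
    ).format(n, template)
    reply = call_llm(prompt)  # hypothetical: returns the model's text response
    candidates = [line.strip("-• ").strip() for line in reply.splitlines() if line.strip()]
    # Only keep paraphrases that still contain both placeholders.
    return [p for p in candidates if "<property name>" in p and "<representation>" in p]

def fill(template: str, prop: str, representation: str) -> str:
    """Instantiate a template with concrete values, as in the LIFT prompts above."""
    return template.replace("<property name>", prop).replace("<representation>", representation)

# Illustrative usage (call_llm must be supplied by the user):
# variants = paraphrase_template("What is the <property name> of <representation>?", call_llm)
# prompt = fill(random.choice(variants), "solubility", "2-acetyloxybenzoic acid")
```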
e. Genetic algorithm using an LLM
|
2306.06283#27
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 27 |
The full-trajectory expanding is adopted, as most of the tasks will end in 5 steps as well.
# 4.2 Results on WebShop
REMEMBERER is applied to WebShop with 2-shot in-context learning. The experience memory is initialized with four annotated experiences of the decision step from one trajectory. The agent is trained for 3 epochs on a training set containing 10 different tasks outside the test sets used by Yao et al. [2022b] and Shinn et al. [2023]. To control the total expense and achieve bootstrapping, the tasks that succeeded in the first epoch are excluded from training in the following two epochs. Trajectories exceeding 15 steps are considered failed, as most of the tasks can end in 5 steps. The main results are shown in Table 1. We used the public ReAct [Yao et al., 2022b] implementation released by the authors and ran it with text-davinci-003 instead of the text-davinci-002 used in Yao et al. [2022b]. The ReAct run shares the same trajectory as the exemplar used for REMEMBERER. The "LLM only"
|
2306.07929#27
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 28 |
Figure 3: Average win rate of six models under different judges on MT-bench.
Table 5: Agreement between two types of judges on MT-bench. "G4-Pair" and "G4-Single" denote GPT-4 with pairwise comparison and single-answer grading respectively. The single-answer grading can be converted into pairwise comparison results for calculating the agreement. We report two setups: "S1" includes non-tie, tie, and inconsistent (due to position bias) votes and counts inconsistent as tie; "S2" only includes non-tie votes. The agreement between two random judges under each setup is denoted as "R=". The top value in each cell is the agreement, and the bottom gray value is #votes.
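A sketch of how single-answer grades can be reduced to pairwise verdicts and compared with a pairwise judge under the two setups; the data layout (dicts of per-question verdicts) is a hypothetical convenience, and inconsistent pairwise votes are assumed to have already been mapped to ties for setup S1.

```python
def grades_to_pair(score_a, score_b):
    """Convert single-answer grades for models A and B into a pairwise verdict."""
    if score_a > score_b:
        return "A"
    if score_b > score_a:
        return "B"
    return "tie"

def agreement(votes_pair, votes_single, include_ties=True):
    """Fraction of questions on which the two judging schemes give the same verdict.

    `votes_pair` / `votes_single` map question id -> 'A' | 'B' | 'tie'.
    include_ties=True roughly corresponds to setup S1; False drops tied votes (setup S2).
    """
    keys = votes_pair.keys() & votes_single.keys()
    if not include_ties:
        keys = {k for k in keys if votes_pair[k] != "tie" and votes_single[k] != "tie"}
    agree = sum(votes_pair[k] == votes_single[k] for k in keys)
    return agree / len(keys) if keys else 0.0
```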
|
2306.05685#28
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 28 |
4.1.2 Input and output. For input/output errors, too, we identified three dominant sub-themes:
• Formatting of output (25 requests). E.g., completely incorrect formatting, missing information in output, minor extra content in output. This category also includes single-character mistakes in writing and punctuation.
• Unwanted printouts (24). E.g., debug information printed, completely unexpected output.
• Missing printouts (10). E.g., failure to produce the specified output when dealing with a corner case.
Side Note: Exercise Specificity of the Issues. Different exercises bring about different issues. We explored this briefly, focusing on the most common themes of logic and I/O. As expected, there was considerable variation between exercises. Typically, a single sub-theme was prevalent in a particular exercise (e.g., conditionals in the Verification of input exercise; formatting issues in First and last name), but there were some exercises with a varied mix of issues.
|
2306.05715#28
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 28 |
Poole et al. [20] and Wang et al. [30] utilized features learned by 2D diffusion models to optimize a Neural Radiance Field network for 3D synthesis. In contrast, our study centered on finding and interpreting the 3D representations inside the LDM. Instead of extending 2D models to a 3D generative task, we take the direct approach of using a linear probing classifier to uncover the depth features learned by the self-attention modules.
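As a rough illustration of what such probing looks like in practice, the sketch below fits a linear classifier on cached intermediate activations to predict a binary salient-object mask; the array shapes, the random placeholder data, and the way activations would be obtained (hooked LDM layers) are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_salient_object_probe(activations, masks):
    """Fit a linear probe on per-pixel activations.

    activations: float array of shape (num_images, H, W, C) -- cached self-attention features
    masks:       binary array of shape (num_images, H, W)   -- salient-object labels
    """
    X = activations.reshape(-1, activations.shape[-1])
    y = masks.reshape(-1)
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X, y)
    return probe, probe.score(X, y)  # training accuracy of the linear probe

# Illustrative shapes only; real activations would come from hooked LDM layers.
acts = np.random.randn(4, 64, 64, 320).astype(np.float32)
labels = (np.random.rand(4, 64, 64) > 0.5).astype(int)
probe, acc = fit_salient_object_probe(acts, labels)
```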
# 7 Conclusion
Our experiments provide evidence that the Stable Diffusion model, although trained solely on two-dimensional images, contains an internal linear representation related to scene geometry. Probing uncovers a salient object / background distinction as well as information related to relative depth. These representations emerge in the early denoising stage. Furthermore, interventional experiments support a causal link between the internal representation and the final image produced by the model. These results add nuance to ongoing debates about whether generative models can learn more than just "surface" statistics.
Our experiments also suggest a number of avenues for future research. A natural extension is to look for representations of other scene attributes, such as lighting or texture. Indeed, just as certain language models are said to "rediscover the NLP pipeline" [26], perhaps LDMs recapitulate the
standard steps in computer graphics. More generally, one might look for models of semantic aspects of a scene, such as sentiment.
# 8 Acknowledgements
|
2306.05720#28
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 28 |
Figure 5: An example of a 3-shot evaluation with cross-subject problems. The red text is the autocompleted response from the model, while the preceding text is the inputted prompt. English translations are shown below the corresponding Chinese text for better readability.
the metric, which calculates the reciprocal rank of the correct answer. An MRR closer to 1 indicates that the model is more capable of placing the correct answer at the front of the ranking, while a value closer to 0 suggests that the LLM tends to place the correct answer at the bottom of the ranking.
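A minimal sketch of the MRR computation described above, where each entry is the 1-based rank at which the model placed the correct option for one question.

```python
def mean_reciprocal_rank(correct_ranks):
    """MRR over questions; each entry is the 1-based rank of the correct answer."""
    return sum(1.0 / r for r in correct_ranks) / len(correct_ranks)

# e.g., the correct option ranked 1st, 3rd, and 2nd on three questions:
print(mean_reciprocal_rank([1, 3, 2]))  # (1 + 1/3 + 1/2) / 3 ≈ 0.611
```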
# 4.2 Results of LLMs
The overall performance towards Xiezhi and baselines of all LLMs are listed in Tab. 1. The ranking of all LLMs in each domain category is listed in Tab. 2. And here we give the most intriguing observation in the experiments.
Note: (1) The results of GPT-4 and ChatGPT are acquired through instructions, so their real capabilities may be higher than the scores listed in the tables. (2) Tab. 2 displays the optimal outcomes, which are the combined performance of Xiezhi-Specialty and Xiezhi-Interdiscipline in both Chinese and English Xiezhi. (3) At the time of writing, M3KE had only released its training dataset, so we used this dataset for the experiments, which allowed us to run only 0-shot experimental setups.
|
2306.05783#28
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05949
| 28 |
Limitations: Only accounting for compute cost overlooks the many variables that contribute to a system's training. Costs in pre- and post-deployment, which depend on how a system is released [227], are also difficult to track, as cost variables may not be directly tied to a system alone. Human labor and hidden costs similarly may be indirect. Costs also change over time and with a changing economy for all components. Finally, it is necessary to keep track of how the costs and economics of components change over time.
# 4.1.6 Environmental Costs and Carbon Emissions
The computing power used in training, testing, and deploying generative AI systems, especially large-scale systems, consumes substantial energy resources and thereby contributes to the global climate crisis through greenhouse gas emissions [233]. While the environmental costs of compute have become an area of active research, with workshops dedicated to the question, the environmental costs of manufacturing hardware remain under-explored. One potential reason for this discrepancy may be that estimating compute and energy costs, while complex, is a comparably transparent task compared to tracing emissions throughout the manufacturing process. However, recent estimates suggest that the manufacturing process has substantial environmental costs [96]. Overall, information about emissions is scarce and there is no consensus on what constitutes the total carbon footprint of AI systems.
|
2306.05949#28
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 28 |
# 4.3 Results
Candidate Generation. We fine-tune DeBERTa [16] as the small LM for candidate generation. As candidate generation requires high efficiency, we use the base version DeBERTaB with 86M parameters. Overall, it achieves 88.9% / 85.3% / 85.7% Recall@50 on TestCross-Task, TestCross-Website and TestCross-Domain, respectively. We use its top-50 ranking results as the candidate pool for all subsequent experiments.
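A sketch of the Recall@50 figure reported above: a step counts as recalled if any acceptable target element appears among the top-k ranked candidates, averaged over all steps in a split; the data layout is a hypothetical convenience.

```python
def recall_at_k(ranked_candidates, acceptable_elements, k=50):
    """1.0 if any ground-truth element is in the top-k candidates, else 0.0 (single step)."""
    return float(any(c in acceptable_elements for c in ranked_candidates[:k]))

def corpus_recall_at_k(steps, k=50):
    """Average step-level recall over a split; each step is (ranked_candidates, acceptable_set)."""
    scores = [recall_at_k(cands, gold, k) for cands, gold in steps]
    return sum(scores) / len(scores)
```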
Action Prediction. We mainly compare against two baselines in Table 2. The first directly uses the candidate generation model (DeBERTa) for element selection, which is similar to existing work [14, 35] that combines an encoder with classification heads. However, such a design cannot
[Figure 6 plot residue. Legend: Travel, Shopping, Entertainment, Info., Service.]
Figure 6: Step success rate per website grouped by the three splits: TestCross-Task, TestCross-Website and TestCross-Domain from left to right. Here we only show websites with more than three test tasks.
|
2306.06070#28
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 28 |
Measuring factual knowledge in LLMs The significance of ensuring factual correctness in LLMs has received considerable attention due to its critical role in determining the applicability of language models. Previous studies (Petroni et al., 2019; AlKhamissi et al., 2022) have explored the quantification of factual knowledge in LLMs by assessing their understanding of facts in knowledge bases using ranking metrics. In a different approach, Varshney et al. (2022) incorporate question answering as a means to measure the uncertainty of LLMs regarding specific facts. Furthermore, recent works (Kadavath et al., 2022; Lin et al., 2022) have adopted self-evaluation techniques by querying LLMs themselves to assess their certainty regarding factual knowledge.
Factual knowledge in in-context learning Given the remarkable success of in-context learning based LLMs (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023) across various NLP applications, factual knowledge serves as an invaluable resource for evaluating and guiding the generated output of these models. Cohen et al. (2023) employed prompting to crawl internal
|
2306.06264#28
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 28 |
e. Genetic algorithm using an LLM
Genetic algorithms are popular methods for generating new structures; they are evolutionary algorithms in which building blocks (e.g., fragments of SMILES strings) are iteratively crossed over, mutated, and subjected to other genetic operations to evolve structures with better performance (such as catalysts with higher conversion) [58]. The efficiency of such a genetic algorithm often depends on how well the genes and genetic operations match the underlying chemistry. For example, if the algorithm replaces atom by atom, it may take several generations before a complete functional group is replaced.
One might hypothesize that LLMs can make the evolution process more efficient, e.g., by using an LLM to handle the reproduction. One might expect that inductive biases in the LLM help create recombined molecules which are more chemically viable, maintaining the motifs of the two parent molecules better than a random operation.
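A minimal sketch of what delegating reproduction to an LLM could look like (the prompt wording and the `call_llm` helper are assumptions, not the hackathon team's code):

```python
import random

def llm_crossover(parent_a: str, parent_b: str, call_llm) -> str:
    """Ask a chat model to fragment two parent SMILES and recombine one fragment
    from each into a single, chemically plausible child molecule."""
    prompt = (
        "Fragment each of the following molecules at a rotatable bond and recombine "
        "one fragment from each parent into a single valid SMILES string.\n"
        f"Parent A: {parent_a}\nParent B: {parent_b}\n"
        "Answer with the child SMILES only."
    )
    return call_llm(prompt).strip()

def evolve_one_generation(population, fitness, call_llm, n_children=10):
    """Select the fitter half of the population and breed children via LLM crossover."""
    parents = sorted(population, key=fitness, reverse=True)[: max(2, len(population) // 2)]
    return [llm_crossover(*random.sample(parents, 2), call_llm) for _ in range(n_children)]
```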
|
2306.06283#28
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 28 |
Table 4: Comparison of the number of annotated trajectories and steps of REMEMBERER and the IL baseline. The number of steps of the training set of IL is estimated according to the average human trajectory length on the test split as 11.3 in Yao et al. [2022a].
Table 5: Comparison of the number of the tasks in the training set and the updating steps of RE- MEMBERER with the IL and RL baselines. The number of the updating steps of IL is estimated from 10 epochs on 1,012 trajectories with an av- erage trajectory length of 11.3.
Table 4:
Method        #Trajectories   #Steps
IL            1,012           ~11,436
REMEMBERER    1               4

Table 5:
Method        #Tasks    #Steps
RL            10,587    100,000
IL            -         ~114,356
REMEMBERER    10        74
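A quick check of the estimates quoted in both captions, using the stated average trajectory length of 11.3:

\[
1{,}012 \times 11.3 \approx 11{,}436 \ \text{steps (Table 4, IL)}, \qquad
10 \times 1{,}012 \times 11.3 \approx 114{,}356 \ \text{updating steps (Table 5, IL)}.
\]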
Table 6: Results on WikiHow with different exemplar combinations (initial experiences for REMEMBERER) and different training sets (for REMEMBERER).
|
2306.07929#28
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 29 |
(a) First Turn

Judge        Setup S1 (R = 33%)          Setup S2 (R = 50%)
             G4-Single     Human         G4-Single     Human
G4-Pair      70% (1138)    66% (1343)    97% (662)     85% (859)
G4-Single    -             60% (1280)    -             85% (739)
Human        -             63% (721)     -             81% (479)

(b) Second Turn

Judge        Setup S1 (R = 33%)          Setup S2 (R = 50%)
             G4-Single     Human         G4-Single     Human
G4-Pair      70% (1161)    66% (1325)    95% (727)     85% (864)
G4-Single    -             59% (1285)    -             84% (776)
Human        -             67% (707)     -             82% (474)
Table 6: Agreement between two types of judges on Chatbot Arena. "G4-S" denotes GPT-4 with single-answer grading. "G4", "G3.5" and "C" denote GPT-4, GPT-3.5, and Claude with pairwise comparison, respectively. "H" denotes human. The rest of the table follows the same format as Table 5.
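For reference, a minimal sketch of the agreement statistic tabulated above (illustrative only; judging from the stated random baselines of R = 33% and R = 50%, setups S1 and S2 appear to differ in whether tie votes are included):

```python
def agreement(votes_a, votes_b):
    """Fraction of battles on which two judges return the same verdict ('A', 'B', or 'tie')."""
    assert len(votes_a) == len(votes_b) and votes_a
    return sum(a == b for a, b in zip(votes_a, votes_b)) / len(votes_a)

print(agreement(["A", "B", "tie", "A"], ["A", "B", "A", "A"]))  # 0.75
```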
|
2306.05685#29
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 29 |
4.2 Performance of Different LLMs As described in Section 3.4.1, our comparison of the LLMs is based on four LLM-language pairings, with 30 LLM responses analyzed for each pairing, and seven aspects examined for each response. Table 2 summarizes the findings.
The table shows a clear difference in performance between GPT-3.5 and Codex. GPT-3.5 identified and mentioned at least one actual issue in 90% of the cases in both languages. Codex succeeded 70% of the time in English, with Finnish performance far behind at 33%. In terms of identifying all of the issues present, GPT-3.5 succeeded approximately 55% of the time in both languages, whereas Codex's performance was around a mere 15%.
Non-existing issues (false positives) were fairly common in all LLM-language pairings. They were the rarest (23% of help requests) when GPT-3.5 was prompted in Finnish. Codex was also prone to producing superfluous content.
|
2306.05715#29
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 29 |
standard steps in computer graphics. More generally, one might look for models of semantic aspects of a scene, such as sentiment.
# 8 Acknowledgements
We would like to thank Catherine Yeh, Shivam Raval, and Aoyu Wu for reading and sharing their feedback on this paper. We also wish to thank Kenneth Li and David Bau who contributed their thoughts to an early draft of this work.
# References
[1] Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644, 2016.
[2] Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, and Artem Babenko. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126, 2021.
[3] Yonatan Belinkov. Probing classifiers: Promises, shortcomings, and alternatives. 2021.
[4] Emily M Bender and Alexander Koller. Climbing towards nlu: On meaning, form, and understanding in the age of data. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 5185â5198, 2020.
|
2306.05720#29
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 29 |
Observation 1: Best Performance = Pretraining + Finetuning Examining the overall results presented in Tab. 2, it is observed that all top-10 open-source LLMs are built upon either the llama or bloom frameworks. This suggests that obtaining the most exceptional performance is more likely through these two base models, due to their substantial potential and superior performance in domain text comprehension. Moreover, it is noted that all open-source models within the top-10 overall performance in Tab. 2 are finetuned models, which implies that only finetuned LLMs can attain the highest performance. As a result, both effective pretraining and fine-tuning processes are crucial components in attaining optimal performance in domain text comprehension.
Observation 2: Most LLMs are incapable of performing stable few-shot learning from demonstrations As shown in the "Performance-Average" in Tab. 1, the average performance of LLMs reveals that a greater number of examples results in better model performance. However, it is not an
|
2306.05783#29
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 29 |
3.1 Tune LLM; Infer with CRM (Quadrant 1) Existing works in quadrant 1 mainly focus on applying relatively smaller pretrained language models (e.g., BERT) to the field of news recommendation [Zhang et al., 2021a; Wu et al., 2021; Liu et al., 2022b; Yu et al., 2022b] and e-commercial advertisement [Muhamed et al., 2021; Li et al., 2023e]. As discussed in Section 2.5, the primary roles of these small-scale language models are only to serve as feature encoders for semantic representation enhancement. Consequently, a conventional recommendation model (CRM) is required to make the final recommendation, with generated textual embeddings as auxiliary inputs. Additionally, the small model size makes it affordable to fully finetune the language model during the training phase. TransRec [Fu et al., 2023a] proposes layerwise adaptor tuning over BERT and ViT models to ensure the training efficiency and multi-modality enhanced representations. As shown in Figure 3, since CRM is involved and LLM is
|
2306.05817#29
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 29 |
What to Evaluate The existing efforts in evaluating the energy consumed and carbon emitted by AI systems have pursued two main directions: the creation of tools to evaluate these impacts and empirical studies of one or several models. For instance, [132] proposes both a web-based and programmatic approach for quantifying the carbon emissions of models, while [104] proposes an experiment-impact-tracker for energy and carbon usage reporting in research. Other popular work includes conversion based on power consumed in the U.S. [233] and examining environmental impact across compute-related impacts, immediate impacts of applying AI, and system-level impacts [120].
Existing metrics for reporting range from energy, compute, and runtime to carbon emissions. CPU-, GPU-, and TPU-related information such as hardware details, package power draw, GPU performance state, and CPU frequency, as well as memory usage, are additional metrics. In addition to metrics, consideration of the region/location of the energy grid where the experiment is being run is important given significant differences in carbon emissions between energy grids, and informs the move to run experiments in "clean regions". Tools such as CodeCarbon can be used to estimate power consumption [61].
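As an example of programmatic tracking, a minimal sketch using CodeCarbon's EmissionsTracker (the project name and the placeholder training loop are illustrative):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="example-training-run")
tracker.start()
try:
    for step in range(1_000):
        pass  # a real train_step(batch) would go here
finally:
    # stop() returns the estimated emissions (kg CO2-eq) based on measured power draw and grid intensity
    emissions_kg = tracker.stop()

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```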
|
2306.05949#29
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 29 |
benefit from many recent LMs that use an encoder-decoder or decoder-only architecture. It cannot predict actions and the element selection performance is also not competitive, as shown in Table 2. We use Flan-T5 [10] as the backbone for the generation model. The autoregressive generation formulation (Figure 5 top) does not perform well, and even underperforms the classification baseline on element selection despite the larger model size (220M for Flan-T5B). We observe a substantial gain with MINDACT using the multi-choice QA formulation. The best model achieves 52.0% step success rate under Cross-Task setting, and 38.9% / 39.6% when generalizing to unseen websites and domains. However, the overall task success rate remains low for all models, as the agent often commits at least one error step in most cases.
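To illustrate the multi-choice QA formulation described here, a rough sketch of how ranked candidate elements could be rendered as options for the seq-to-seq model (the prompt wording, option grouping, and function name are assumptions; the released implementation differs in detail):

```python
def build_multichoice_prompt(task, action_history, candidate_snippets):
    """Lay out candidate DOM elements as lettered options, plus a 'none' option."""
    options = [f"{chr(ord('A') + i)}. {s}" for i, s in enumerate(candidate_snippets)]
    options.append(f"{chr(ord('A') + len(candidate_snippets))}. None of the above")
    history = "; ".join(action_history) if action_history else "None"
    return (
        f"Task: {task}\n"
        f"Previous actions: {history}\n"
        "Select the element to interact with next (or 'None of the above'):\n"
        + "\n".join(options)
    )

print(build_multichoice_prompt(
    "Find a queen-size pillow protector",
    ["[combobox] Size -> SELECT: Queen"],
    ["<span> Pillow Protector </span>", "<button> Add to cart </button>"],
))
```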
|
2306.06070#29
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 29 |
knowledge within these models, thereby constructing knowledge bases. Authors in Peng et al. (2023) augmented factual knowledge into LLMs to enhance the accuracy of their output. Furthermore, recent studies (Goodrich et al., 2019; Shuster et al., 2021; Ji et al., 2023) utilize factual knowledge to detect and mitigate hallucination in the generated output of LLMs.
# 7 Conclusion
In this paper, we introduced novel metrics for measuring factual knowledge in large language models (LLMs) compensating for the shortcomings of existing ranking-based methods. Our results revealed that our proposed metrics outperformed traditional ranking-based approaches, providing more accurate assessments of factual knowledge in LLMs. Additionally, we explored the distinction between implicit and explicit knowledge instillation in LLMs. Through comprehensive experiments, we observed cases where explicit knowledge instillation alone was inadequate, highlighting the need for fine-tuning. These cases primarily revolve around location and language-related queries, emphasizing the intricate nature of these types of facts and the challenges they pose for explicit instillation. This finding contributes to our understanding of the interplay between implicit and explicit knowledge in LLMs.
|
2306.06264#29
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 29 |
The team from McGill University (Benjamin Weiser, Jerome Genzling, Nicolas Gastellu, Sylvester Zhang, Tao Liu, Alexander Al-Feghali, Nicolas Moitessier) set out the first steps to test this hypothesis (Figure 2). In initial experiments, they found that GPT-3.5, without any finetuning, can fragment molecules provided as SMILES at rotatable bonds with a success rate of 70%. This indicates that GPT-3.5 understands SMILES strings and aspects of their relation to the chemical structures they represent. Subsequently, they asked the LLMs to fragment and recombine two given molecules. The LLM frequently created new combined molecules with fragments of each species which were reasonable chemical structures more often than a random SMILES string combining
[Figure 2: panels labeled "Fragment", "Reproduce", and "Optimize"; an example LLM suggestion reads: "Some modifications that could potentially improve the scores include adding or removing halogens, modifying the length or branching of the carbon chain, and adding or removing functional groups such as [...], -COC-, -C=C- and [...]. Additionally, modifying the stereochemistry of the molecule could also have an impact on the score."; a plot tracks Tanimoto similarity over generations.]
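For context, such fragmentations can be checked against a cheminformatics toolkit; a small RDKit-based sketch (not the team's code) that cuts a molecule at its first rotatable bond:

```python
from rdkit import Chem

# Widely used SMARTS for a rotatable bond: a single, non-ring bond between two non-terminal atoms.
ROTATABLE = Chem.MolFromSmarts("[!$(*#*)&!D1]-&!@[!$(*#*)&!D1]")

def fragment_at_first_rotatable_bond(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    matches = mol.GetSubstructMatches(ROTATABLE)
    if not matches:
        return [smiles]  # nothing to cut
    i, j = matches[0]
    bond_idx = mol.GetBondBetweenAtoms(i, j).GetIdx()
    fragmented = Chem.FragmentOnBonds(mol, [bond_idx], addDummies=True)
    return Chem.MolToSmiles(fragmented).split(".")

print(fragment_at_first_rotatable_bond("CCOc1ccccc1"))  # ethoxybenzene -> two fragments with dummy atoms
```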
|
2306.06283#29
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 29 |
Table 6: Results on WikiHow with different exemplar combinations (initial experiences for REMEMBERER) and different training sets (for REMEMBERER).
            Different (Initial) Exemplars            Different Training Sets
            E0 + S0     E1 + S0     E2 + S0     E0 + S1     Avg         Std
LLM only    2.56 0.90   2.60 0.90   2.59 0.89   -   -       2.58 0.90   0.02 0.01
RMMBR.      2.63 0.93   2.63 0.91   2.59 0.90   2.66 0.97   2.63 0.93   0.03 0.03
baseline indicates a single LLM with 2 fixed exemplars sampled from the initial experiences of REMEMBERER. The average performance of REMEMBERER exceeds the baseline by a large extent and surpasses the prior state of the art, ReAct, as well. This proves the effectiveness of augmenting the LLM with an external evolvable experience memory. The proposed REMEMBERER also outperforms the RL, IL (imitation learning), and IL+RL baselines on both metrics.
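To make the RLEM idea concrete, a minimal sketch of an external experience memory updated with one-step Q-learning (class and method names are hypothetical; the actual system retrieves exemplars by task and observation similarity rather than by raw Q-value):

```python
from collections import defaultdict

class ExperienceMemory:
    """External memory of (task, observation, action) -> Q estimates; the LLM's
    parameters stay frozen, only this table evolves."""

    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)
        self.alpha, self.gamma = alpha, gamma

    def update(self, task, obs, action, reward, next_obs, next_actions):
        """One-step Q-learning update from an observed transition."""
        key = (task, obs, action)
        best_next = max((self.q[(task, next_obs, a)] for a in next_actions), default=0.0)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])

    def retrieve(self, k=2):
        """Return k stored experiences to insert into the LLM prompt as exemplars."""
        return sorted(self.q.items(), key=lambda kv: kv[1], reverse=True)[:k]
```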
|
2306.07929#29
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |